Statistical machine-learning techniques have been used in security
applications for over 20 years, starting with spam filtering, fraud
engines and intrusion detection. In the process we have become
familiar with attacks from poisoning to polymorphism, and issues from
redlining to snake oil. The neural network revolution has recently
brought many people into ML research who are unfamiliar with this
history, so it should surprise nobody that many new products
are insecure. In this talk I will describe some recent research projects
in which we examine whether we should try to make machine-vision systems
robust against adversarial samples, or fragile enough to detect them
when they appear; whether adversarial samples have constructive uses;
how service-denial attacks can be mounted on neural-network models; the
need to sanity-check outputs; and the need to sanitise inputs. We need
to shift the emphasis from the design of "secure" ML classifiers to the
design of secure systems that use ML classifiers as components.