Statistical machine-learning techniques have been used in security
applications for over 20 years, starting with spam filters, fraud
engines and intrusion-detection systems. In the process we have become
familiar with attacks from poisoning to polymorphism, and issues from
redlining to snake oil. The neural-network revolution has recently
brought many people into ML research who are unfamiliar with this
history, so it should surprise nobody that many new products
are insecure. In this talk I will describe some recent research projects
where we examine whether we should try to make machine-vision systems
robust against adversarial samples, or fragile enough to detect them
when they appear; whether adversarial samples
have constructive uses; how we can mount service-denial attacks on
neural-network models; the need to sanity-check outputs; and the
need to sanitise inputs. We need to shift the emphasis from the design
of "secure" ML classifiers to the design of secure
systems that use ML classifiers as components.