Deep learning algorithms are known to be vulnerable to targeted attacks. For example, a driverless car may misinterpret a roadside "Stop" sign as a speed limit sign when a minimal amount of graffiti is added.
I will briefly discuss the ongoing arms race between attack and defence strategies in the design of robust deep learning tools. I will then take a step back and consider a related higher-level question: under realistic assumptions, do successful attacks always exist with high probability?
Professor Des Higham, School of Mathematics
The University of Edinburgh