Availability Attacks against Neural Networks
New research on using specially crafted inputs to slow down machine-learning neural network systems:
Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief—from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.
So far, our most spectacular results are against NLP systems. By feeding them confusing inputs, we can slow them down by a factor of over 100. There are already examples in the real world where people pause or stumble when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks and make them operate at their worst-case performance.
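The paper frames finding sponge examples as an optimization problem: search for inputs that maximize the victim model's measured energy or latency, for instance with a genetic algorithm. The sketch below is a minimal, hedged illustration of what a black-box latency-maximizing search could look like; it is not the authors' code, and `model` stands in for any text-in, prediction-out callable.

```python
import random
import string
import time


def inference_latency(model, text, trials=3):
    """Average wall-clock time of a forward pass on one input."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        model(text)  # placeholder: any callable that runs the victim model
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


def sponge_search(model, seed_text, generations=50, pool_size=20):
    """Black-box genetic search for inputs that maximize inference latency."""
    pool = [seed_text] * pool_size
    for _ in range(generations):
        # Rank candidates by how long the model takes on each of them.
        ranked = sorted(pool, key=lambda t: inference_latency(model, t),
                        reverse=True)
        survivors = ranked[: pool_size // 2]
        # Mutate survivors by injecting random characters; unusual or
        # out-of-vocabulary tokens tend to inflate the work an NLP
        # pipeline has to do.
        children = []
        for parent in survivors:
            chars = list(parent)
            chars[random.randrange(len(chars))] = random.choice(string.printable)
            children.append("".join(chars))
        pool = survivors + children
    return max(pool, key=lambda t: inference_latency(model, t))
```

In a sketch like this, the attacker needs no access to the model's weights, only the ability to time queries, which is what makes the attack practical against deployed services.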
The paper.
scot • June 10, 2020 8:31 AM
I used the Tesseract OCR engine on a project years ago looking for text on blueprints. If I just sent the entire image to the OCR engine, it could get lost for hours in things like shaded sections, trying to find text where there was none. I had to break the image down into groups of connected pixels and then filter those objects based on size, density, and spatial frequency to find blocks of likely text, and pass just those pixels into the OCR engine. Neural networks do some amazing things, but how they do it is opaque, and they tend to be very brittle when you push the boundaries of their training set.
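A minimal sketch of the kind of connected-component filtering scot describes, assuming OpenCV and pytesseract; the size and density thresholds are illustrative, and the spatial-frequency test he mentions is omitted here.

```python
import cv2
import pytesseract


def ocr_text_regions(image_path, min_area=50, max_area=5000,
                     min_density=0.2, max_density=0.9):
    """Find connected-pixel groups that look like text and OCR only those."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Binarize so dark ink becomes white foreground on a black background.
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)

    results = []
    for i in range(1, n_labels):  # label 0 is the background
        x, y, w, h, area = stats[i]
        density = area / float(w * h)
        # Keep only blobs whose size and fill density look like glyphs,
        # skipping large shaded regions and hairline drawing strokes.
        if min_area <= area <= max_area and min_density <= density <= max_density:
            roi = img[y:y + h, x:x + w]
            results.append((x, y, pytesseract.image_to_string(roi)))
    return results
```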