Entries Tagged "machine learning"


Split-Second Phantom Images Fool Autopilots

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when their cameras see spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected into a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

[…]

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.
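The block-selection idea is the most algorithmic part of the attack, and it is easy to sketch in rough form. The code below is not the researchers' method; it is a minimal illustration that scores fixed-size blocks of a frame by local variance and picks the flattest one, on the assumption that flat, low-detail regions (sky, plain backgrounds) are the ones a human viewer is least likely to scrutinize. The block size and the variance score are invented for the example.

```python
# Hypothetical sketch: pick a low-salience block of a video frame where a
# phantom sign could be overlaid. This is NOT the paper's algorithm, just a
# simple local-variance heuristic to illustrate the idea.
import numpy as np

def least_interesting_block(frame: np.ndarray, block: int = 64) -> tuple:
    """Return (row, col) of the top-left corner of the flattest block.

    frame: H x W grayscale image as a float array.
    """
    h, w = frame.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = frame[r:r + block, c:c + block]
            # Low variance roughly means visually "uninteresting" (flat sky, plain wall).
            score = patch.var()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_frame = rng.random((480, 640))
    fake_frame[100:200, 300:400] = 0.5          # a perfectly flat region
    print(least_interesting_block(fake_frame))  # lands inside the flat region
```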

The paper:

Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
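The countermeasure's "committee of experts" idea is simple to sketch in outline. The fragment below is not the GhostBusters code; it assumes four stand-in expert functions (light, context, surface, depth), each returning a probability that a detected object is real, and combines them with a weighted average and a threshold. The weights, the threshold, and the dummy expert outputs are all invented for illustration; the real experts are trained convolutional networks.

```python
# Rough sketch of a "committee of experts" decision stage. Each expert is a
# stand-in callable here; in the paper they are lightweight CNNs judging the
# object's light, context, surface, and depth.
import numpy as np

def committee_verdict(crop: np.ndarray, experts: dict, weights: dict,
                      threshold: float = 0.5) -> bool:
    """Return True if the weighted committee judges the object to be real."""
    scores = {name: expert(crop) for name, expert in experts.items()}
    combined = sum(weights[n] * scores[n] for n in scores) / sum(weights.values())
    return combined >= threshold

if __name__ == "__main__":
    # Dummy experts with fixed outputs, purely for illustration.
    experts = {
        "light":   lambda x: 0.2,   # projected phantoms glow unnaturally
        "context": lambda x: 0.3,   # a road sign inside a billboard ad is odd
        "surface": lambda x: 0.4,   # no physical sign texture
        "depth":   lambda x: 0.1,   # a phantom is depthless
    }
    weights = {"light": 1.0, "context": 1.0, "surface": 1.0, "depth": 1.0}
    crop = np.zeros((64, 64, 3))    # a cropped detection from the camera
    print(committee_verdict(crop, experts, weights))  # False: flag as phantom
```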

Posted on October 19, 2020 at 6:28 AM

Adversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming with ML researchers increasingly targeting commercial ML systems such as those used in Facebook, Tesla, Microsoft, IBM, Google to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Medium post on the paper. News article, which uses our graphic without attribution.

Posted on July 23, 2020 at 6:03 AM

Fawkes: Digital Image Cloaking

Fawkes is a system for manipulating digital images so that they aren’t recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g., a photo taken in public), they will fail.
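Fawkes derives its cloak by optimizing each image so that its face-recognition features drift toward a different identity, which requires a trained feature extractor. The toy sketch below skips that optimization entirely and only illustrates the bounded, near-invisible perturbation idea; the epsilon bound and the random stand-in perturbation are assumptions for illustration, not the project's algorithm.

```python
# Toy illustration only: the real Fawkes cloak is computed by an optimization
# against a face-recognition feature extractor. This sketch just shows how a
# small, bounded perturbation leaves the image visually unchanged.
import numpy as np

def apply_bounded_cloak(image: np.ndarray, perturbation: np.ndarray,
                        epsilon: float = 3.0) -> np.ndarray:
    """Add a perturbation clipped to +/- epsilon on the 0-255 pixel scale."""
    delta = np.clip(perturbation, -epsilon, epsilon)
    cloaked = np.clip(image.astype(np.float32) + delta, 0, 255)
    return cloaked.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    photo = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
    noise = rng.normal(0, 1, size=photo.shape)   # stand-in for an optimized cloak
    cloaked = apply_bounded_cloak(photo, noise)
    # No pixel moves by more than epsilon, so the change is invisible.
    print(np.abs(cloaked.astype(int) - photo.astype(int)).max())
```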

Research paper.

EDITED TO ADD (8/3): Kashmir Hill checks it out, and it’s got problems.

Another article.

Posted on July 22, 2020 at 9:12 AM

Identifying a Person Based on a Photo, LinkedIn and Etsy Profiles, and Other Internet Bread Crumbs

Interesting story of how the police can identify someone by following the evidence chain from website to website.

According to filings in Blumenthal’s case, FBI agents had little more to go on when they started their investigation than the news helicopter footage of the woman setting the police car ablaze as it was broadcast live May 30.

It showed the woman, in flame-retardant gloves, grabbing a burning piece of a police barricade that had already been used to set one squad car on fire and tossing it into the police SUV parked nearby. Within seconds, that car was also engulfed in flames.

Investigators discovered other images depicting the same scene on Instagram and the video sharing website Vimeo. Those allowed agents to zoom in and identify a stylized tattoo of a peace sign on the woman’s right forearm.

Scouring other images – including a cache of roughly 500 photos of the Philly protest shared by an amateur photographer – agents found shots of a woman with the same tattoo that gave a clear depiction of the slogan on her T-shirt.

[…]

That shirt, agents said, was found to have been sold only in one location: a shop on Etsy, the online marketplace for crafters, purveyors of custom-made clothing and jewelry, and other collectibles….

The top review on her page, dated just six days before the protest, was from a user identifying herself as “Xx Mv,” who listed her location as Philadelphia and her username as “alleycatlore.”

A Google search of that handle led agents to an account on Poshmark, the mobile fashion marketplace, with a user handle “lore-elisabeth.” And subsequent searches for that name turned up Blumenthal’s LinkedIn profile, where she identifies herself as a graduate of William Penn Charter School and several yoga and massage therapy training centers.

From there, they located Blumenthal’s Jenkintown massage studio and its website, which featured videos demonstrating her at work. On her forearm, agents discovered, was the same distinctive tattoo that investigators first identified on the arsonist in the original TV video.

The obvious moral isn’t a new one: don’t have a distinctive tattoo. But more interesting is how different pieces of evidence can be strung together in order to identify someone. This particular chain was put together manually, but expect machine learning techniques to be able to do this sort of thing automatically — and for organizations like the NSA to implement them on a broad scale.

Another article did a more detailed analysis, and concludes that the Etsy review was the linchpin.

Note to commenters: political commentary on the protesters or protests will be deleted. There are many other forums on the Internet to discuss that.

Posted on June 22, 2020 at 7:35 AM

Availability Attacks against Neural Networks

New research on using specially crafted inputs to slow down machine-learning neural network systems:

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief — from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate on their worst case performance.
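The published attack searches for these inputs with genetic algorithms and gradient-based methods against real models. The sketch below only illustrates the black-box idea with a toy "NLP pipeline" whose per-word cost is much higher for out-of-vocabulary gibberish; the vocabulary, the timing loop, and the random search are all invented for illustration.

```python
# Hedged sketch of the black-box idea behind sponge examples: search for an
# input that maximizes processing time. The toy "model" below is a stand-in
# whose fallback path does quadratic work per unknown word, loosely mirroring
# why NLP pipelines with subword fallbacks are a good target.
import random
import string
import time

VOCAB = {"the", "cat", "sat", "on", "mat"}

def toy_nlp_model(text: str) -> int:
    """Pretend pipeline: known words cost one step; unknown words trigger a
    character-merge loop that costs quadratic work in the word length."""
    work = 0
    for word in text.split():
        if word in VOCAB:
            work += 1
        else:
            pieces = list(word)
            while len(pieces) > 1:
                pieces = [pieces[0] + pieces[1]] + pieces[2:]
                work += len(pieces)
    return work

def measure_latency(text: str, repeats: int = 100) -> float:
    """Wall-clock seconds to run the toy model `repeats` times."""
    start = time.perf_counter()
    for _ in range(repeats):
        toy_nlp_model(text)
    return time.perf_counter() - start

def random_sponge_search(n_words: int = 20, iters: int = 20) -> str:
    """Black-box random search: keep whichever candidate input runs slowest."""
    best_text, best_time = "", 0.0
    for _ in range(iters):
        candidate = " ".join(
            "".join(random.choices(string.ascii_lowercase, k=12))
            for _ in range(n_words)
        )
        elapsed = measure_latency(candidate)
        if elapsed > best_time:
            best_text, best_time = candidate, elapsed
    return best_text

if __name__ == "__main__":
    normal = "the cat sat on the mat " * 3
    sponge = random_sponge_search()
    print("normal input:", measure_latency(normal))
    print("sponge input:", measure_latency(sponge))  # noticeably slower
```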

The paper.

Posted on June 10, 2020 at 6:31 AM

Fooling NLP Systems Through Word Swapping

MIT researchers have built a system that fools natural-language processing systems by swapping words with synonyms:

The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence “The characters, cast in impossibly contrived situations, are totally estranged from reality” to “The characters, cast in impossibly engineered circumstances, are fully estranged from reality” makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
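The word-importance step is straightforward to sketch. The code below is not the released TextFooler implementation, which uses counter-fitted word embeddings and a sentence-similarity check to pick natural replacements; it simply ranks words by how much deleting each one changes a stand-in classifier's score, then swaps the most important ones using a hand-supplied synonym table. Both the scorer and the synonym table are invented for illustration.

```python
# Simplified sketch of an importance-then-substitute attack. A trivial
# keyword scorer stands in for the real target model (e.g., BERT).
def positive_score(text: str) -> float:
    """Stand-in classifier: fraction of words the 'model' treats as decisive."""
    cues = {"totally", "estranged", "impossibly", "contrived"}
    words = text.lower().split()
    return sum(w in cues for w in words) / max(len(words), 1)

def rank_words_by_importance(text: str):
    """Importance of word i = drop in score when word i is removed."""
    words = text.split()
    base = positive_score(text)
    ranked = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        ranked.append((base - positive_score(reduced), i, words[i]))
    return sorted(ranked, reverse=True)

def attack(text: str, synonyms: dict, max_swaps: int = 2) -> str:
    """Swap the most important words that have a listed synonym."""
    words = text.split()
    swaps = 0
    for _, i, word in rank_words_by_importance(text):
        if swaps >= max_swaps:
            break
        if word.lower() in synonyms:
            words[i] = synonyms[word.lower()]
            swaps += 1
    return " ".join(words)

if __name__ == "__main__":
    sentence = "The characters are totally estranged from reality"
    subs = {"totally": "fully", "estranged": "removed"}   # invented synonym table
    print(attack(sentence, subs))
```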

The results of this adversarial machine learning attack are impressive:

For example, Google’s powerful BERT neural net was worse by a factor of five to seven at identifying whether reviews on Yelp were positive or negative.

The paper:

Abstract: Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models. It is helpful to evaluate or even improve the robustness of these models by exposing the maliciously crafted adversarial examples. In this paper, we present TextFooler, a simple but strong baseline to generate natural adversarial text. By applying it to two fundamental natural language tasks, text classification and textual entailment, we successfully attacked three target models, including the powerful pre-trained BERT, and the widely used convolutional and recurrent neural networks. We demonstrate the advantages of this framework in three ways: (1) effective — it outperforms state-of-the-art attacks in terms of success rate and perturbation rate, (2) utility-preserving — it preserves semantic content and grammaticality, and remains correctly classified by humans, and (3) efficient — it generates adversarial text with computational complexity linear to the text length.

EDITED TO ADD: This post has been translated into Spanish.

Posted on April 28, 2020 at 10:38 AM

Vulnerability Finding Using Machine Learning

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high-priority security bugs 97 percent of the time.
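As a rough illustration of a text classifier over bug reports, and assuming scikit-learn is available, the sketch below fits a TF-IDF plus logistic-regression pipeline on a handful of invented bug titles. Neither the features nor the model necessarily match what Microsoft built, and none of the examples or labels come from Microsoft.

```python
# Minimal sketch of a title-only security/non-security bug classifier,
# assuming scikit-learn. The training examples are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Buffer overflow in parser when header exceeds 4KB",   # security
    "SQL injection possible via unsanitized search box",   # security
    "Credentials logged in plaintext to debug output",     # security
    "Button misaligned on settings page in dark mode",     # non-security
    "Typo in onboarding email template",                   # non-security
    "Report export is slow for large date ranges",         # non-security
]
labels = [1, 1, 1, 0, 0, 0]   # 1 = security bug, 0 = non-security bug

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

new_bugs = [
    "Heap overflow when parsing a crafted font file",
    "Dropdown menu flickers on hover",
]
for title, pred in zip(new_bugs, model.predict(new_bugs)):
    print("security" if pred == 1 else "non-security", "->", title)
```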

News article.

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

Posted on April 20, 2020 at 6:22 AM

Deep Learning to Find Malicious Email Attachments

Google presented its system of using deep-learning techniques to identify malicious email attachments:

At the RSA security conference in San Francisco on Tuesday, Google’s security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It’s challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.

[…]

The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros — the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.
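Google's scanner is a deep model over parsed document structure, but the kinds of red flags mentioned above can be illustrated with a few crude byte-level heuristics. In the sketch below, the marker strings, the entropy threshold, and the "two or more flags" rule are all invented; it is a stand-in for the idea, not a description of the production system.

```python
# Toy sketch of red-flag feature extraction for email attachments. All
# markers and thresholds are invented for illustration.
import math
from collections import Counter

AUTO_EXEC_MARKERS = [b"AutoOpen", b"Auto_Open", b"Document_Open", b"Shell", b"powershell"]

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; high values suggest packing or obfuscation."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def attachment_features(data: bytes) -> dict:
    return {
        "macro_container": b"vbaProject" in data or b"Macros" in data,
        "auto_exec_marker": any(m in data for m in AUTO_EXEC_MARKERS),
        "high_entropy": byte_entropy(data) > 7.2,
        "embedded_executable": b"MZ" in data[4:],   # crude check for an embedded PE file
    }

def looks_suspicious(data: bytes) -> bool:
    # Crude stand-in for a learned model: two or more red flags.
    return sum(attachment_features(data).values()) >= 2

if __name__ == "__main__":
    benign = b"PK\x03\x04 plain quarterly report text " * 50
    shady = b"PK\x03\x04 vbaProject.bin AutoOpen Shell(cmd) " + bytes(range(256)) * 40
    print(looks_suspicious(benign), looks_suspicious(shady))   # False True
```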

This is the sort of thing that’s pretty well optimized for machine-learning techniques.

Posted on February 28, 2020 at 11:57 AM

