Attacking Driverless Cars with Projected Images

Interesting research -- "Phantom Attacks Against Advanced Driving Assistance Systems":

Abstract: The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception regarding the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker's identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today's most advanced ADAS and autopilot technologies: Mobileye 630 PRO and the Tesla Model X, HW 2.5; our experiments show that when presented with various phantoms, a car's ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object's context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers' opportunities to apply phantom attacks but won't eliminate them.

The paper will be presented at CyberTech at the end of the month.

Posted on February 3, 2020 at 6:24 AM

Friday Squid Blogging: The Pterosaur Ate Squid

New research: "Pterosaurs ate soft-bodied cephalopods (Coleoidea)." News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Posted on January 31, 2020 at 3:58 PM

NSA Security Awareness Posters

From a FOIA request, over a hundred old NSA security awareness posters. Here are the BBC's favorites. Here are Motherboard's favorites.

I have a related personal story. Back in 1993, during the first Crypto Wars, I and a handful of other academic cryptographers visited the NSA for some meeting or another. These sorts of security awareness posters were everywhere, but there was one I especially liked -- and I asked for a copy. I have no idea who, but someone at the NSA mailed it to me. It's currently framed and on my wall.

I'll bet that the NSA didn't get permission from Jay Ward Productions.

Tell me your favorite in the comments.

Posted on January 31, 2020 at 1:36 PM

U.S. Department of Interior Grounding All Drones

The Department of Interior is grounding all non-emergency drones due to security concerns:

The order comes amid a spate of warnings and bans at multiple government agencies, including the Department of Defense, about possible vulnerabilities in Chinese-made drone systems that could be allowing Beijing to conduct espionage. The Army banned the use of Chinese-made DJI drones three years ago following warnings from the Navy about "highly vulnerable" drone systems.

One memo drafted by the Navy & Marine Corps Small Tactical Unmanned Aircraft Systems Program Manager has warned "images, video and flight records could be uploaded to unsecured servers in other countries via live streaming." The Navy has also warned adversaries may view video and metadata from drone systems even though the air vehicle is encrypted. The Department of Homeland Security previously warned the private sector their data may be pilfered off if they use commercial drone systems made in China.

I'm actually not that worried about this risk. Data moving across the Internet is obvious -- it's too easy for a country that tries this to get caught. I am much more worried about remote kill switches in the equipment.

Posted on January 31, 2020 at 6:46 AM

Collating Hacked Data Sets

Two Harvard undergraduates completed a project where they went out on the dark web and found a bunch of stolen datasets. Then they correlated all the information, and combined it with additional, publicly available, information. No surprise: the result was much more detailed and personal.

"What we were able to do is alarming because we can now find vulnerabilities in people's online presence very quickly," Metropolitansky said. "For instance, if I can aggregate all the leaked credentials associated with you in one place, then I can see the passwords and usernames that you use over and over again."

Of the 96,000 passwords contained in the dataset the students used, only 26,000 were unique.

"We also showed that a cyber criminal doesn't have to have a specific victim in mind. They can now search for victims who meet a certain set of criteria," Metropolitansky said.

For example, in less than 10 seconds she produced a dataset with more than 1,000 people who have high net worth, are married, have children, and also have a username or password on a cheating website. Another query pulled up a list of senior-level politicians, revealing the credit scores, phone numbers, and addresses of three U.S. senators, three U.S. representatives, the mayor of Washington, D.C., and a Cabinet member.

"Hopefully, this serves as a wake-up call that leaks are much more dangerous than we think they are," Metropolitansky said. "We're two college students. If someone really wanted to do some damage, I'm sure they could use these same techniques to do something horrible."

That's about right.

And you can be sure that the world's major intelligence organizations have already done all of this.
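To make the mechanism concrete, here's a toy sketch in Python of the kind of correlation the students describe. Every record, email address, and field name here is invented for illustration; real breach data is messier and vastly larger.

```python
# Illustrative sketch: correlating leaked credential dumps with public data.
# All records and field names are invented; this only shows the technique.

leaked_credentials = [
    {"email": "alice@example.com", "password": "hunter2", "source": "breach_a"},
    {"email": "alice@example.com", "password": "hunter2", "source": "breach_b"},
    {"email": "bob@example.com", "password": "p@ss1", "source": "breach_a"},
]

public_records = [
    {"email": "alice@example.com", "net_worth": "high", "married": True},
    {"email": "bob@example.com", "net_worth": "low", "married": False},
]

# Step 1: aggregate every leaked credential associated with a person.
by_person = {}
for cred in leaked_credentials:
    by_person.setdefault(cred["email"], []).append(cred)

# Step 2: spot password reuse (the 96,000-vs-26,000 effect in miniature).
reused = {
    email: creds
    for email, creds in by_person.items()
    if len({c["password"] for c in creds}) < len(creds)
}

# Step 3: instead of picking a victim first, search for anyone who
# matches a set of criteria and also appears in a breach.
targets = [
    rec["email"]
    for rec in public_records
    if rec["net_worth"] == "high" and rec["married"] and rec["email"] in by_person
]

print(sorted(reused))  # alice reused the same password across two breaches
print(targets)
```

The point of the sketch is that the "attack" is just a join: once the datasets sit in one place, victim selection is an ordinary database query.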

Posted on January 30, 2020 at 8:39 AM

Customer Tracking at Ralphs Grocery Store

To comply with California's new data privacy law, companies that collect information on consumers and users are forced to be more transparent about it. Sometimes the results are creepy. Here's an article about Ralphs, a California supermarket chain owned by Kroger:

...the form proceeds to state that, as part of signing up for a rewards card, Ralphs "may collect" information such as "your level of education, type of employment, information about your health and information about insurance coverage you might carry."

It says Ralphs may pry into "financial and payment information like your bank account, credit and debit card numbers, and your credit history."

Wait, it gets even better.

Ralphs says it's gathering "behavioral information" such as "your purchase and transaction histories" and "geolocation data," which could mean the specific Ralphs aisles you browse or could mean the places you go when not shopping for groceries, thanks to the tracking capability of your smartphone.

Ralphs also reserves the right to go after "information about what you do online" and says it will make "inferences" about your interests "based on analysis of other information we have collected."

Other information? This can include files from "consumer research firms" -- read: professional data brokers -- and "public databases," such as property records and bankruptcy filings.

The reaction from John Votava, a Ralphs spokesman:

"I can understand why it raises eyebrows," he said. We may need to change the wording on the form."

That's the company's solution. Don't spy on people less, just change the wording so they don't realize it.

More consumer protection laws will be required.

Posted on January 29, 2020 at 6:20 AM

Google Receives Geofence Warrants

Sometimes it's hard to tell the corporate surveillance operations from the government ones:

Google reportedly has a database called Sensorvault in which it stores location data for millions of devices going back almost a decade.

The article is about geofence warrants, where the police go to companies like Google and ask for information about every device in a particular geographic area at a particular time. In 2013, we learned from Edward Snowden that the NSA does this worldwide. Its program is called CO-TRAVELLER. The NSA claims it stopped doing that in 2014 -- probably it just stopped doing it in the US -- but why should it bother when the government can just get the data from Google?
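Stripped of the legal wrapper, a geofence warrant is a simple query: every device seen inside a bounding box during a time window. Here's a toy sketch in Python; the location log and device IDs are invented.

```python
# Sketch of the query behind a geofence warrant. The sightings log is
# invented; a real system like Sensorvault holds billions of such rows.

sightings = [
    # (device_id, latitude, longitude, unix_time)
    ("device_a", 38.8977, -77.0365, 1_580_000_000),
    ("device_b", 38.8979, -77.0360, 1_580_000_100),
    ("device_c", 40.7128, -74.0060, 1_580_000_050),  # outside the box
    ("device_a", 38.8977, -77.0365, 1_581_000_000),  # outside the window
]

def geofence(log, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return the set of devices seen in the box during [t_start, t_end]."""
    return {
        dev
        for dev, lat, lon, t in log
        if lat_min <= lat <= lat_max
        and lon_min <= lon <= lon_max
        and t_start <= t <= t_end
    }

hits = geofence(sightings, 38.89, 38.90, -77.04, -77.03,
                1_579_999_000, 1_580_001_000)
print(sorted(hits))  # ['device_a', 'device_b']
```

Everything hard about a geofence warrant is in collecting and retaining the log, not in running the query.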

Both the New York Times and EFF have written about Sensorvault.

Posted on January 28, 2020 at 6:53 AM

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program ahead of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let's take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that's just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to "follow" us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are -- using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge -- and almost entirely unregulated -- data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It's not just that they know who we are, it's that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It's also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies -- and governments -- to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn't matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn't make the technologies that gather them any less effective. And most of the time, it doesn't matter if identification isn't tied to a real name. What's important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It's the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn't tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
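How fragile that anonymity is can be shown with a toy sketch in Python: a pseudonymous cookie accumulates a browsing trail, and a single credit-card checkout links the whole trail to a real name. The log entries and names are invented.

```python
# Sketch of de-anonymization by correlation. A pseudonymous tracking ID
# collects history; one identified event retroactively names all of it.
# All data here is invented for illustration.

browsing_log = [
    ("cookie_123", "visited /medical/symptoms"),
    ("cookie_123", "visited /jobs/search"),
    ("cookie_456", "visited /news"),
]

# One checkout event ties a cookie to a billing name.
purchases = [("cookie_123", "Jane Doe")]

identity = dict(purchases)  # cookie -> real name

# Re-label the entire trail wherever an identity is known.
deanonymized = [
    (identity.get(cookie, cookie), action)
    for cookie, action in browsing_log
]
print(deanonymized)
```

Note that the history collected *before* the purchase is named just as completely as anything collected after it; correlation works backwards in time.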

Regulating this system means addressing all three steps of the process. A ban on facial recognition won't make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there's only one law -- passed in Vermont in 2018 -- that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations -- and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn't thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement's usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It's something that's easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement's use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology's marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It's why the issue has momentum, and why we've gotten the small wins we've had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There's more information from EFF and EPIC.

Posted on January 27, 2020 at 12:21 PM

Smartphone Election in Washington State

This year:

King County voters will be able to use their name and birthdate to log in to a Web portal through the Internet browser on their phones, says Bryan Finney, the CEO of Democracy Live, the Seattle-based voting company providing the technology.

Once voters have completed their ballots, they must verify their submissions and then submit a signature on the touch screen of their device.

Finney says election officials in Washington are adept at signature verification because the state votes entirely by mail. That will be the way people are caught if they log in to the system under false pretenses and try to vote as someone else.

The King County elections office plans to print out the ballots submitted electronically by voters whose signatures match and count the papers alongside the votes submitted through traditional routes.

While advocates say this creates an auditable paper trail, many security experts say that because the ballots cross the Internet before they are printed, any subsequent audits on them would be moot. If a cyberattack occurred, an audit could essentially require double-checking ballots that may already have been altered, says Buell.

Of course it's not an auditable paper trail. There's a reason why security experts use the phrase "voter-verifiable paper ballots." A centralized printout of a received Internet message is not voter verifiable.

Another news article.

Posted on January 27, 2020 at 6:03 AM

Friday Squid Blogging: More on the Giant Squid's DNA

Following on from last week's post, here's more information on sequencing the DNA of the giant squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Posted on January 24, 2020 at 4:18 PM
