Blog: January 2020 Archives

NSA Security Awareness Posters

From a FOIA request, over a hundred old NSA security awareness posters. Here are the BBC’s favorites. Here are Motherboard’s favorites.

I have a related personal story. Back in 1993, during the first Crypto Wars, I and a handful of other academic cryptographers visited the NSA for some meeting or another. These sorts of security awareness posters were everywhere, but there was one I especially liked—and I asked for a copy. I have no idea who, but someone at the NSA mailed it to me. It’s currently framed and on my wall.

I’ll bet that the NSA didn’t get permission from Jay Ward Productions.

Tell me your favorite in the comments.

Posted on January 31, 2020 at 1:36 PM • 34 Comments

US Department of Interior Grounding All Drones

The Department of Interior is grounding all non-emergency drones due to security concerns:

The order comes amid a spate of warnings and bans at multiple government agencies, including the Department of Defense, about possible vulnerabilities in Chinese-made drone systems that could be allowing Beijing to conduct espionage. The Army banned the use of Chinese-made DJI drones three years ago following warnings from the Navy about “highly vulnerable” drone systems.

One memo drafted by the Navy & Marine Corps Small Tactical Unmanned Aircraft Systems Program Manager has warned that “images, video and flight records could be uploaded to unsecured servers in other countries via live streaming.” The Navy has also warned that adversaries may view video and metadata from drone systems even though the air vehicle is encrypted. The Department of Homeland Security previously warned the private sector that their data may be pilfered off if they use commercial drone systems made in China.

I’m actually not that worried about this risk. Data moving across the Internet is obvious—it’s too easy for a country that tries this to get caught. I am much more worried about remote kill switches in the equipment.

Posted on January 31, 2020 at 6:46 AM • 20 Comments

Collating Hacked Data Sets

Two Harvard undergraduates completed a project where they went out on the dark web and found a bunch of stolen datasets. Then they correlated all the information, and combined it with additional, publicly available, information. No surprise: the result was much more detailed and personal.

“What we were able to do is alarming because we can now find vulnerabilities in people’s online presence very quickly,” Metropolitansky said. “For instance, if I can aggregate all the leaked credentials associated with you in one place, then I can see the passwords and usernames that you use over and over again.”

Of the 96,000 passwords contained in the dataset the students used, only 26,000 were unique.
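As a toy illustration of the aggregation step (the records and field names below are invented; real breach dumps are far messier), grouping leaked records by account immediately surfaces the reuse the students describe:

    from collections import defaultdict

    # Invented sample records standing in for entries from two breach dumps.
    leaks = [
        {"email": "alice@example.com", "password": "hunter2",  "source": "breach_a"},
        {"email": "alice@example.com", "password": "hunter2",  "source": "breach_b"},
        {"email": "bob@example.com",   "password": "pa55w0rd", "source": "breach_a"},
    ]

    # Group every leaked password by the account it belongs to.
    passwords_by_user = defaultdict(set)
    for record in leaks:
        passwords_by_user[record["email"]].add(record["password"])

    # An account with fewer unique passwords than leaked records reused one.
    for email, unique in passwords_by_user.items():
        total = sum(r["email"] == email for r in leaks)
        if len(unique) < total:
            print(email, "reuses a password across breaches")

The 96,000-versus-26,000 figure above is this same uniqueness count run at scale.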

“We also showed that a cyber criminal doesn’t have to have a specific victim in mind. They can now search for victims who meet a certain set of criteria,” Metropolitansky said.

For example, in less than 10 seconds she produced a dataset with more than 1,000 people who have high net worth, are married, have children, and also have a username or password on a cheating website. Another query pulled up a list of senior-level politicians, revealing the credit scores, phone numbers, and addresses of three U.S. senators, three U.S. representatives, the mayor of Washington, D.C., and a Cabinet member.

“Hopefully, this serves as a wake-up call that leaks are much more dangerous than we think they are,” Metropolitansky said. “We’re two college students. If someone really wanted to do some damage, I’m sure they could use these same techniques to do something horrible.”

That’s about right.

And you can be sure that the world’s major intelligence organizations have already done all of this.

Posted on January 30, 2020 at 8:39 AM • 24 Comments

Customer Tracking at Ralphs Grocery Store

To comply with California’s new data privacy law, companies that collect information on consumers and users are forced to be more transparent about it. Sometimes the results are creepy. Here’s an article about Ralphs, a California supermarket chain owned by Kroger:

…the form proceeds to state that, as part of signing up for a rewards card, Ralphs “may collect” information such as “your level of education, type of employment, information about your health and information about insurance coverage you might carry.”

It says Ralphs may pry into “financial and payment information like your bank account, credit and debit card numbers, and your credit history.”

Wait, it gets even better.

Ralphs says it’s gathering “behavioral information” such as “your purchase and transaction histories” and “geolocation data,” which could mean the specific Ralphs aisles you browse or could mean the places you go when not shopping for groceries, thanks to the tracking capability of your smartphone.

Ralphs also reserves the right to go after “information about what you do online” and says it will make “inferences” about your interests “based on analysis of other information we have collected.”

Other information? This can include files from “consumer research firms”—read: professional data brokers—and “public databases,” such as property records and bankruptcy filings.

The reaction from John Votava, a Ralphs spokesman:

“I can understand why it raises eyebrows,” he said. “We may need to change the wording on the form.”

That’s the company’s solution. Don’t spy on people less, just change the wording so they don’t realize it.

More consumer protection laws will be required.

Posted on January 29, 2020 at 6:20 AM • 32 Comments

Google Receives Geofence Warrants

Sometimes it’s hard to tell the corporate surveillance operations from the government ones:

Google reportedly has a database called Sensorvault in which it stores location data for millions of devices going back almost a decade.

The article is about geofence warrants, where the police go to companies like Google and ask for information about every device in a particular geographic area at a particular time. In 2013, we learned from Edward Snowden that the NSA does this worldwide. Its program is called CO-TRAVELLER. The NSA claims it stopped doing that in 2014—probably just stopped doing it in the US—but why should it bother when the government can just get the data from Google.

Both the New York Times and EFF have written about Sensorvault.

Posted on January 28, 2020 at 6:53 AM • 17 Comments

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are—using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge—and almost entirely unregulated—data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies—and governments—to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
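A toy sketch of that fragility (all events invented): one card purchase retroactively attaches a name to everything recorded under the same tracking ID.

    # Invented click-stream events, keyed by an "anonymous" cookie ID.
    events = [
        {"cookie": "c-81f3", "event": "browsed running-shoe listings"},
        {"cookie": "c-81f3", "event": "read a diabetes forum"},
        {"cookie": "c-81f3", "event": "checkout", "card_name": "A. Jones"},
    ]

    # One purchase carries a real name; join on the cookie, and the whole
    # history recorded under that ID is now attributed to that person.
    name = next(e["card_name"] for e in events if "card_name" in e)
    profile = [(name, e["event"]) for e in events]
    print(profile)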

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law—passed in Vermont in 2018—that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM • 40 Comments

Smartphone Election in Washington State

This year:

King County voters will be able to use their name and birthdate to log in to a Web portal through the Internet browser on their phones, says Bryan Finney, the CEO of Democracy Live, the Seattle-based voting company providing the technology.

Once voters have completed their ballots, they must verify their submissions and then submit a signature on the touch screen of their device.

Finney says election officials in Washington are adept at signature verification because the state votes entirely by mail. That will be the way people are caught if they log in to the system under false pretenses and try to vote as someone else.

The King County elections office plans to print out the ballots submitted electronically by voters whose signatures match and count the papers alongside the votes submitted through traditional routes.

While advocates say this creates an auditable paper trail, many security experts say that because the ballots cross the Internet before they are printed, any subsequent audits on them would be moot. If a cyberattack occurred, an audit could essentially require double-checking ballots that may already have been altered, says Buell.

Of course it’s not an auditable paper trail. There’s a reason why security experts use the phrase “voter-verifiable paper ballots.” A centralized printout of a received Internet message is not voter verifiable.

Another news article.

Posted on January 27, 2020 at 6:03 AM • 22 Comments

Technical Report of the Bezos Phone Hack

Motherboard obtained and published the technical report on the hack of Jeff Bezos’s phone, which is being attributed to Saudi Arabia, specifically to Crown Prince Mohammed bin Salman.

…investigators set up a secure lab to examine the phone and its artifacts and spent two days poring over the device but were unable to find any malware on it. Instead, they only found a suspicious video file sent to Bezos on May 1, 2018 that “appears to be an Arabic language promotional film about telecommunications.”

That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted, this delayed or further prevented “study of the code delivered along with the video.”

Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.

“The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent,” it notes. “Forensic artifacts show that in the six (6) months prior to receiving the WhatsApp video, Bezos’ phone had an average of 430KB of egress per day, fairly typical of an iPhone. Within hours of the WhatsApp video, egress jumped to 126MB. The phone maintained an unusually high average of 101MB of egress data per day for months thereafter, including many massive and highly atypical spikes of egress data.”
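The report’s percentage is easy to check from the figures quoted (assuming decimal units, since the report doesn’t say):

    baseline_kb = 430            # average daily egress before the video
    spike_kb = 126 * 1000        # 126MB expressed in KB, decimal units assumed
    increase = (spike_kb - baseline_kb) / baseline_kb * 100
    print(f"{increase:,.0f}%")   # ~29,200%, matching the quoted ~29,000%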

The Motherboard article also quotes forensic experts on the report:

A mobile forensic expert told Motherboard that the investigation as depicted in the report is significantly incomplete and would only have provided the investigators with about 50 percent of what they needed, especially if this is a nation-state attack. She says the iTunes backup and other extractions they did would get them only messages, photo files, contacts and other files that the user is interested in saving from their applications, but not the core files.

“They would need to use a tool like Graykey or Cellebrite Premium or do a jailbreak to get a look at the full file system. That’s where that state-sponsored malware is going to be found. Good state-sponsored malware should never show up in a backup,” said Sarah Edwards, an author and teacher of mobile forensics for the SANS Institute.

“The full file system is getting into the device and getting every single file on there­—the whole operating system, the application data, the databases that will not be backed up. So really the in-depth analysis should be done on that full file system, for this level of investigation anyway. I would have insisted on that right from the start.”

The investigators do note on the last page of their report that they need to jailbreak Bezos’s phone to examine the root file system. Edwards said this would indeed get them everything they would need to search for persistent spyware like the kind created and sold by the NSO Group. But the report doesn’t indicate whether that was ever done.

Posted on January 24, 2020 at 8:34 AM • 48 Comments

Apple Abandoned Plans for Encrypted iCloud Backup after FBI Complained

This is new from Reuters:

More than two years ago, Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud, according to one current and three former FBI officials and one current and one former Apple employee.

Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order.

In private talks with Apple soon after, representatives of the FBI’s cyber crime agents and its operational technology division objected to the plan, arguing it would deny them the most effective means for gaining evidence against iPhone-using suspects, the government sources said.

When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.

EDITED TO ADD (2/13): Android has encrypted backups.

Posted on January 23, 2020 at 6:10 AM • 32 Comments

Half a Million IoT Device Passwords Published

It’s a list of easy-to-guess passwords for IoT devices that were on the Internet as recently as last October and November. Useful for anyone putting together a bot network:

A hacker has published this week a massive list of Telnet credentials for more than 515,000 servers, home routers, and IoT (Internet of Things) “smart” devices.

The list, which was published on a popular hacking forum, includes each device’s IP address, along with a username and password for the Telnet service, a remote access protocol that can be used to control devices over the internet.

According to experts to whom ZDNet spoke this week, and a statement from the leaker himself, the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess, password combinations.
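If you run gear like this, the defensive takeaway is easy to act on. A minimal self-check (the host address is a placeholder for a device you own):

    import socket

    def telnet_open(host: str, port: int = 23, timeout: float = 2.0) -> bool:
        """Return True if the host accepts TCP connections on the Telnet port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(telnet_open("192.168.1.1"))  # True means Telnet is reachable; disable it

And change any factory-default password while you’re at it.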

Posted on January 22, 2020 at 6:09 AM • 16 Comments

Brazil Charges Glenn Greenwald with Cybercrimes

Glenn Greenwald has been charged with cybercrimes in Brazil, stemming from publishing information and documents that were embarrassing to the government. The charges are that he actively helped the people who actually did the hacking:

Citing intercepted messages between Mr. Greenwald and the hackers, prosecutors say the journalist played a “clear role in facilitating the commission of a crime.”

For instance, prosecutors contend that Mr. Greenwald encouraged the hackers to delete archives that had already been shared with The Intercept Brasil, in order to cover their tracks.

Prosecutors also say that Mr. Greenwald was communicating with the hackers while they were actively monitoring private chats on Telegram, a messaging app. The complaint charged six other individuals, including four who were detained last year in connection with the cellphone hacking.

This isn’t new, or unique to Brazil. Last year, Julian Assange was charged by the US with doing essentially the same thing with Chelsea Manning:

The indictment alleges that in March 2010, Assange engaged in a conspiracy with Chelsea Manning, a former intelligence analyst in the U.S. Army, to assist Manning in cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. Manning, who had access to the computers in connection with her duties as an intelligence analyst, was using the computers to download classified records to transmit to WikiLeaks. Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures.

During the conspiracy, Manning and Assange engaged in real-time discussions regarding Manning’s transmission of classified records to Assange. The discussions also reflect Assange actively encouraging Manning to provide more information. During an exchange, Manning told Assange that “after this upload, that’s all I really have got left.” To which Assange replied, “curious eyes never run dry in my experience.”

Good commentary on the Assange case here.

It’s too early for any commentary on the Greenwald case. Lots of news articles are essentially saying the same thing. I’ll post more news when there is some.

EDITED TO ADD (2/12): Marcy Wheeler compares the Greenwald case with the Assange case.

Posted on January 21, 2020 at 3:23 PM • 63 Comments

SIM Hijacking

SIM hijacking—or SIM swapping—is an attack where a fraudster contacts your cell phone provider and convinces them to switch your account to a phone that they control. Since your smartphone often serves as a security measure or backup verification system, this allows the fraudster to take over other accounts of yours. Sometimes this involves people inside the phone companies.

Phone companies have added security measures since this attack became popular and public, but a new study (news article) shows that the measures aren’t helping:

We examined the authentication procedures used by five pre-paid wireless carriers when a customer attempted to change their SIM card. These procedures are an important line of defense against attackers who seek to hijack victims’ phone numbers by posing as the victim and calling the carrier to request that service be transferred to a SIM card the attacker possesses. We found that all five carriers used insecure authentication challenges that could be easily subverted by attackers. We also found that attackers generally only needed to target the most vulnerable authentication challenges, because the rest could be bypassed.

It’s a classic security vs. usability trade-off. The phone companies want to provide easy customer service for their legitimate customers, and that system is what’s being exploited by the SIM hijackers. Companies could make the fraud harder, but it would necessarily also make it harder for legitimate customers to modify their accounts.

Posted on January 21, 2020 at 6:30 AM • 30 Comments

Clearview AI and Facial Recognition

The New York Times has a long story about Clearview AI, a small company that scrapes identified photos of people from pretty much everywhere, and then uses unstated magical AI technology to identify people in other photos.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system—whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites—goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

[…]

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

Another article.

EDITED TO ADD (1/23): Twitter told the company to stop scraping its photos.

Posted on January 20, 2020 at 8:53 AM • 30 Comments

Friday Squid Blogging: Giant Squid Genome Analyzed

This is fantastic work:

In total, the researchers identified approximately 2.7 billion DNA base pairs, which is around 90 percent the size of the human genome. There’s nothing particularly special about that size, especially considering that the axolotl genome is 10 times larger than the human genome. It’s going to take some time to fully understand and appreciate the intricacies of the giant squid’s genetic profile, but these preliminary results are already helping to explain some of its more remarkable features.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on January 17, 2020 at 4:19 PM • 72 Comments

Critical Windows Vulnerability Discovered by NSA

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.
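Public analyses of the bug boil down to this: Windows matched a certificate against a cached trusted root by its public key without insisting on the standard curve parameters, and elliptic-curve math lets anyone who can choose the generator manufacture a private key for any given public key. Here is a sketch of that algebra on a tiny textbook curve (illustrative only: this is not the Windows code, not a working exploit, and nothing like the real P-256 parameters):

    # Toy curve y^2 = x^3 + 2x + 2 over GF(17); generator G = (5, 1) has order 19.
    P, A, N = 17, 2, 19

    def inv(x, m):
        return pow(x, -1, m)  # modular inverse (Python 3.8+)

    def add(p1, p2):  # elliptic-curve point addition; None is the point at infinity
        if p1 is None: return p2
        if p2 is None: return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None
        lam = ((3 * x1 * x1 + A) * inv(2 * y1, P) if p1 == p2
               else (y2 - y1) * inv(x2 - x1, P)) % P
        x3 = (lam * lam - x1 - x2) % P
        return (x3, (lam * (x1 - x3) - y1) % P)

    def mul(k, pt):  # double-and-add scalar multiplication
        acc = None
        while k:
            if k & 1: acc = add(acc, pt)
            pt, k = add(pt, pt), k >> 1
        return acc

    G = (5, 1)     # the standard generator everyone is supposed to use
    d = 7          # the CA's secret key; the attacker never learns this
    Q = mul(d, G)  # the CA's public key, as shipped in the trusted root

    # The attacker picks a private key d2 and then *chooses the generator* so
    # that the key pair yields the CA's public key: G2 = (1/d2 mod N) * Q.
    d2 = 3
    G2 = mul(inv(d2, N), Q)
    assert mul(d2, G2) == Q   # attacker's key pair produces the trusted key

    # The flaw: matching on the public key alone accepts the attacker's cert.
    # The fix: also pin the curve parameters, under which G2 != G is rejected.
    assert mul(d2, G2) == Q and G2 != G

The actual patch takes the second approach, validating the full parameter set of certificates that carry explicit elliptic-curve parameters.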

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and—to my shock—took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash—my personal favorite theory—she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And—seriously—patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability—why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

Posted on January 15, 2020 at 6:38 AM • 79 Comments

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Indiana University Bloomington on January 30, 2020.
  • I’ll be at RSA Conference 2020 in San Francisco. On Wednesday, February 26, at 2:50 PM, I’ll be part of a panel on “How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei.” On Thursday, February 27, at 9:20 AM, I’m giving a keynote on “Hacking Society.”
  • I’m speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.

Posted on January 14, 2020 at 1:00 PM • 4 Comments

5G Security

The security risks inherent in Chinese-made 5G networking equipment are easy to understand. Because the companies that make the equipment are subservient to the Chinese government, they could be forced to include backdoors in the hardware or software to give Beijing remote access. Eavesdropping is also a risk, although efforts to listen in would almost certainly be detectable. More insidious is the possibility that Beijing could use its access to degrade or disrupt communications services in the event of a larger geopolitical conflict. Since the internet, especially the “internet of things,” is expected to rely heavily on 5G infrastructure, potential Chinese infiltration is a serious national security threat.

But keeping untrusted companies like Huawei out of Western infrastructure isn’t enough to secure 5G. Neither is banning Chinese microchips, software, or programmers. Security vulnerabilities in the standards—the protocols and software for 5G—ensure that vulnerabilities will remain, regardless of who provides the hardware and software. These insecurities are a result of market forces that prioritize costs over security and of governments, including the United States, that want to preserve the option of surveillance in 5G networks. If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security.

To be sure, there are significant security improvements in 5G over 4G—in encryption, authentication, integrity protection, privacy, and network availability. But the enhancements aren’t enough.

The 5G security problems are threefold. First, the standards are simply too complex to implement securely. This is true for all software, but the 5G protocols offer particular difficulties. Because of how it is designed, the system blurs the wireless portion of the network connecting phones with base stations and the core portion that routes data around the world. Additionally, much of the network is virtualized, meaning that it will rely on software running on dynamically configurable hardware. This design dramatically increases the points vulnerable to attack, as does the expected massive increase in both things connected to the network and the data flying about it.

Second, there’s so much backward compatibility built into the 5G network that older vulnerabilities remain. 5G is an evolution of the decade-old 4G network, and most networks will mix generations. Without the ability to do a clean break from 4G to 5G, it will simply be impossible to improve security in some areas. Attackers may be able to force 5G systems to use more vulnerable 4G protocols, for example, and 5G networks will inherit many existing problems.
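A generic sketch of why that matters (a toy negotiation, not the actual 3GPP handshake): if the capability advertisement isn’t integrity-protected, anyone on the path can strip the stronger option before the network ever sees it.

    NETWORK_SUPPORTS = {"4G", "5G"}

    def negotiate(client_offer: set) -> str:
        # Pick the best generation both sides support. The offer itself is
        # the unprotected input this sketch is about.
        common = client_offer & NETWORK_SUPPORTS
        return max(common)  # string comparison: "5G" > "4G"

    def mitm_strip(offer: set) -> set:
        # An on-path attacker silently removes the stronger protocol.
        return offer - {"5G"}

    assert negotiate({"4G", "5G"}) == "5G"              # honest path
    assert negotiate(mitm_strip({"4G", "5G"})) == "4G"  # downgraded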

Third, the 5G standards committees missed many opportunities to improve security. Many of the new security features in 5G are optional, and network operators can choose not to implement them. The same happened with 4G; operators even ignored security features defined as mandatory in the standard because implementing them was expensive. But even worse, for 5G, development, performance, cost, and time to market were all prioritized over security, which was treated as an afterthought.

Already problems are being discovered. In November 2019, researchers published vulnerabilities that allow 5G users to be tracked in real time, be sent fake emergency alerts, or be disconnected from the 5G network altogether. And this wasn’t the first reporting to find issues in 5G protocols and implementations.

Chinese, Iranians, North Koreans, and Russians have been breaking into U.S. networks for years without having any control over the hardware, the software, or the companies that produce the devices. (And the U.S. National Security Agency, or NSA, has been breaking into foreign networks for years without having to coerce companies into deliberately adding backdoors.) Nothing in 5G prevents these activities from continuing, even increasing, in the future.

Solutions are few and far between and not very satisfying. It’s really too late to secure 5G networks. Susan Gordon, then-U.S. principal deputy director of national intelligence, had it right when she said last March: “You have to presume a dirty network.” Indeed, the United States needs to accept 5G’s insecurities and build secure systems on top of it. In some cases, doing so isn’t hard: Adding encryption to an iPhone or a messaging system like WhatsApp provides security from eavesdropping, and distributed protocols provide security from disruption—regardless of how insecure the network they operate on is. In other cases, it’s impossible. If your smartphone is vulnerable to a downloaded exploit, it doesn’t matter how secure the networking protocols are. Often, the task will be somewhere in between these two extremes.

5G security is just one of the many areas in which near-term corporate profits prevailed against broader social good. In a capitalist free market economy, the only solution is to regulate companies, and the United States has not shown any serious appetite for that.

What’s more, U.S. intelligence agencies like the NSA rely on inadvertent insecurities for their worldwide data collection efforts, and law enforcement agencies like the FBI have even tried to introduce new ones to make their own data collection efforts easier. Again, near-term self-interest has so far triumphed over society’s long-term best interests.

In turn, rather than mustering a major effort to fix 5G, what’s most likely to happen is that the United States will muddle along with the problems the network has, as it has done for decades. Maybe things will be different with 6G, which is starting to be discussed in technical standards committees. The U.S. House of Representatives just passed a bill directing the State Department to participate in the international standards-setting process so that it is not just run by telecommunications operators and more interested countries, but there is no chance of that measure becoming law.

The geopolitics of 5G are complicated, involving a lot more than security. China is subsidizing the purchase of its companies’ networking equipment in countries around the world. The technology will quickly become critical national infrastructure, and security problems will become life-threatening. Both criminal attacks and government cyber-operations will become more common and more damaging. Eventually, Washington will have to do something. That something will be difficult and expensive—let’s hope it won’t also be too late.

This essay previously appeared in Foreign Policy.

EDITED TO ADD (1/16): Slashdot thread.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 14, 2020 at 7:42 AM • 30 Comments

Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism”—websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them—maybe half—were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.
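What “cursory scrutiny” can look like in practice (a toy sketch; the skeleton trick and sample comments are invented, not taken from any published analysis of the FCC data): collapse each comment to a structural skeleton and count collisions, so synonym-swapped variants of one template land in the same bucket.

    from collections import Counter
    import re

    def skeleton(text, keep_every=3):
        # Keep every third token; template variants that only swap a few
        # synonyms usually collapse to the same key. (Whether a given
        # template is caught depends on where the swapped words fall.)
        tokens = re.findall(r"[a-z']+", text.lower())
        return " ".join(tokens[::keep_every])

    comments = [
        "I strongly urge the FCC to reverse its misguided plan on net neutrality.",
        "I strongly ask the FCC to reverse its shortsighted plan on net neutrality.",
        "Net neutrality keeps the internet open; please keep Title II in place.",
    ]
    print(Counter(skeleton(c) for c in comments).most_common(1))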

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots—a proposed California law would require bots to identify themselves—but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM • 43 Comments

Police Surveillance Tools from Special Services Group

Special Services Group, a company that sells surveillance tools to the FBI, DEA, ICE, and other US government agencies, has had its secret sales brochure published. Motherboard received the brochure as part of a FOIA request to the Irvine Police Department in California.

“The Tombstone Cam is our newest video concealment offering the ability to conduct remote surveillance operations from cemeteries,” one section of the Black Book reads. The device can also capture audio, its battery can last for two days, and “the Tombstone Cam is fully portable and can be easily moved from location to location as necessary,” the brochure adds. Another product is a video and audio capturing device that looks like an alarm clock, suitable for “hotel room stings,” and other cameras are designed to appear like small tree trunks and rocks, the brochure reads.

The “Shop-Vac Covert DVR Recording System” is essentially a camera and 1TB hard drive hidden inside a vacuum cleaner. “An AC power connector is available for long-term deployments, and DC power options can be connected for mobile deployments also,” the brochure reads. The description doesn’t say whether the vacuum cleaner itself works.

[…]

One of the company’s “Rapid Vehicle Deployment Kits” includes a camera hidden inside a baby car seat. “The system is fully portable, so you are not restricted to the same drop car for each mission,” the description adds.

[…]

The so-called “K-MIC In-mouth Microphone & Speaker Set” is a tiny Bluetooth device that sits on a user’s teeth and allows them to “communicate hands-free in crowded, noisy surroundings” with “near-zero visual indications,” the Black Book adds.

Other products include more traditional surveillance cameras and lenses as well as tools for surreptitiously gaining entry to buildings. The “Phantom RFID Exploitation Toolkit” lets a user clone an access card or fob, and the so-called “Shadow” product can “covertly provide the user with PIN code to an alarm panel,” the brochure reads.

The Motherboard article also reprints the scary emails Motherboard received from Special Services Group, when asked for comment. Of course, Motherboard published the information anyway.

Posted on January 10, 2020 at 8:41 AM • 34 Comments

New SHA-1 Attack

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).
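If you want to hold a real collision in your hands, the two PDFs published by the 2017 SHAttered project (an identical-prefix collision; the new result makes the stronger chosen-prefix variant affordable) remain the easiest demonstration. A quick check, assuming network access and that the shattered.io URLs are still live:

    import hashlib
    import urllib.request

    def digests(url):
        data = urllib.request.urlopen(url).read()
        return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

    sha1_a, sha256_a = digests("https://shattered.io/static/shattered-1.pdf")
    sha1_b, sha256_b = digests("https://shattered.io/static/shattered-2.pdf")
    assert sha1_a == sha1_b      # identical SHA-1 digests: a genuine collision
    assert sha256_a != sha256_b  # the two files really are different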

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Posted on January 8, 2020 at 9:38 AM • 26 Comments

USB Cable Kill Switch for Laptops

BusKill is designed to wipe your laptop (Linux only) if it is snatched from you in a public place:

The idea is to connect the BusKill cable to your Linux laptop on one end, and to your belt, on the other end. When someone yanks your laptop from your lap or table, the USB cable disconnects from the laptop and triggers a udev script [1, 2, 3] that executes a series of preset operations.

These can be something as simple as activating your screensaver or shutting down your device (forcing the thief to bypass your laptop’s authentication mechanism before accessing any data), but the script can also be configured to wipe the device or delete certain folders (to prevent thieves from retrieving any sensitive data or accessing secure business backends).
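A minimal sketch of the same idea (assumptions: a Linux desktop with systemd-logind and the pyudev package; unlike the real BusKill setup, this reacts to any USB disconnect rather than one specific device, and it only locks rather than wipes):

    import subprocess
    import pyudev

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")

    # Block until udev reports a USB event; lock all sessions on removal.
    for device in iter(monitor.poll, None):
        if device.action == "remove":
            subprocess.run(["loginctl", "lock-sessions"], check=False)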

Clever idea, but I—and my guess is most people—would be much more likely to stand up from the table, forget that the cable was attached, and yank it out. My problem with pretty much all systems like this is the likelihood of false alarms.

Slashdot article.

EDITED TO ADD (1/14): There are Bluetooth devices that will automatically encrypt a laptop when the device isn’t in proximity. That’s a much better interface than a cable.

Posted on January 7, 2020 at 6:03 AM • 46 Comments

Friday Squid Blogging: Giant Squid Video from the Gulf of Mexico

Fantastic video:

Scientists had used a specialized camera system developed by Widder called the Medusa, which uses red light undetectable to deep sea creatures and has allowed scientists to discover species and observe elusive ones.

The probe was outfitted with a fake jellyfish that mimicked the invertebrates’ bioluminescent defense mechanism, which can signal to larger predators that a meal may be nearby, to lure the squid and other animals to the camera.

With days to go until the end of the two-week expedition, 100 miles (160 kilometers) southeast of New Orleans, a giant squid took the bait.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on January 3, 2020 at 4:25 PM • 120 Comments

Chrome Extension Stealing Cryptocurrency Keys and Passwords

A malicious Chrome extension surreptitiously steals Ethereum keys and passwords:

According to Denley, the extension is dangerous to users in two ways. First, any funds (ETH coins and ERC20-based tokens) managed directly inside the extension are at risk.

Denley says that the extension sends the private keys of all wallets created or managed through its interface to a third-party website located at erc20wallet[.]tk.

Second, the extension also actively injects malicious JavaScript code when users navigate to five well-known and popular cryptocurrency management platforms. This code steals login credentials and private keys, data that is then sent to the same erc20wallet[.]tk third-party website.

Another example of how blockchain requires many single points of trust in order to be secure.

Posted on January 3, 2020 at 6:09 AM • 15 Comments
