August 15, 2021
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- Colorado Passes Consumer Privacy Law
- REvil is Off-Line
- Candiru: Another Cyberweapons Arms Manufacturer
- NSO Group Hacked
- Nasty Windows Printer Driver Vulnerability
- Commercial Location Data Used to Out Priest
- Disrupting Ransomware by Disrupting Bitcoin
- Hiding Malware in ML Models
- De-anonymization Story
- AirDropped Gun Photo Causes Terrorist Scare
- Storing Encrypted Photos in Google’s Cloud
- I Am Parting With My Crypto Library
- The European Space Agency Launches Hackable Satellite
- Paragon: Yet Another Cyberweapons Arms Manufacturer
- Zoom Lied about End-to-End Encryption
- Using “Master Faces” to Bypass Face-Recognition Authenticating Systems
- Defeating Microsoft’s Trusted Platform Module
- Apple Adds a Backdoor to iMessage and iCloud Storage
- Cobalt Strike Vulnerability Affects Botnet Servers
- Using AI to Scale Spear Phishing
- Upcoming Speaking Engagements
Colorado has passed a consumer privacy law, joining California and Virginia. Here’s a good comparison of the three states’ laws.
Just days after President Biden demanded that President Vladimir V. Putin of Russia shut down ransomware groups attacking American targets, the most aggressive of the groups suddenly went off-line early Tuesday.
Gone was the publicly available “happy blog” the group maintained, listing some of its victims and the group’s earnings from its digital extortion schemes. Internet security groups said the custom-made sites (think of them as virtual conference rooms) where victims negotiated with REvil over how much ransom they would pay to get their data unlocked also disappeared. So did the infrastructure for making payments.
Okay. So either the US took them down, Russia took them down, or they took themselves down.
[2021.07.19] Citizen Lab has identified yet another Israeli company that sells spyware to governments around the world: Candiru.
From the report:
- Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
- Using Internet scanning, we identified more than 750 websites linked to Candiru’s spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International and the Black Lives Matter movement, as well as media companies and other civil-society-themed entities.
- We identified a politically active victim in Western Europe and recovered a copy of Candiru’s Windows spyware.
- Working with Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
- As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
- We provide a brief technical overview of the Candiru spyware’s persistence mechanism and some details about the spyware’s functionality.
- Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.
We’re not going to be able to secure the Internet until we deal with the companies that engage in the international cyber-arms trade.
[2021.07.20] NSO Group, the Israeli cyberweapons arms manufacturer behind the Pegasus spyware—used by authoritarian regimes around the world to spy on dissidents, journalists, human rights workers, and others—was hacked. Or, at least, an enormous trove of documents was leaked to journalists.
Most interesting is a list of over 50,000 phone numbers that were being spied on by NSO Group’s software. Why does NSO Group have that list? The obvious answer is that NSO Group provides spyware-as-a-service, and centralizes operations somehow. Nicholas Weaver postulates that “part of the reason that NSO keeps a master list of targeting…is they hand it off to Israeli intelligence.”
This isn’t the first time NSO Group has been in the news. Citizen Lab has been researching and reporting on its actions since 2016. It’s been linked to the Saudi murder of Jamal Khashoggi. It is extensively used by Mexico to spy on—among others—supporters of that country’s soda tax.
NSO Group seems to be a completely deplorable company, so it’s hard to have any sympathy for it. As I previously wrote about another hack of another cyberweapons arms manufacturer: “It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads.” I’d like to say that I don’t know how the company will survive this, but—sadly—I think it will.
Finally: here’s a tool that you can use to test if your iPhone or Android is infected with Pegasus. (Note: it’s not easy to use.)
Researchers have released technical details on a high-severity privilege-escalation flaw in HP printer drivers (also used by Samsung and Xerox), which impacts hundreds of millions of Windows machines.
If exploited, cyberattackers could bypass security products; install programs; view, change, encrypt or delete data; or create new accounts with more extensive user rights.
The bug (CVE-2021-3438) has lurked in systems for 16 years, researchers at SentinelOne said, but was only uncovered this year. It carries an 8.8 out of 10 rating on the CVSS scale, making it high-severity.
Look for your printer here, and download the patch if there is one.
EDITED TO ADD (8/13): Here’s a better list of affected HP and Samsung printers.
The news starkly demonstrates not only the inherent power of location data, but how the chance to wield that power has trickled down from corporations and intelligence agencies to essentially any sort of disgruntled, unscrupulous, or dangerous individual. A growing market of data brokers that collect and sell data from countless apps has made it so that anyone with a bit of cash and effort can figure out which phone in a so-called anonymized dataset belongs to a target, and abuse that information.
There is a whole industry devoted to re-identifying anonymized data. The Snowden documents showed that the NSA could do this. Now it’s available to everyone.
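To see how little “anonymization” buys, here is a toy sketch of the basic re-identification technique: intersect each device’s pings with a handful of places only the target is known to frequent. The device IDs, place names, and threshold below are all invented for illustration; real datasets carry coordinates and timestamps matched against known addresses within some radius.

```python
# Toy re-identification sketch: all data here is hypothetical.
# A device that shows up at enough of the target's known locations
# is almost certainly the target's phone.

def reidentify(pings, known_places, min_matches=3):
    """Return device IDs whose pings cover enough of the target's known places."""
    seen = {}  # device_id -> set of known places it appeared at
    for device_id, place in pings:
        if place in known_places:
            seen.setdefault(device_id, set()).add(place)
    return [d for d, places in seen.items() if len(places) >= min_matches]

pings = [
    ("device-17", "office"), ("device-17", "residence"),
    ("device-17", "lake house"), ("device-42", "office"),
    ("device-99", "residence"), ("device-17", "conference"),
]
known = {"office", "residence", "lake house", "conference"}
print(reidentify(pings, known))  # only device-17 appears at 3+ known places
```

The point of the sketch: no names or account data are needed. A handful of known locations uniquely pins down one device identifier in the whole dataset.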
[2021.07.26] Ransomware isn’t new; the idea dates back to 1986 with the “Brain” computer virus. Now, it’s become the criminal business model of the internet for two reasons. The first is the realization that no one values data more than its original owner, and it makes more sense to ransom it back to them—sometimes with the added extortion of threatening to make it public—than it does to sell it to anyone else. The second is a safe way of collecting ransoms: bitcoin.
This is where the suggestion to ban cryptocurrencies as a way to “solve” ransomware comes from. Lee Reiners, executive director of the Global Financial Markets Center at Duke Law, proposed this in a recent Wall Street Journal op-ed. Journalist Jacob Silverman made the same proposal in a New Republic essay. Without this payment channel, they write, the major ransomware epidemic is likely to vanish, since the only payment alternatives are suitcases full of cash or the banking system, both of which have severe limitations for criminal enterprises.
It’s the same problem kidnappers have had for centuries. The riskiest part of the operation is collecting the ransom. That’s when the criminal exposes themselves, by telling the payer where to leave the money. Or gives out their banking details. This is how law enforcement tracks kidnappers down and arrests them. The rise of an anonymous, global, distributed money-transfer system outside of any national control is what makes computer ransomware possible.
This problem is made worse by the nature of the criminals. They operate out of countries that don’t have the resources to prosecute cybercriminals, like Nigeria; or protect cybercriminals that only attack outside their borders, like Russia; or use the proceeds as a revenue stream, like North Korea. So even when a particular group is identified, it is often impossible to prosecute. That leaves only two tools: successfully blocking attacks (another hard problem) and eliminating the payment channels that the criminals need to turn their attacks into profit.
In this light, banning cryptocurrencies like bitcoin is an obvious solution. But while the solution is conceptually simple, it’s also impossible because—despite its overwhelming problems—there are so many legitimate interests using cryptocurrencies, albeit largely for speculation and not for legal payments.
We suggest an easier alternative: merely disrupt the cryptocurrency markets. Making them harder to use will have the effect of making them less useful as a ransomware payment vehicle, and not just because victims will have more difficulty figuring out how to pay. The reason requires understanding how criminals collect their profits.
Paying a ransom starts with a victim turning a large sum of money into bitcoin and then transferring it to a criminal-controlled “account.” Bitcoin is, in itself, useless to the criminal. You can’t actually buy much with bitcoin. It’s more like casino chips, only usable in a single establishment for a single purpose. (Yes, there are companies that “accept” bitcoin, but that is mostly a PR stunt.) A criminal needs to convert the bitcoin into some national currency that he can actually save, spend, invest, or whatever.
This is where it gets interesting. Conceptually, bitcoin combines numbered Swiss bank accounts with public transactions and balances. Anyone can create as many anonymous accounts as they want, but every transaction is posted publicly for the entire world to see. This creates some important challenges for these criminals.
First, the criminal needs to conceal the bitcoin. In the old days, criminals used “mixing services”: third parties that would accept bitcoin into one account and then return it (minus a fee) from an unconnected set of accounts. Modern bitcoin tracing tools make this money-laundering trick ineffective. Instead, the modern criminal does something called “chain swaps.”
In a chain swap, the criminal transfers the bitcoin to a shady offshore cryptocurrency exchange. These exchanges are notoriously weak about enforcing money laundering laws and—for the most part—don’t have access to the banking system. Once on this alternate exchange, the criminal sells his bitcoin and buys some other cryptocurrency like Ethereum, Dogecoin, Tether, Monero, or one of dozens of others. They then transfer it to another shady offshore exchange and transfer it back into bitcoin. Voila—they now have “clean” bitcoin.
Second, the criminal needs to convert that bitcoin into spendable money. They take their newly cleaned bitcoin and transfer it to yet another exchange, one connected to the banking system. Or perhaps they hire someone else to do this step. These exchanges conduct greater oversight of their customers, but the criminal can use a network of bogus accounts, recruit a bunch of users to act as mules, or simply bribe an employee at the exchange to evade whatever laws apply there. The end result of this activity is to turn the bitcoin into dollars, euros, or some other easily usable currency.
Both of these steps—the chain swapping and currency conversion—require a large amount of normal activity to keep from standing out. That is, they will be easy for law enforcement to identify unless they are hiding among lots of regular, noncriminal transactions. If speculators stopped buying and selling cryptocurrencies and the market shrunk drastically, these criminal activities would no longer be easy to conceal: there’s simply too much money involved.
This is why disruption will work. It doesn’t require an outright ban to stop these criminals from using bitcoin—just enough sand in the gears of the cryptocurrency space to reduce its size and scope.
How do we do this?
The first mechanism observes that the criminal’s flows have a unique pattern. The overall cryptocurrency space is “zero sum”: Every dollar made was provided by someone else. And the primary legal use of cryptocurrencies involves speculation: people effectively betting on a currency’s future value. So the background speculators are mostly balanced: One bitcoin in results in one bitcoin out. There are exceptions involving offshore exchanges and speculation among different cryptocurrencies, but they’re marginal, and only involve turning one bitcoin into a little more (if a speculator is lucky) or a little less (if unlucky).
Criminals and their victims act differently. Victims are net buyers, turning millions of dollars into bitcoin and never going the other way. Criminals are net sellers, only turning bitcoin into currency. The only other net sellers are the cryptocurrency miners, and they are easy to identify.
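The net-flow heuristic above is simple enough to sketch. The account names, amounts, and flagging threshold below are hypothetical; a real implementation would run over an exchange’s actual transaction records, with miners whitelisted separately.

```python
# Sketch of the net-flow pattern described above, over hypothetical flows.
# Positive amounts are fiat-to-bitcoin purchases; negative amounts are sales.
from collections import defaultdict

def net_sellers(flows, threshold=-100_000):
    """Flag accounts whose net fiat flow is strongly negative (net sellers)."""
    net = defaultdict(float)
    for account, amount in flows:
        net[account] += amount
    return sorted(a for a, n in net.items() if n <= threshold)

flows = [
    ("speculator-1", 50_000), ("speculator-1", -49_000),   # roughly balanced
    ("victim-co", 4_000_000),                              # net buyer (ransom)
    ("ransom-wallet", -3_900_000),                         # net seller
    ("miner-pool", -500_000),                              # known miner
]
print(net_sellers(flows))  # ['miner-pool', 'ransom-wallet']
```

Speculators mostly cancel themselves out; the accounts that remain after netting are miners (identifiable on-chain) and, as the essay argues, criminals cashing out.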
Any banked exchange that cares about enforcing money laundering laws must consider all significant net sellers of cryptocurrencies as potential criminals and report them to both in-country and US financial authorities. Any exchange that doesn’t should have its access to the banking system cut off.
The US Treasury can ensure these exchanges are cut out of the banking system. By designating a rogue but banked exchange, the Treasury says that it is illegal not only to do business with the exchange but for US banks to do business with the exchange’s bank. As a consequence, the rogue exchange would quickly find its banking options eliminated.
A second mechanism involves the IRS. In 2019, it started demanding information from cryptocurrency exchanges and added a check box to the 1040 form that requires disclosure from those who both buy and sell cryptocurrencies. And while this is intended to target tax evasion, it has the side consequence of disrupting the offshore exchanges criminals rely on to launder their bitcoin. Speculation in cryptocurrency is also far less attractive, since speculators have to pay taxes but most exchanges don’t help out by filing the 1099-B forms that make it easy to calculate the taxes owed.
A third mechanism involves targeting the cryptocurrency Tether. While most cryptocurrencies have values that fluctuate with demand, Tether is a “stablecoin” that is supposedly backed one-to-one with dollars. Of course, it probably isn’t, as its claim to be the seventh largest holder of commercial paper (short-term loans to major businesses) is blatantly untrue. Instead, they appear part of a cycle where new Tether is issued, used to buy cryptocurrencies, and the resulting cryptocurrencies now “back” Tether and drive up the price.
This behavior is clearly that of a “wildcat bank,” an 1800s fraudulent banking style that has long been illegal. Tether also bears a striking similarity to Liberty Reserve, an online currency that the Department of Justice successfully prosecuted for money laundering in 2013. Shutting down Tether would have the side effect of eliminating the value proposition for the exchanges that support chain swapping, since these exchanges need a “stable” value for the speculators to trade against.
There are further possibilities. One involves treating the cryptocurrency miners, those who validate all transactions and add them to the public record, as money transmitters—and subject to the regulations around that business. Another option involves requiring cryptocurrency exchanges to actually deliver the cryptocurrencies into customer-controlled wallets.
Effectively, all cryptocurrency exchanges avoid transferring cryptocurrencies between customers. Instead, they simply record entries in a central database. This makes sense because actual “on chain” transactions can be particularly expensive for cryptocurrencies like bitcoin or Ethereum. If all speculators needed to actually receive their bitcoins, it would make clear that its value proposition as a currency simply doesn’t exist, as the already strained system would grind to a halt.
And, of course, law enforcement can already target criminals’ bitcoin directly. An example of this just occurred, when US law enforcement was able to seize 85% of the $4 million ransom Colonial Pipeline paid to the criminal organization DarkSide. That the bitcoin had lost more than 30% of its value by the time the seizure occurred is just one more reminder of how unworkable bitcoin is as a “store of value.”
There is no single silver bullet to disrupt either cryptocurrencies or ransomware. But enough little disruptions, a “death of a thousand cuts” through new and existing regulation, should make bitcoin no longer usable for ransomware. And if there’s no safe way for a criminal to collect the ransom, their business model becomes no longer viable.
This essay was written with Nicholas Weaver, and previously appeared on Slate.com.
[2021.07.27] Interesting research: “EvilModel: Hiding Malware Inside of Neural Network Models.”
Abstract: Delivering malware covertly and detection-evadingly is critical to advanced malware campaigns. In this paper, we present a method that delivers malware covertly and detection-evadingly through neural network models. Neural network models are poorly explainable and have a good generalization ability. By embedding malware into the neurons, malware can be delivered covertly with minor or even no impact on the performance of neural networks. Meanwhile, since the structure of the neural network models remains unchanged, they can pass the security scan of antivirus engines. Experiments show that 36.9MB of malware can be embedded into a 178MB-AlexNet model within 1% accuracy loss, and no suspicions are raised by antivirus engines in VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, utilizing neural networks becomes a growing trend in malware. We hope this work could provide a referenceable scenario for the defense against neural-network-assisted attacks.
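The paper’s actual embedding scheme is more involved, but the core idea, hiding payload bytes in the low-order bits of floating-point weights where they barely perturb the model, can be sketched in a few lines. The weights and payload below are made up; a real model has millions of weights, giving megabytes of capacity.

```python
import struct

# Toy sketch of the EvilModel idea: stash one payload byte in the
# least-significant byte of each 32-bit float weight. Only low mantissa
# bits change, so each weight moves by at most a few hundred ULPs --
# which is why model accuracy barely degrades.

def embed(weights, payload):
    out = []
    for w, b in zip(weights, payload):
        raw = bytearray(struct.pack("<f", w))
        raw[0] = b                       # overwrite least-significant byte
        out.append(struct.unpack("<f", bytes(raw))[0])
    return out + weights[len(payload):]  # remaining weights untouched

def extract(weights, n):
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])

weights = [0.12345, -0.9876, 0.5555, 0.001]
stego = embed(weights, b"hi")
print(extract(stego, 2))                                       # b'hi'
print(max(abs(a - b) for a, b in zip(weights, stego)) < 1e-4)  # True
```

The model file still parses as a perfectly valid network, which is exactly why structure-based antivirus scans miss it.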
Monsignor Jeffrey Burrill was general secretary of the US Conference of Catholic Bishops (USCCB), effectively the highest-ranking priest in the US who is not a bishop, before records of Grindr usage obtained from data brokers were correlated with his apartment, place of work, vacation home, family members’ addresses, and more.
The data that resulted in Burrill’s ouster was reportedly obtained through legal means. Mobile carriers sold—and still sell—location data to brokers who aggregate it and sell it to a range of buyers, including advertisers, law enforcement, roadside services, and even bounty hunters. Carriers were caught in 2018 selling real-time location data to brokers, drawing the ire of Congress. But after carriers issued public mea culpas and promises to reform the practice, investigations have revealed that phone location data is still popping up in places it shouldn’t. This year, T-Mobile even broadened its offerings, selling customers’ web and app usage data to third parties unless people opt out.
The publication that revealed Burrill’s private app usage, The Pillar, a newsletter covering the Catholic Church, did not say exactly where or how it obtained Burrill’s data. But it did say how it de-anonymized aggregated data to correlate Grindr app usage with a device that appears to be Burrill’s phone.
The Pillar says it obtained 24 months’ worth of “commercially available records of app signal data” covering portions of 2018, 2019, and 2020, which included records of Grindr usage and locations where the app was used. The publication zeroed in on addresses Burrill was known to frequent and singled out a device identifier that appeared at those locations. Key locations included Burrill’s office at the USCCB, his USCCB-owned residence, and USCCB meetings and events in other cities where he was in attendance. The analysis also looked at other locations farther afield, including his family lake house, his family members’ residences, and an apartment in his Wisconsin hometown where he reportedly has lived.
Location data is not anonymous. It cannot be made anonymous. I hope stories like these will teach people that.
[2021.07.29] A teenager on an airplane sent a photo of a replica gun via AirDrop to everyone who had their settings configured to receive unsolicited photos from strangers. This caused a three-hour delay as the plane—still at the gate—was evacuated and searched.
The teen was not allowed to reboard. I can’t find any information about whether he was charged with any of those vague “terrorist threat” crimes.
It’s been a long time since we’ve had one of these sorts of overreactions.
Abstract: Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user’s credentials gives attackers unfettered access to all of the user’s photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard format image like JPEG that is compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP’s key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, but using the cloud photo service as a QR code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use and provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service.
[2021.07.30] The time has come for me to find a new home for my (paper) cryptography library. It’s about 150 linear feet of books, conference proceedings, journals, and monographs—mostly from the 1980s, 1990s, and 2000s.
My preference is that it go to an educational institution, but I will consider a corporate or personal home if that’s the only option available. If you think you can break it up and sell it, I’ll consider that as a last resort. The new owner pays all packaging and shipping costs, and possibly a purchase price depending on who you are and what you want to do with the library.
If you are interested, please email me. I can send photos.
EDITED TO ADD (8/1): I am talking with two universities and the Internet Archive. It will find a good home. Thank you all for your suggestions.
A sophisticated telecommunications satellite that can be completely repurposed while in space has launched.
Because the satellite can be reprogrammed in orbit, it can respond to changing demands during its lifetime.
The satellite can detect and characterise any rogue emissions, enabling it to respond dynamically to accidental interference or intentional jamming.
We can assume strong encryption, and good key management. Still, seems like a juicy target for other governments.
Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.
Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.
Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.
[2021.08.05] The facts aren’t news, but Zoom will pay $85M—to the class-action attorneys, and to users—for lying to users about end-to-end encryption, and for giving user data to Facebook and Google without consent.
The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.
Abstract: A master face is a face image that passes face-based identity-authentication for a large portion of the population. These faces can be used to impersonate, with a high probability of success, any user, without having access to any user-information. We optimize these faces, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator. Multiple evolutionary strategies are compared, and we propose a novel approach that employs a neural network in order to direct the search in the direction of promising samples, without adding fitness evaluations. The results we present demonstrate that it is possible to obtain a high coverage of the population (over 40%) with less than 10 master faces, for three leading deep face recognition systems.
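The evolutionary search the paper describes can be illustrated with a toy stand-in: instead of StyleGAN latents scored against a real face-recognition model, mutate a plain vector and keep the child whenever it “matches” at least as many simulated users. Everything below (dimension, threshold, mutation size, user count) is invented purely to show the search loop.

```python
import random

# Toy (1+1) evolution strategy in a stand-in "latent space." The real attack
# evolves StyleGAN latents scored by a deep face-recognition system; here
# fitness is just how many simulated users fall within a match threshold.
random.seed(0)
users = [[random.gauss(0, 1) for _ in range(8)] for _ in range(200)]

def matches(candidate, user, threshold=2.5):
    # Euclidean distance stands in for a face-embedding similarity score.
    return sum((c - u) ** 2 for c, u in zip(candidate, user)) ** 0.5 < threshold

def coverage(candidate):
    return sum(matches(candidate, u) for u in users)

best = start = [random.gauss(0, 1) for _ in range(8)]
best_score = coverage(best)
for _ in range(300):
    child = [x + random.gauss(0, 0.2) for x in best]  # Gaussian mutation
    score = coverage(child)
    if score >= best_score:                           # keep non-worse children
        best, best_score = child, score

print(best_score >= coverage(start))  # True: the acceptance rule never regresses
```

The unsettling result in the paper is how few such optimized candidates are needed: under ten “faces” to cover over 40% of the population against real recognition systems.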
Researchers at the security consultancy Dolos Group, hired to test the security of one client’s network, received a new Lenovo computer preconfigured to use the standard security stack for the organization. They received no test credentials, configuration details, or other information about the machine.
They were not only able to get into the BitLocker-encrypted computer, but then use the computer to get into the corporate network.
It’s the “evil maid attack.” It requires physical access to your computer, but you leave it in your hotel room all the time when you go out to dinner.
Original blog post.
[2021.08.10] Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.
There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.
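Stripped of its cryptography, the first feature is a match of image hashes against a database of known material. Apple’s real system uses a perceptual hash (NeuralHash) with threshold secret sharing and blinded on-device matching; this toy sketch substitutes an exact SHA-256 over raw bytes just to show the match-against-a-known-list structure, with made-up image bytes.

```python
import hashlib

# Greatly simplified sketch of hash-based scanning. A real perceptual hash
# also matches near-duplicates (resized, re-encoded images), which an exact
# cryptographic hash like this deliberately does not.
KNOWN_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_upload(image_bytes):
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES       # True -> flagged for review

print(scan_before_upload(b"known-bad-image-bytes"))  # True
print(scan_before_upload(b"vacation-photo-bytes"))   # False
```

The policy concern is visible even in this toy: whoever controls `KNOWN_HASHES` controls what gets flagged, and nothing in the mechanism restricts the list to CSAM.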
This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:
Does this break end-to-end encryption in Messages?
No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.
Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.
Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.
EDITED TO ADD: This is a really good write-up of the problems.
EDITED TO ADD: Alex Stamos comments.
An open letter to Apple criticizing the project.
A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)
EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.
EDITED TO ADD (8/14): Apple released a threat model.
[2021.08.11] Cobalt Strike is a security tool, used by penetration testers to simulate network attackers. But it’s also used by attackers—from criminals to governments—to automate their own attacks. Researchers have found a vulnerability in the product.
The main components of the security tool are the Cobalt Strike client—also known as a Beacon—and the Cobalt Strike team server, which sends commands to infected computers and receives the data they exfiltrate. An attacker starts by spinning up a machine running Team Server that has been configured to use specific “malleability” customizations, such as how often the client is to report to the server or specific data to periodically send.
Then the attacker installs the client on a targeted machine after exploiting a vulnerability, tricking the user or gaining access by other means. From then on, the client will use those customizations to maintain persistent contact with the machine running the Team Server.
The link connecting the client to the server is called the web server thread, which handles communication between the two machines. Chief among the communications are “tasks” servers send to instruct clients to run a command, get a process list, or do other things. The client then responds with a “reply.”
Researchers at security firm SentinelOne recently found a critical bug in the Team Server that makes it easy to knock the server offline. The bug works by sending a server fake replies that “squeeze every bit of available memory from the C2’s web server thread….”
It’s a pretty serious vulnerability, and there’s already a patch available. But—and this is the interesting part—that patch is available to licensed users, which attackers often aren’t. It’ll be a while before that patch filters down to the pirated copies of the software, and that time window gives defenders an opportunity. They can simulate a Cobalt Strike client, and leverage this vulnerability to reply to servers with messages that cause the server to crash.
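The underlying bug class, trusting an attacker-controlled length field in a “reply,” is worth illustrating. This is not Cobalt Strike’s actual wire format; it’s a hypothetical parser showing the unsafe pattern and the obvious fix.

```python
import struct

# Hypothetical reply format: 4-byte big-endian length prefix, then payload.
# The unsafe parser allocates whatever the client claims -- the bug class
# behind the Team Server memory-exhaustion DoS.

def read_reply_unsafe(packet):
    (claimed_len,) = struct.unpack(">I", packet[:4])
    return bytearray(claimed_len)        # attacker-controlled allocation

def read_reply_safe(packet, max_len=1 << 20):
    (claimed_len,) = struct.unpack(">I", packet[:4])
    if claimed_len > max_len:
        raise ValueError("reply length exceeds limit")
    return bytearray(claimed_len)

pkt = struct.pack(">I", 64) + b"\x00" * 64
print(len(read_reply_safe(pkt)))  # 64; a forged header could claim gigabytes
```

Flood the unsafe version with forged headers claiming enormous lengths and the server’s web thread exhausts its memory, which is essentially what the fake-reply attack does.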
The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.
While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.
It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones, it’s that they can be generated at much greater scale.
[2021.08.14] This is a current list of where and when I am scheduled to speak:
- I’m speaking (via Internet) at SHIFT Business Festival in Finland, August 25-26, 2021.
- I’ll be speaking at an Informa event on September 14, 2021. Details to come.
- I’m keynoting CIISec Live—an all-online event—September 15-16, 2021.
- I’m speaking at the Cybersecurity and Data Privacy Law Conference in Plano, Texas, USA, September 22-23, 2021.
The list is maintained on this page.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2021 by Bruce Schneier.