Friday Squid Blogging: Underwater Cameras for Observing Squid
Interesting research paper.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Russia’s Sunburst cyberespionage campaign, discovered late last year, impacted more than 100 large companies and US federal agencies, including the Treasury, Energy, Justice, and Homeland Security departments. A crucial part of the Russians’ success was their ability to move through these organizations by compromising cloud and local network identity systems to then access cloud accounts and pilfer emails and files.
Hackers said by the US government to have been working for the Kremlin targeted a widely used Microsoft cloud service that synchronizes user identities. The hackers stole security certificates to create their own identities, which allowed them to bypass safeguards such as multifactor authentication and gain access to Office 365 accounts, impacting thousands of users at the affected companies and government agencies.
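To see why stolen signing certificates are so powerful, here is a toy Python sketch of the trust relationship — not real SAML or AD FS code; the key handling and claims format are invented for illustration. The relying service checks only the signature on an identity assertion, so whoever holds the signing key can mint a valid identity for anyone, and the multifactor check at the identity provider never comes into play:

```python
# Toy model of token forging with a stolen signing key. Not real SAML/AD FS
# code; names and the claims format are invented for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The identity provider's token-signing key pair. In the Sunburst campaign,
# attackers obtained the private half; the service only needs the public half.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
verification_key = signing_key.public_key()

def issue_token(key, claims: bytes) -> bytes:
    # Normally run only after the user authenticates (including any MFA step).
    return key.sign(claims, padding.PKCS1v15(), hashes.SHA256())

def service_accepts(pub, claims: bytes, signature: bytes) -> bool:
    # The relying service validates nothing except the signature.
    try:
        pub.verify(signature, claims, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Whoever holds the signing key can mint a token for any identity,
# skipping the identity provider -- and its MFA check -- entirely.
forged = issue_token(signing_key, b"subject=admin@agency.example")
assert service_accepts(verification_key, b"subject=admin@agency.example", forged)
```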
It wasn’t the first time cloud services were the focus of a cyberattack, and it certainly won’t be the last. Cloud weaknesses were also critical in a 2019 breach at Capital One. There, an Amazon Web Services cloud vulnerability, compounded by Capital One’s own struggle to properly configure a complex cloud service, led to the disclosure of tens of millions of customer records, including credit card applications, Social Security numbers, and bank account information.
This trend of attacks on cloud services by criminals, hackers, and nation states is growing as cloud computing takes over worldwide as the default model for information technologies. Leaked data is bad enough, but disruption to the cloud, even an outage at a single provider, could quickly cost the global economy billions of dollars a day.
Cloud computing is an important source of risk both because it has quickly supplanted traditional IT and because it concentrates ownership of design choices at a very small number of companies. First, cloud is increasingly the default mode of computing for organizations, meaning ever more users and critical data from national intelligence and defense agencies ride on these technologies. Second, cloud computing services, especially those supplied by the world’s four largest providers—Amazon, Microsoft, Alibaba, and Google—concentrate key security and technology design choices inside a small number of organizations. The consequences of bad decisions or poorly made trade-offs can quickly scale to hundreds of millions of users.
The cloud is everywhere. Some cloud companies provide software as a service, support your Netflix habit, or carry your Slack chats. Others provide computing infrastructure like business databases and storage space. The largest cloud companies provide both.
The cloud can be deployed in several different ways, each of which shifts the balance of responsibility for the security of this technology. But the cloud provider plays an important role in every case. Choices the provider makes in how these technologies are designed, built, and deployed influence the user’s security, yet the user has very little influence over them. And if Google or Amazon has a vulnerability in its servers, which you are unlikely to know about and have no control over, you suffer the consequences.
The problem is one of economics. On the surface, it might seem that competition between cloud companies gives them an incentive to invest in their users’ security. But several market failures get in the way of that ideal. First, security is largely an externality for these cloud companies, because the losses due to data breaches are largely borne by their users. As long as a cloud provider isn’t losing customers in droves, which generally doesn’t happen after a security incident, it is incentivized to underinvest in security. Additionally, data shows that investors don’t punish the cloud service companies either: stock price dips after a public security breach are both small and temporary.
Second, public information about cloud security generally doesn’t share the design trade-offs involved in building these cloud services or provide much transparency about the resulting risks. While cloud companies have to publicly disclose copious amounts of security design and operational information, it can be impossible for consumers to understand which threats the cloud services are taking into account, and how. This lack of understanding makes it hard to assess a cloud service’s overall security. As a result, customers and users aren’t able to differentiate between secure and insecure services, so they don’t base their buying and use decisions on it.
Third, cybersecurity is complex—and even more complex when the cloud is involved. For a customer like a company or government agency, the security dependencies of various cloud and on-premises network systems and services can be subtle and hard to map out. This means that users can’t adequately assess the security of cloud services or how they will interact with their own networks. This is a classic “lemons market” in economics, and the result is that cloud providers offer variable levels of security, as documented by Dan Geer, the chief information security officer for In-Q-Tel, and Wade Baker, a professor at Virginia Tech’s College of Business, when they looked at the prevalence of severe security findings at the top 10 largest cloud providers. Yet most consumers are none the wiser.
The result is a market failure where cloud service providers don’t compete to provide the best security for their customers and users at the lowest cost. Instead, cloud companies take the chance that they won’t get hacked, and past experience tells them they can weather the storm if they do. This kind of decision-making and priority-setting takes place at the executive level, of course, and doesn’t reflect the dedication and technical skill of product engineers and security specialists. The effect of this underinvestment is pernicious, however, by piling on risk that’s largely hidden from users. Widespread adoption of cloud computing carries that risk to an organization’s network, to its customers and users, and, in turn, to the wider internet.
This aggregation of cybersecurity risk creates a national security challenge. Policymakers can help address the challenge by setting clear expectations for the security of cloud services—and for making decisions and design trade-offs about that security transparent. The Biden administration, including newly nominated National Cyber Director Chris Inglis, should lead an interagency effort to work with cloud providers to review their threat models and evaluate the security architecture of their various offerings. This effort to require greater transparency from cloud providers and exert more scrutiny of their security engineering efforts should be accompanied by a push to modernize cybersecurity regulations for the cloud era.
The Federal Risk and Authorization Management Program (FedRAMP), which is the principal US government program for assessing the risk of cloud services and authorizing them for use by government agencies, would be a prime vehicle for these efforts. A recent executive order outlines several steps to make FedRAMP faster and more responsive. But the program is still focused largely on the security of individual services rather than the cloud vendors’ deeper architectural choices and threat models. Congressional action should reinforce and extend the executive order by adding new obligations for vendors to provide transparency about design trade-offs, threat models, and resulting risks. These changes could help transform FedRAMP into a more effective tool of security governance even as it becomes faster and more efficient.
Cloud providers have become important national infrastructure. Not since the heights of the mainframe era between the 1960s and early 1980s has the world witnessed computing systems of such complexity used by so many but designed and created by so few. The security of this infrastructure demands greater transparency and public accountability—if only to match the consequences of its failure.
This essay was written with Trey Herr, and previously appeared in Foreign Policy.
Really good long article about the Chinese hacking of RSA, Inc. They were able to get copies of the seed values to the SecurID authentication token, a harbinger of supply-chain attacks to come.
Apostle seems to be a new strain of malware that destroys data.
In a post published Tuesday, SentinelOne researchers said they assessed with high confidence that based on the code and the servers Apostle reported to, the malware was being used by a newly discovered group with ties to the Iranian government. While a ransomware note the researchers recovered suggested that Apostle had been used against a critical facility in the United Arab Emirates, the primary target was Israel.
This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy: the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.
This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.
As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.
That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies (Fluent, Opt-Intelligence, and React2Media) agreed to pay nearly $4 million in fines.
The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.
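That crudeness is what made detection possible. Here’s a minimal Python sketch of the idea — the substitution table and sample comments are invented, not taken from the New York report: normalize the trivial variations, and the “unique” comments collapse into identical fingerprints.

```python
# Minimal near-duplicate detector: canonicalize trivial substitutions and
# whitespace, then hash. Identical fingerprints expose a shared template.
# The substitution table and sample comments are illustrative only.
import hashlib
import re

SUBSTITUTIONS = {
    "federal communications commission": "fcc",
}

def fingerprint(comment: str) -> str:
    text = re.sub(r"\s+", " ", comment.lower().strip())
    for phrase, canonical in SUBSTITUTIONS.items():
        text = text.replace(phrase, canonical)
    return hashlib.sha256(text.encode()).hexdigest()

comments = [
    "The FCC should repeal net neutrality.",
    "The Federal Communications Commission should repeal net neutrality.",
    "the fcc   should repeal net neutrality.",
]
# All three "variants" normalize to the same fingerprint.
assert len({fingerprint(c) for c in comments}) == 1
```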
Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.
Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.
OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.
When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there isn’t a public that they can pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.
We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.
This essay was written with Henry Farrell, and previously appeared in the Washington Post.
EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it’s pretty good, and will continue to get better. Renée DiResta has also written about this.
This paper is about a lower-tech version of this threat. Also this.
EDITED TO ADD: Another essay on the same topic.
Make sure they’re dead.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
This seems to be a new tactic:
Emsisoft has identified two distinct tactics. In the first, hackers encrypt data with ransomware A and then re-encrypt that data with ransomware B. The other path involves what Emsisoft calls a “side-by-side encryption” attack, in which attackers encrypt some of an organization’s systems with ransomware A and others with ransomware B. In that case, data is only encrypted once, but a victim would need both decryption keys to unlock everything. The researchers also note that in this side-by-side scenario, attackers take steps to make the two distinct strains of ransomware look as similar as possible, so it’s more difficult for incident responders to sort out what’s going on.
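The first tactic is just layered encryption. A toy Python sketch — using Fernet purely for illustration; real ransomware uses its own key management — shows the property that matters for victims: the layers come off in reverse order, and one key alone still leaves ciphertext.

```python
# Toy model of "double encryption": strain B's layer must be removed before
# strain A's, so a victim who buys only one key still has ciphertext.
from cryptography.fernet import Fernet

key_a, key_b = Fernet.generate_key(), Fernet.generate_key()
strain_a, strain_b = Fernet(key_a), Fernet(key_b)

data = b"quarterly-financials.xlsx contents"
once = strain_a.encrypt(data)    # ransomware A encrypts the data
twice = strain_b.encrypt(once)   # ransomware B encrypts A's ciphertext

# One key only peels one layer; both keys, in order, recover the data.
assert strain_b.decrypt(twice) == once                      # still A-encrypted
assert strain_a.decrypt(strain_b.decrypt(twice)) == data    # both keys needed
```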
Bizarro is a new banking trojan that is stealing financial information and crypto wallets.
…the program can be delivered in a couple of ways—either via malicious links contained within spam emails, or through a trojanized app. Using these sneaky methods, trojan operators will implant the malware onto a target device, where it will install a sophisticated backdoor that “contains more than 100 commands and allows the attackers to steal online banking account credentials,” the researchers write.
The backdoor has numerous commands built in to allow manipulation of a targeted individual, including keystroke loggers that allow for harvesting of personal login information. In some instances, the malware can allow criminals to commandeer a victim’s crypto wallet, too.
Research report.
Good investigative reporting on how Apple is participating in and assisting with Chinese censorship and surveillance.
EDITED TO ADD (6/14): Good commentary.
A lot of Russian malware—the malware that targeted the Colonial Pipeline, for example—won’t install on computers with a Cyrillic keyboard installed. Brian Krebs wonders if this could be a useful defense:
In Russia, for example, authorities there generally will not initiate a cybercrime investigation against one of their own unless a company or individual within the country’s borders files an official complaint as a victim. Ensuring that no affiliates can produce victims in their own countries is the easiest way for these criminals to stay off the radar of domestic law enforcement agencies.
[…]
DarkSide, like a great many other malware strains, has a hard-coded do-not-install list of countries which are the principal members of the Commonwealth of Independent States (CIS)—former Soviet satellites that mostly have favorable relations with the Kremlin.
[…]
Simply put, countless malware strains will check for the presence of one of these languages on the system, and if they’re detected the malware will exit and fail to install.
[…]
Will installing one of these languages keep your Windows computer safe from all malware? Absolutely not. There is plenty of malware that doesn’t care where in the world you are. And there is no substitute for adopting a defense-in-depth posture, and avoiding risky behaviors online.
But is there really a downside to taking this simple, free, prophylactic approach? None that I can see, other than perhaps a sinking feeling of capitulation. The worst that could happen is that you accidentally toggle the language settings and all your menu options are in Russian.
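For the curious, the check these malware strains perform is straightforward to reproduce. Here’s a Windows-only Python sketch using the Win32 GetKeyboardLayoutList call; the set of language IDs is a partial, illustrative list (Russian is 0x0419), not the full roster any particular strain uses. A defender could run something like this to confirm the decoy layout is actually visible to such a check:

```python
# Enumerate installed keyboard layouts on Windows and look for CIS language
# IDs -- the same signal many of these malware strains key off of.
# Illustrative subset: Russian 0x0419, Ukrainian 0x0422, Belarusian 0x0423.
import ctypes

CIS_LANG_IDS = {0x0419, 0x0422, 0x0423}

def cis_layout_installed() -> bool:
    user32 = ctypes.WinDLL("user32", use_last_error=True)
    count = user32.GetKeyboardLayoutList(0, None)  # ask how many layouts exist
    layouts = (ctypes.c_void_p * count)()
    user32.GetKeyboardLayoutList(count, layouts)
    # The low word of each HKL handle is the Windows language identifier.
    return any((hkl or 0) & 0xFFFF in CIS_LANG_IDS for hkl in layouts)

if __name__ == "__main__":
    print("CIS keyboard layout present:", cis_layout_installed())
```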
Most US critical infrastructure is run by private corporations. This has major security implications, because it puts a random power company in, say, Ohio up against the Russian cyber command, which isn’t a fair fight.
When this problem is discussed, people regularly quote the statistic that 85% of US critical infrastructure is in private hands. It’s a handy number, and matches our intuition. Still, I have never been able to find a factual basis, or anyone who knows where the number comes from. Paul Rosenzweig investigates, and reaches the same conclusion.
So we don’t know the percentage, but I think we can safely say that it’s a lot.
A classic.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
Modern ransomware has two dimensions: pay to get your data back, and pay not to have your data dumped on the Internet. The DC police are the victims of this ransomware, and the criminals have just posted personnel records—”including the results of psychological assessments and polygraph tests; driver’s license images; fingerprints; social security numbers; dates of birth; and residential, financial, and marriage histories”—for two dozen police officers.
The negotiations don’t seem to be doing well. The criminals want $4M. The DC police offered them $100,000.
The Colonial Pipeline is another current high-profile ransomware victim. (Brian Krebs has some good information on DarkSide, the criminal group behind that attack.) So is Vastaamo, a Finnish mental health clinic. Criminals contacted the individual patients and demanded payment, and then dumped their personal psychological information online.
An industry group called the Institute for Security and Technology (no, I haven’t heard of it before, either) just released a comprehensive report on combating ransomware. It has a “comprehensive plan of action,” which isn’t much different from anything most of us can propose. Solving this is not easy. Ransomware is big business, made possible by insecure networks that allow criminals to gain access to networks in the first place, and cryptocurrencies that allow for payments that governments cannot interdict. Ransomware has become the most profitable cybercrime business model, and until we solve those two problems, that’s not going to change.
President Biden signed an executive order to improve government cybersecurity, setting new security standards for software sold to the federal government.
For the first time, the United States will require all software purchased by the federal government to meet, within six months, a series of new cybersecurity standards. Although the companies would have to “self-certify,” violators would be removed from federal procurement lists, which could kill their chances of selling their products on the commercial market.
I’m a big fan of these sorts of measures. The US government is a big enough market that vendors will try to comply with procurement regulations, and the improvements will benefit all customers of the software.
EDITED TO ADD (5/16): Good analysis.
I have 80 copies of my 2003 book Beyond Fear available at the very cheap price of $5 plus shipping. Note that there is a 20% chance that your book will have a “BT Counterpane” sticker on the front cover.
Order your signed copy here.
Microsoft researchers just released an open-source automation tool for security testing AI systems: “Counterfit.” Details on their blog.
This is a major story: a probably Russian cybercrime group called DarkSide shut down the Colonial Pipeline in a ransomware attack. The pipeline supplies much of the East Coast. This is the new and improved ransomware attack: the hackers stole nearly 100 gig of data, and are threatening to publish it. The White House has declared a state of emergency and has created a task force to deal with the problem, but it’s unclear what they can do. This is bad; our supply chains are so tightly coupled that this kind of thing can have disproportionate effects.
EDITED TO ADD (5/12): It seems that the billing system was attacked, and not the physical pipeline itself.
This is a newly unclassified NSA history of its reaction to academic cryptography in the 1970s: “NSA Comes Out of the Closet: The Debate over Public Cryptography in the Inman Era,” Cryptographic Quarterly, Spring 1996, author still classified.
A town in Japan built a giant squid statue with its COVID relief grant.
One local told the Chunichi Shimbun newspaper that while the statue may be effective in the long run, the money could have been used for “urgent support,” such as for medical staff and long-term care facilities.
But a spokesperson for the town told Fuji News Network that the statue would be a tourist attraction and part of a long term strategy to help promote Noto’s famous flying squid.
I am impressed by the town’s sense of priorities.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
A new draft of an Australian educational curriculum proposes teaching cybersecurity to children as young as five:
The proposed curriculum aims to teach five-year-old children—an age at which Australian kids first attend school—not to share information such as date of birth or full names with strangers, and that they should consult parents or guardians before entering personal information online.
Six-and-seven-year-olds will be taught how to use usernames and passwords, and the pitfalls of clicking on pop-up links to competitions.
By the time kids are in third and fourth grade, they’ll be taught how to identify the personal data that may be stored by online services, and how that can reveal their location or identity. Teachers will also discuss “the use of nicknames and why these are important when playing online games.”
By late primary school, kids will be taught to be respectful online, including “responding respectfully to other people’s opinions even if they are different from personal opinions.”
I have mixed feelings about this. Norms around these things are changing so fast, and it’s not likely that we in the older generation will get to dictate what the younger generation does. But these sorts of online privacy conversations are worth having around the same time children learn about privacy in other contexts.
Nice video of a talk by Chris Shore on the history of Colossus.
There’s new research that demonstrates security vulnerabilities in all of the AMD and Intel chips with micro-op caches, including the ones that were specifically engineered to be resistant to the Spectre/Meltdown attacks of three years ago.
The new line of attacks exploits the micro-op cache: an on-chip structure that speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process, as the team explains in a writeup from the University of Virginia. Even though the processor quickly realizes its mistake and does a U-turn to go down the right path, attackers can get at the private data while the processor is still heading in the wrong direction.
It seems really difficult to exploit these vulnerabilities. We’ll need some more analysis before we understand what we have to patch and how.
This is an impressive hack:
Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH have found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. It would be possible for an attacker to unlock the doors and trunk, change seat positions, and change both steering and acceleration modes—in short, pretty much what a driver pressing various buttons on the console can do. This attack does not yield drive control of the car though.
That last sentence is important.
News article.
The person behind the Bitcoin Fog was identified and arrested. Bitcoin Fog was an anonymization service: for a fee, it mixed a bunch of people’s bitcoins up so that it was hard to figure out where any individual coins came from. It ran for ten years.
Identifying the person behind Bitcoin Fog serves as an illustrative example of how hard it is to be anonymous online in the face of a competent police investigation:
Most remarkable, however, is the IRS’s account of tracking down Sterlingov using the very same sort of blockchain analysis that his own service was meant to defeat. The complaint outlines how Sterlingov allegedly paid for the server hosting of Bitcoin Fog at one point in 2011 using the now-defunct digital currency Liberty Reserve. It goes on to show the blockchain evidence that identifies Sterlingov’s purchase of that Liberty Reserve currency with bitcoins: He first exchanged euros for the bitcoins on the early cryptocurrency exchange Mt. Gox, then moved those bitcoins through several subsequent addresses, and finally traded them on another currency exchange for the Liberty Reserve funds he’d use to set up Bitcoin Fog’s domain.
Based on tracing those financial transactions, the IRS says, it then identified Mt. Gox accounts that used Sterlingov’s home address and phone number, and even a Google account that included a Russian-language document on its Google Drive offering instructions for how to obscure Bitcoin payments. That document described exactly the steps Sterlingov allegedly took to buy the Liberty Reserve funds he’d used.
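At its core, that kind of tracing is graph traversal. Here’s a toy Python sketch with invented addresses and transactions — real chain analysis layers change-address heuristics, address clustering, and exchange records on top of this — showing the basic idea of walking forward from a known starting point:

```python
# Toy blockchain tracing: treat transactions as directed edges and walk
# forward from a known address. All addresses here are invented.
from collections import deque

# (sender, receiver) pairs standing in for on-chain transactions
transactions = [
    ("exchange-withdrawal", "addr-1"),
    ("addr-1", "addr-2"),
    ("addr-2", "second-exchange"),   # funds traded away here
    ("unrelated-a", "unrelated-b"),
]

def trace(start: str) -> set[str]:
    graph: dict[str, list[str]] = {}
    for src, dst in transactions:
        graph.setdefault(src, []).append(dst)
    seen, queue = {start}, deque([start])
    while queue:  # breadth-first walk over everything the funds touched
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Every address reachable from the first withdrawal; the unrelated pair is untouched.
print(trace("exchange-withdrawal"))
```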