
Power LED Side-Channel Attack

This is a clever new side-channel attack:

The first attack uses an Internet-connected surveillance camera to take a high-speed video of the power LED on a smart card reader, or of an attached peripheral device, during cryptographic operations. This technique allowed the researchers to pull a 256-bit ECDSA key off the same government-approved smart card used in Minerva. The other allowed the researchers to recover the private SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on the power LED of a USB speaker connected to the handset, in a similar way to how Hertzbleed pulled SIKE keys off Intel and AMD CPUs.

There are lots of limitations:

When the camera is 60 feet away, the room lights must be turned off, but they can be turned on if the surveillance camera is at a distance of about 6 feet. (An attacker can also use an iPhone to record the smart card reader power LED.) The video must be captured for 65 minutes, during which the reader must constantly perform the operation.

[…]

The attack assumes there is an existing side channel that leaks power consumption, timing, or other physical manifestations of the device as it performs a cryptographic operation.
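The core signal-processing step is turning frame-by-frame LED brightness into a power trace that can be correlated with the cryptographic operation. A minimal sketch (the function names, region of interest, and synthetic frames are my own illustration; the actual research additionally exploits the camera's rolling shutter to push the effective sampling rate well above the frame rate):

```python
import numpy as np

def led_trace(frames, roi):
    """Average the pixel intensity inside the LED region of interest
    (roi = (y0, y1, x0, x1)) for each video frame, yielding a crude
    proxy for the device's power consumption over time."""
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1].mean() for f in frames])

def normalize(trace):
    """Zero-mean, unit-variance trace, ready for correlation against
    hypothetical power models of the cryptographic operation."""
    return (trace - trace.mean()) / trace.std()

# Synthetic demo: eight gray frames whose LED patch brightens over time.
frames = [np.full((10, 10), 100.0 + t) for t in range(8)]
trace = normalize(led_trace(frames, (2, 5, 2, 5)))
print(trace.round(3))
```

The real attack then needs thousands of repeated operations (hence the 65-minute recording) to average out noise.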

So don’t expect this attack to recover keys in the real world anytime soon. But, still, really nice work.

More details from the researchers.

Posted on June 19, 2023 at 6:52 AM

Friday Squid Blogging: Squid Can Edit Their RNA

This is just crazy:

Scientists don’t yet know for sure why octopuses, and other shell-less cephalopods including squid and cuttlefish, are such prolific editors. Researchers are debating whether this form of genetic editing gave cephalopods an evolutionary leg (or tentacle) up or whether the editing is just a sometimes useful accident. Scientists are also probing what consequences the RNA alterations may have under various conditions.

I sometimes think that cephalopods are aliens that crash-landed on this planet eons ago.

Another article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on June 16, 2023 at 5:13 PM

Security and Human Behavior (SHB) 2023

I’m just back from the sixteenth Workshop on Security and Human Behavior, hosted by Alessandro Acquisti at Carnegie Mellon University in Pittsburgh.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The fifty or so attendees include psychologists, economists, computer security researchers, criminologists, sociologists, political scientists, designers, lawyers, philosophers, anthropologists, geographers, neuroscientists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. Short talks limit presenters’ ability to get into the boring details of their work, and the interdisciplinary audience discourages jargon.

For the past decade and a half, this workshop has been the most intellectually stimulating two days of my professional year. It influences my thinking in different and sometimes surprising ways, and has resulted in some unexpected collaborations.

And that’s what’s valuable. One of the most important outcomes of the event is new collaborations. Over the years, we have seen new interdisciplinary research between people who met at the workshop, and ideas and methodologies move from one field into another based on connections made at the workshop. This is why some of us have been coming back every year for over a decade.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is live blogging the talks. We are back 100% in person after two years of fully remote and one year of hybrid.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, thirteenth, fourteenth, and fifteenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio/video recordings of the sessions. Ross also maintains a good webpage of psychology and security resources.

It’s actually hard to believe that the workshop has been going on for this long, and that it’s still vibrant. We rotate between organizers, so next year is my turn in Cambridge (the Massachusetts one).

Posted on June 16, 2023 at 3:07 PM

On the Need for an AI Public Option

Artificial intelligence will bring great benefits to all of humanity. But do we really want to entrust this revolutionary technology solely to a small group of US tech companies?

Silicon Valley has produced no small number of moral disappointments. Google retired its “don’t be evil” pledge before firing its star ethicist. Self-proclaimed “free speech absolutist” Elon Musk bought Twitter in order to censor political speech, retaliate against journalists, and ease access to the platform for Russian and Chinese propagandists. Facebook lied about how it enabled Russian interference in the 2016 US presidential election and paid a public relations firm to blame Google and George Soros instead.

These and countless other ethical lapses should prompt us to consider whether we want to give technology companies further abilities to learn our personal details and influence our day-to-day decisions. Tech companies can already access our daily whereabouts and search queries. Digital devices monitor more and more aspects of our lives: We have cameras in our homes and heartbeat sensors on our wrists sending what they detect to Silicon Valley.

Now, tech giants are developing ever more powerful AI systems that don’t merely monitor you; they actually interact with you—and with others on your behalf. If searching on Google in the 2010s was like being watched on a security camera, then using AI in the late 2020s will be like having a butler. You will willingly include it in every conversation you have, everything you write, every item you shop for, every want, every fear, everything. It will never forget. And, despite your reliance on it, it will be surreptitiously working to further the interests of one of these for-profit corporations.

There’s a reason Google, Microsoft, Facebook, and other large tech companies are leading the AI revolution: Building a competitive large language model (LLM) like the one powering ChatGPT is incredibly expensive. It requires upward of $100 million in computational costs for a single model training run, in addition to access to large amounts of data. It also requires technical expertise, which, while increasingly open and available, remains heavily concentrated in a small handful of companies. Efforts to disrupt the AI oligopoly by funding start-ups are self-defeating as Big Tech profits from the cloud computing services and AI models powering those start-ups—and often ends up acquiring the start-ups themselves.

Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded-and-directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.

So far, the AI regulation debate in Washington has focused on the governance of private-sector activity—which the US Congress is in no hurry to advance. Congress should not only hurry up and push AI regulation forward but also go one step further and develop its own programs for AI. Legislators should reframe the AI debate from one about public regulation to one about public development.

The AI development program could be responsive to public input and subject to political oversight. It could be directed to respond to critical issues such as privacy protection, underpaid tech workers, AI’s horrendous carbon emissions, and the exploitation of unlicensed data. Compared to keeping AI in the hands of morally dubious tech companies, the public alternative is better both ethically and economically. And the switch should take place soon: By the time AI becomes critical infrastructure, essential to large swaths of economic activity and daily life, it will be too late to get started.

Other countries are already there. China has heavily prioritized public investment in AI research and development by betting on a handpicked set of giant companies that are ostensibly private but widely understood to be an extension of the state. The government has tasked Alibaba, Huawei, and others with creating products that support the larger ecosystem of state surveillance and authoritarianism.

The European Union is also aggressively pushing AI development. The European Commission already invests 1 billion euros per year in AI, with a plan to increase that figure to 20 billion euros annually by 2030. The money goes to a continent-wide network of public research labs, universities, and private companies jointly working on various parts of AI. The Europeans’ focus is on knowledge transfer, developing the technology sector, use of AI in public administration, mitigating safety risks, and preserving fundamental rights. The EU also continues to be at the cutting edge of aggressively regulating both data and AI.

Neither the Chinese nor the European model is necessarily right for the United States. State control of private enterprise remains anathema in American political culture and would struggle to gain mainstream traction. The tech companies—and their supporters in both US political parties—are opposed to robust public governance of AI. But Washington can take inspiration from China and Europe’s long-range planning and leadership on regulation and public investment. With boosters pointing to hundreds of trillions of dollars of global economic value associated with AI, the stakes of international competition are compelling. As in energy and medical research, which have their own federal agencies in the Department of Energy and the National Institutes of Health, respectively, there is a place for AI research and development inside government.

Beside the moral argument against letting private companies develop AI, there’s a strong economic argument in favor of a public option as well. A publicly funded LLM could serve as an open platform for innovation, helping any small business, nonprofit, or individual entrepreneur to build AI-assisted applications.

There’s also a practical argument. Building AI is within public reach because governments don’t need to own and operate the entire AI supply chain. Chip and computer production, cloud data centers, and various value-added applications—such as those that integrate AI with consumer electronics devices or entertainment software—do not need to be publicly controlled or funded.

One reason to be skeptical of public funding for AI is that it might result in lower quality and slower innovation, given greater ethical scrutiny, political constraints, and weaker incentives due to a lack of market competition. But even if that were the case, broader access to the most important technology of the 21st century would be worth the trade-off. And it is by no means certain that public AI must be at a disadvantage. The open-source community is proof that private companies are not always the most innovative.

Those who worry about the quality trade-off might suggest a public buyer model, whereby Washington licenses or buys private language models from Big Tech instead of developing them itself. But that doesn’t go far enough to ensure that the tools are aligned with public priorities and responsive to public needs. It would not give the public detailed insight into or control of the inner workings and training procedures for these models, and it would still require strict and complex regulation.

There is political will to take action to develop AI via public, rather than private, funds—but this does not yet equate to the will to create a fully public AI development agency. A task force created by Congress recommended in January a $2.6 billion federal investment in computing and data resources to prime the AI research ecosystem in the United States. But this investment would largely serve to advance the interests of Big Tech, leaving the opportunity for public ownership and oversight unaddressed.

Nonprofit and academic organizations have already created open-access LLMs. While these should be celebrated, they are not a substitute for a public option. Nonprofit projects are still beholden to private interests, even if they are benevolent ones. These private interests can change without public input, as when OpenAI effectively abandoned its nonprofit origins, and we can’t be sure that their founding intentions or operations will survive market pressures, fickle donors, and changes in leadership.

The US government is by no means a perfect beacon of transparency, a secure and responsible store of our data, or a genuine reflection of the public’s interests. But the risks of placing AI development entirely in the hands of demonstrably untrustworthy Silicon Valley companies are too high. AI will impact the public like few other technologies, so it should also be developed by the public.

This essay was written with Nathan Sanders, and appeared in Foreign Policy.

Posted on June 14, 2023 at 7:02 AM

Identifying the Idaho Killer

The New York Times has a long article on the investigative techniques used to identify the person who stabbed and killed four University of Idaho students.

Pay attention to the techniques:

The case has shown the degree to which law enforcement investigators have come to rely on the digital footprints that ordinary Americans leave in nearly every facet of their lives. Online shopping, car sales, carrying a cellphone, drives along city streets and amateur genealogy all played roles in an investigation that was solved, in the end, as much through technology as traditional sleuthing.

[…]

At that point, investigators decided to try genetic genealogy, a method that until now has been used primarily to solve cold cases, not active murder investigations. Among the growing number of genealogy websites that help people trace their ancestors and relatives via their own DNA, some allow users to select an option that permits law enforcement to compare crime scene DNA samples against the websites’ data.

A distant cousin who has opted into the system can help investigators build a family tree from crime scene DNA, triangulating to identify a potential perpetrator of a crime.

[…]

On Dec. 23, investigators sought and received Mr. Kohberger’s cellphone records. The results added more to their suspicions: His phone was moving around in the early morning hours of Nov. 13, but was disconnected from cell networks, perhaps turned off, in the two hours around when the killings occurred.
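The connectivity gap that drew investigators' attention is simple to detect once the records are in hand. A toy sketch (the timestamps and the one-hour threshold are hypothetical; real call-detail records are far richer):

```python
from datetime import datetime, timedelta

def connectivity_gaps(pings, min_gap=timedelta(hours=1)):
    """Return (start, end) intervals where consecutive network events
    are further apart than min_gap -- periods when the phone may have
    been turned off or out of coverage."""
    pings = sorted(pings)
    return [(a, b) for a, b in zip(pings, pings[1:]) if b - a > min_gap]

# Hypothetical tower pings on the morning in question.
pings = [datetime(2022, 11, 13, h, m) for h, m in
         [(1, 0), (2, 45), (2, 50), (5, 10), (5, 15)]]
for start, end in connectivity_gaps(pings):
    print(start, "->", end)
```

Of course, a gap proves only absence of signal, not intent; it becomes evidence only in combination with everything else.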

Posted on June 13, 2023 at 7:03 AM

AI-Generated Steganography

New research suggests that AIs can produce perfectly secure steganographic images:

Abstract: Steganography is the practice of encoding secret information into innocuous content in such a manner that an adversarial third party would not realize that there is hidden meaning. While this problem has classically been studied in security literature, recent advances in generative models have led to a shared interest among security and machine learning researchers in developing scalable steganography techniques. In this work, we show that a steganography procedure is perfectly secure under Cachin’s (1998) information-theoretic model of steganography if and only if it is induced by a coupling. Furthermore, we show that, among perfectly secure procedures, a procedure is maximally efficient if and only if it is induced by a minimum entropy coupling. These insights yield what are, to the best of our knowledge, the first steganography algorithms to achieve perfect security guarantees with non-trivial efficiency; additionally, these algorithms are highly scalable. To provide empirical validation, we compare a minimum entropy coupling-based approach to three modern baselines—arithmetic coding, Meteor, and adaptive dynamic grouping—using GPT-2, WaveRNN, and Image Transformer as communication channels. We find that the minimum entropy coupling-based approach achieves superior encoding efficiency, despite its stronger security constraints. In aggregate, these results suggest that it may be natural to view information-theoretic steganography through the lens of minimum entropy coupling.
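The central object here, a minimum entropy coupling, is a joint distribution whose marginals match two given distributions while keeping the joint entropy as low as possible. A minimal sketch of the standard greedy approximation (this is not the paper's exact construction, and the steganographic encoder built on top of the coupling is omitted entirely):

```python
import heapq

def greedy_min_entropy_coupling(p, q):
    """Greedily approximate a minimum-entropy coupling of distributions
    p and q: repeatedly pair the largest remaining probability masses
    and transfer the smaller of the two. Returns a joint table whose
    marginals are p and q."""
    joint = {}
    hp = [(-v, i) for i, v in enumerate(p)]  # max-heaps via negation
    hq = [(-v, j) for j, v in enumerate(q)]
    heapq.heapify(hp)
    heapq.heapify(hq)
    while hp and hq:
        vp, i = heapq.heappop(hp)
        vq, j = heapq.heappop(hq)
        m = min(-vp, -vq)
        if m <= 1e-12:
            break
        joint[(i, j)] = joint.get((i, j), 0.0) + m
        # Push back whatever mass remains on either side.
        if -vp - m > 1e-12:
            heapq.heappush(hp, (vp + m, i))
        if -vq - m > 1e-12:
            heapq.heappush(hq, (vq + m, j))
    return joint

joint = greedy_min_entropy_coupling([0.5, 0.5], [0.7, 0.3])
print(joint)
```

Concentrating mass into few cells is what buys encoding efficiency, while the marginal constraints are what make the covertext distribution indistinguishable from normal output.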

News article.

EDITED TO ADD (6/13): Comments.

Posted on June 12, 2023 at 7:18 AM

Friday Squid Blogging: Light-Emitting Squid

It’s a Taningia danae:

Their arms are lined with two rows of sharp retractable hooks. And, like most deep-sea squid, they are adorned with light organs called photophores. They have some on the underside of their mantle. There are more facing upward, near one of their eyes. But it’s the photophores at the tip of two stubby arms that are truly unique. The size and shape of lemons, each nestled within a retractable lid like an eyeball in a socket, they are by far the largest photophores known to science.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on June 9, 2023 at 5:05 PM

Operation Triangulation: Zero-Click iPhone Malware

Kaspersky is reporting a zero-click iOS exploit in the wild:

Mobile device backups contain a partial copy of the filesystem, including some of the user data and service databases. The timestamps of the files, folders, and database records make it possible to roughly reconstruct the events happening to the device. The mvt-ios utility produces a sorted timeline of events in a file called “timeline.csv,” similar to a super-timeline used by conventional digital forensic tools.

Using this timeline, we were able to identify specific artifacts that indicate the compromise. This allowed us to move the research forward and to reconstruct the general infection sequence:

  • The target iOS device receives a message via the iMessage service, with an attachment containing an exploit.
  • Without any user interaction, the message triggers a vulnerability that leads to code execution.
  • The code within the exploit downloads several subsequent stages from the C&C server, which include additional exploits for privilege escalation.
  • After successful exploitation, a final payload is downloaded from the C&C server; it is a fully featured APT platform.
  • The initial message and the exploit in the attachment are deleted.

The malicious toolset does not support persistence, most likely due to the limitations of the OS. The timelines of multiple devices indicate that they may be reinfected after rebooting. The oldest traces of infection that we discovered happened in 2019. As of the time of writing in June 2023, the attack is ongoing, and the most recent version of the devices successfully targeted is iOS 15.7.

No attribution as of yet.
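The "super-timeline" idea Kaspersky describes can be sketched in a few lines: merge timestamped artifacts from disparate sources into one chronologically sorted CSV (the field names and records below are invented; mvt-ios's real output schema differs):

```python
import csv
import io

# Toy timeline: events from different forensic sources, unsorted.
events = [
    ("2023-06-01T10:05:00", "datausage.sqlite", "suspicious process traffic"),
    ("2023-06-01T10:03:00", "filesystem", "attachment file created"),
    ("2023-06-01T10:07:00", "filesystem", "attachment file deleted"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp", "source", "event"])
for row in sorted(events):  # ISO-8601 strings sort chronologically
    writer.writerow(row)
print(buf.getvalue())
```

Once everything is on one clock, anomalies like "file created, used, then deleted within minutes" stand out even when no single source looks suspicious on its own.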

Posted on June 9, 2023 at 7:12 AM

Paragon Solutions Spyware: Graphite

Paragon Solutions is yet another Israeli spyware company. Their product is called “Graphite,” and is a lot like NSO Group’s Pegasus. And Paragon is working with what seems to be US approval:

American approval, even if indirect, has been at the heart of Paragon’s strategy. The company sought a list of allied nations that the US wouldn’t object to seeing deploy Graphite. People with knowledge of the matter suggested 35 countries are on that list, though the exact nations involved could not be determined. Most were in the EU and some in Asia, the people said.

Remember when NSO Group was banned in the US a year and a half ago? The Drug Enforcement Administration uses Graphite.

We’re never going to reduce the power of these cyberweapons arms merchants by going after them one by one. We need to deal with the whole industry. And we’re not going to do it as long as the democracies of the world use their products as well.

Posted on June 8, 2023 at 7:30 AM

How Attorneys Are Harming Cybersecurity Incident Response

New paper: “Lessons Lost: Incident Response in the Age of Cyber Insurance and Breach Attorneys“:

Abstract: Incident Response (IR) allows victim firms to detect, contain, and recover from security incidents. It should also help the wider community avoid similar attacks in the future. In pursuit of these goals, technical practitioners are increasingly influenced by stakeholders like cyber insurers and lawyers. This paper explores these impacts via a multi-stage, mixed methods research design that involved 69 expert interviews, data on commercial relationships, and an online validation workshop. The first stage of our study established 11 stylized facts that describe how cyber insurance sends work to a small number of IR firms, drives down the fee paid, and appoints lawyers to direct technical investigators. The second stage showed that lawyers, when directing incident response, often: introduce legalistic contractual and communication steps that slow down incident response; advise IR practitioners not to write down remediation steps or to produce formal reports; and restrict access to any documents produced.

So we’re not able to learn from these breaches, because the attorneys are limiting what information becomes public. This is an argument for shielding companies from liability in exchange for making breach data public; it’s the sort of thing we do for airplane disasters.

EDITED TO ADD (6/13): A podcast interview with two of the authors.

Posted on June 7, 2023 at 7:06 AM
