Blog: July 2019 Archives

Another Attack Against Driverless Cars

In this piece of research, attackers successfully attack a driverless car system—Renault Captur’s “Level 0” autopilot (Level 0 systems advise human drivers but do not directly operate cars)—by following them with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot’s sensors.

Boing Boing post.

Posted on July 31, 2019 at 6:46 AM • 43 Comments

Wanted: Cybersecurity Imagery

Eli Sugarman of the Hewlett Foundation laments the sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google Image search for the term proves the point: It’s all white men in hoodies hovering menacingly over keyboards, green “Matrix”-style 1s and 0s, glowing locks and server racks, or some random combination of those elements—sometimes the hoodie-clad men even wear burglar masks. Each of these images fails to convey anything about either the importance or the complexity of the topic—or the huge stakes for governments, industry and ordinary people alike inherent in topics like encryption, surveillance and cyber conflict.

I agree that this is a problem. It’s not something I noticed until recently. I work in words. I think in words. I don’t use PowerPoint (or anything similar) when I give presentations. I don’t need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I constantly use visuals in my class. I made those same image searches, and I came up with similarly unacceptable results.

But unlike me, Hewlett is doing something about it. You can help: participate in the Cybersecurity Visuals Challenge.

EDITED TO ADD (8/5): News article. Slashdot thread.

Posted on July 29, 2019 at 6:15 AM • 44 Comments

Friday Squid Blogging: Humboldt Squid in Mexico Are Getting Smaller

The Humboldt squid are getting smaller:

Frawley and the other researchers found a flurry of factors that drove the jumbo squid’s demise. The Gulf of California historically cycled between warm-water El Niño conditions and cool-water La Niña phases. The warm El Niño waters were inhospitable to jumbo squid (more specifically, to the squid’s prey), but subsequent La Niñas would allow squid populations to recover. But recent years have seen a drought of La Niñas, resulting in increasingly and more consistently warm waters. Frawley calls it an “oceanographic drought,” and says that conditions like these will become more and more common with climate change. “But saying this specific instance is climate change is more than we can claim in the scope of our work,” he adds. “I’m not willing to make that connection absolutely.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on July 26, 2019 at 4:42 PM • 66 Comments

Insider Logic Bombs

Add to the “not very smart criminals” file:

According to court documents, Tinley provided software services for Siemens’ Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley’s files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called “logic bombs” that would trigger after a certain date, and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who’d fix the files for a fee.
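The mechanism is almost trivially simple. Here is a minimal, hypothetical sketch of the pattern in Python (the real scripts were spreadsheet macros; the trigger date, file name, and error message below are made up):

  import datetime
  import sys

  TRIGGER_DATE = datetime.date(2014, 5, 1)  # hypothetical trigger date

  def refresh_orders(remote_path):
      # Logic bomb: after the trigger date, fail in a way that looks like an ordinary bug.
      if datetime.date.today() >= TRIGGER_DATE:
          raise RuntimeError("unexpected error while refreshing order data")
      # Normal behavior: pull current orders from the remote document.
      with open(remote_path) as f:
          return f.read()

  if __name__ == "__main__":
      print(refresh_orders(sys.argv[1]))

The scheme worked only because nobody else ever read the scripts; a code review, or even a diff in version control, would likely have exposed the date check immediately.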

Posted on July 26, 2019 at 6:05 AM • 36 Comments

Software Developers and Security

According to a survey: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan.”

Another problem is that many companies don’t seem to take security seriously enough. Nearly 44% of those surveyed reported that they’re not judged on their security vulnerabilities.

Posted on July 25, 2019 at 6:17 AM • 52 Comments

Attorney General William Barr on Encryption Policy

Yesterday, Attorney General William Barr gave a major speech on encryption policy—what is commonly known as “going dark.” Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but argued that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability, a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation’s nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security (say, by way of illustration, one that protects against 99 percent of foreseeable threats), is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent, the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named “nerd harder.”

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having—not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about “consumer cybersecurity,” and not “nuclear launch codes.” This is true, but ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There’s no longer a difference between consumer tech and government tech—it’s all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA, the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms, especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE, which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003—which seems to have been an NSA operation—and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications: data in transit. The “going dark” debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr’s latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody—even at the cost of law-enforcement access—than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: More news articles.

EDITED TO ADD (7/28): Gen. Hayden comments.

EDITED TO ADD (7/30): Good response by Robert Graham.

Posted on July 24, 2019 at 6:43 AM • 70 Comments

Science Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn’t new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it “embarrassing.” I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats—which contributes to overall fear and overestimation of the risks. And that doesn’t help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Posted on July 23, 2019 at 6:27 AM • 43 Comments

Hackers Expose Russian FSB Cyberattack Projects

More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked since 2009 on a multitude of projects for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus—a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).
  • Nautilus-S—a project for deanonymizing Tor traffic with the help of rogue Tor servers.
  • Reward—a project to covertly penetrate P2P networks, like the one used for torrents.
  • Mentor—a project to monitor and search email communications on the servers of Russian companies.
  • Hope—a project to investigate the topology of the Russian internet and how it connects to other countries’ network.
  • Tax-3—a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state’s IT networks.

BBC Russia, who received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.

Posted on July 22, 2019 at 6:17 AM • 19 Comments

Identity Theft on the Job Market

Identity theft is getting more subtle: “My job application was withdrawn by someone pretending to be me”:

When Mr Fearn applied for a job at the company he didn’t hear back.

He said the recruitment team said they’d get back to him by Friday, but they never did.

At first, he assumed he was unsuccessful, but after emailing his contact there, it turned out someone had created a Gmail account in his name and asked the company to withdraw his application.

Mr Fearn said the talent assistant told him they were confused because he had apparently emailed them to withdraw his application on Wednesday.

“They forwarded the email, which was sent from an account using my name.”

He said he felt “really shocked and violated” to find out that someone had created an email account in his name just to tarnish his chances of getting a role.

This is about as low-tech as it gets. It’s trivially simple for me to open a new Gmail account using a random first and last name. But because people innately trust email, it works.

Posted on July 18, 2019 at 8:21 AM • 52 Comments

Zoom Vulnerability

The Zoom conferencing app has a vulnerability that allows someone to remotely take over the computer’s camera.

It’s a bad vulnerability, made worse by the fact that it remains even if you uninstall the Zoom app:

This vulnerability allows any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.

On top of this, this vulnerability would have allowed any webpage to DOS (Denial of Service) a Mac by repeatedly joining a user to an invalid call.

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.
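The underlying design flaw is that the Zoom client installed a web server on localhost that any webpage could talk to. Below is a rough sketch of what a drive-by page effectively does; the port (19421) is the one cited in the disclosure, but the endpoint path and parameter names are illustrative guesses, not Zoom’s documented API:

  # Any local process -- or, via an <img> tag, any webpage the victim visits --
  # can poke the helper server. No authentication, no user prompt.
  import urllib.request

  ZOOM_LOCAL_PORT = 19421      # port reported in the disclosure
  meeting_id = "000000000"     # attacker-chosen meeting number (placeholder)

  url = f"http://localhost:{ZOOM_LOCAL_PORT}/launch?action=join&confno={meeting_id}"
  try:
      urllib.request.urlopen(url, timeout=2)
  except OSError:
      print("no local Zoom helper server listening")

Because anything that can make a local HTTP request can trigger it, uninstalling the client did not help: the leftover server would reinstall Zoom and join the call, exactly as described above.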

Zoom didn’t take the vulnerability seriously:

This vulnerability was originally responsibly disclosed on March 26, 2019. This initial report included a proposed description of a ‘quick fix’ Zoom could have implemented by simply changing their server logic. It took Zoom 10 days to confirm the vulnerability. The first actual meeting about how the vulnerability would be patched occurred on June 11th, 2019, only 18 days before the end of the 90-day public disclosure deadline. During this meeting, the details of the vulnerability were confirmed and Zoom’s planned solution was discussed. However, I was very easily able to spot and describe bypasses in their planned fix. At this point, Zoom was left with 18 days to resolve the vulnerability. On June 24th after 90 days of waiting, the last day before the public disclosure deadline, I discovered that Zoom had only implemented the ‘quick fix’ solution originally suggested.

This is why we disclose vulnerabilities. Now, finally, Zoom is taking this seriously and fixing it for real.

EDITED TO ADD (8/8): Apple silently released a macOS update that removes the Zoom webserver if the app is not present.

Posted on July 16, 2019 at 12:54 PM • 19 Comments

Palantir's Surveillance Service for Law Enforcement

Motherboard got its hands on Palantir’s Gotham user’s manual, which is used by the police to get information on people:

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where they’ve been, and when they’ve been there. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.
  • The software can map out a person’s family members and business associates of a suspect, and theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge over any suspect they decide to surveil.

Read the whole article—it has a lot of details. This seems like a commercial version of the NSA’s XKEYSCORE.

Boing Boing post.

Meanwhile:

The FBI wants to gather more information from social media. Today, it issued a call for contracts for a new social media monitoring tool. According to a request-for-proposals (RFP), it’s looking for an “early alerting tool” that would help it monitor terrorist groups, domestic threats, criminal activity and the like.

The tool would provide the FBI with access to the full social media profiles of persons-of-interest. That could include information like user IDs, emails, IP addresses and telephone numbers. The tool would also allow the FBI to track people based on location, enable persistent keyword monitoring and provide access to personal social media history. According to the RFP, “The mission-critical exploitation of social media will enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests.”

Posted on July 15, 2019 at 6:12 AM • 62 Comments

Friday Squid Blogging: When the Octopus and Squid Lost Their Shells

Cephalopod ancestors once had shells. When did they lose them?

With the molecular clock technique, which allowed him to use DNA to map out the evolutionary history of the cephalopods, he found that today’s cuttlefish, squids and octopuses began to appear 160 to 100 million years ago, during the so-called Mesozoic Marine Revolution.

During the revolution, underwater life underwent a rapid change, including a burst in fish diversity. Some predators became better suited for crushing shellfish, while some smaller fish became faster and more agile.

“There’s a continual arms race between the prey and the predators,” said Mr. Tanner. “The shells are getting smaller, and the squids are getting faster.”

The evolutionary pressures favored being nimble over being armored, and cephalopods started to lose their shells, according to Mr. Tanner. The adaptation allowed them to outcompete their shelled relatives for fast food, and they were able to better evade predators. They were also able to keep up with competitors seeking the same prey.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on July 12, 2019 at 4:32 PM • 63 Comments

Presidential Candidate Andrew Yang Has Quantum Encryption Policy

At least one presidential candidate has a policy about quantum computing and encryption.

It has two basic planks. One: fund quantum-resistant encryption standards. (Note: NIST is already doing this.) Two: fund quantum computing. (Unlike many far more pressing computer security problems, the market seems to be doing this on its own quite nicely.)

Okay, so not the greatest policy—but at least one candidate has a policy. Do any of the other candidates have anything else in this area?

Yang has also talked about blockchain:

“I believe that blockchain needs to be a big part of our future,” Yang told a crowded room at the Consensus conference in New York, where he gave a keynote address Wednesday. “If I’m in the White House, oh boy are we going to have some fun in terms of the crypto currency community.”

Okay, so that’s not so great, either. But again, I don’t think anyone else talks about this.

Note: this is not an invitation to talk more general politics. Not even an invitation to explain how good or bad Andrew Yang’s chances are. Or anyone else’s. Please.

Posted on July 12, 2019 at 5:36 AM • 31 Comments

Resetting Your GE Smart Light Bulb

If you need to reset the software in your GE smart light bulb—firmware version 2.8 or later—just follow these easy instructions:

Start with your bulb off for at least 5 seconds.

  1. Turn on for 8 seconds
  2. Turn off for 2 seconds
  3. Turn on for 8 seconds
  4. Turn off for 2 seconds
  5. Turn on for 8 seconds
  6. Turn off for 2 seconds
  7. Turn on for 8 seconds
  8. Turn off for 2 seconds
  9. Turn on for 8 seconds
  10. Turn off for 2 seconds
  11. Turn on

Bulb will flash on and off 3 times if it has been successfully reset.
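The sequence is regular enough to script. A tongue-in-cheek sketch, assuming a hypothetical smart plug (or a very patient assistant) that can switch the bulb’s power programmatically:

  import time

  def set_power(on):
      # Placeholder: stand-in for whatever actually controls power to the bulb
      # (a smart plug API, a relay, or a person standing at the wall switch).
      print("power ON" if on else "power OFF")

  def reset_ge_bulb():
      set_power(False)
      time.sleep(5)        # start with the bulb off for at least 5 seconds
      for _ in range(5):   # five cycles of 8 seconds on, 2 seconds off
          set_power(True)
          time.sleep(8)
          set_power(False)
          time.sleep(2)
      set_power(True)      # step 11: turn on; bulb flashes 3 times if reset

  reset_ge_bulb()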

Welcome to the future!

Posted on July 11, 2019 at 6:24 AM • 73 Comments

Details of the Cloud Hopper Attacks

Reuters has a long article on the Chinese government APT attack called Cloud Hopper. It was much bigger than originally reported.

The hacking campaign, known as “Cloud Hopper,” was the subject of a U.S. indictment in December that accused two Chinese nationals of identity theft and fraud. Prosecutors described an elaborate operation that victimized multiple Western companies but stopped short of naming them. A Reuters report at the time identified two: Hewlett Packard Enterprise and IBM.

Yet the campaign ensnared at least six more major technology firms, touching five of the world’s 10 biggest tech service providers.

Also compromised by Cloud Hopper, Reuters has found: Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. HPE spun-off its services arm in a merger with Computer Sciences Corporation in 2017 to create DXC.

Waves of hacking victims emanate from those six plus HPE and IBM: their clients. Ericsson, which competes with Chinese firms in the strategically critical mobile telecoms business, is one. Others include travel reservation system Sabre, the American leader in managing plane bookings, and the largest shipbuilder for the U.S. Navy, Huntington Ingalls Industries, which builds America’s nuclear submarines at a Virginia shipyard.

Posted on July 10, 2019 at 5:51 AM • 17 Comments

Cardiac Biometric

MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

[…]

Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.
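To see why the false-positive question matters, here is a toy base-rate calculation (the numbers are made up for illustration, not taken from the article): even a high headline accuracy produces far more false alarms than true hits when the target is rare.

  # Toy base-rate calculation with illustrative numbers.
  true_positive_rate = 0.95    # chance the device flags the actual target
  false_positive_rate = 0.05   # chance it flags a random non-target
  people_scanned = 10_000      # size of the crowd being scanned
  targets_present = 1          # how many people we are actually looking for

  expected_true_hits = targets_present * true_positive_rate
  expected_false_alarms = (people_scanned - targets_present) * false_positive_rate

  print(f"expected true hits:    {expected_true_hits:.2f}")    # ~1
  print(f"expected false alarms: {expected_false_alarms:.0f}") # ~500

Roughly 500 false alarms for every real hit: a “95% accuracy” figure says very little without the corresponding false-positive rate and the base rate of targets in the scanned population.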

Posted on July 8, 2019 at 12:38 PM • 21 Comments

Applied Cryptography is Banned in Oregon Prisons

My Applied Cryptography is on a list of books banned in Oregon prisons. It’s not me—and it’s not cryptography—it’s that the prisons ban books that teach people to code. The subtitle is “Protocols, Algorithms, and Source Code in C”—and that’s the reason.

My more recent Cryptography Engineering is a much better book for prisoners, anyway.

Posted on July 5, 2019 at 1:52 PM • 17 Comments

Research on Human Honesty

New research from Science: “Civic honesty around the globe”:

Abstract: Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

I am surprised, too.

Posted on July 5, 2019 at 6:15 AM • 24 Comments

US Journalist Detained When Returning to US

Pretty horrible story of a US journalist who had his computer and phone searched at the border when returning to the US from Mexico.

After I gave him the password to my iPhone, Moncivias spent three hours reviewing hundreds of photos and videos and emails and calls and texts, including encrypted messages on WhatsApp, Signal, and Telegram. It was the digital equivalent of tossing someone’s house: opening cabinets, pulling out drawers, and overturning furniture in hopes of finding something—anything—illegal. He read my communications with friends, family, and loved ones. He went through my correspondence with colleagues, editors, and sources. He asked about the identities of people who have worked with me in war zones. He also went through my personal photos, which I resented. Consider everything on your phone right now. Nothing on mine was spared.

Pomeroy, meanwhile, searched my laptop. He browsed my emails and my internet history. He looked through financial spreadsheets and property records and business correspondence. He was able to see all the same photos and videos as Moncivias and then some, including photos I thought I had deleted.

The EFF has extensive information and advice about device searches at the US border, including a travel guide:

If you are a U.S. citizen, border agents cannot stop you from entering the country, even if you refuse to unlock your device, provide your device password, or disclose your social media information. However, agents may escalate the encounter if you refuse. For example, agents may seize your devices, ask you intrusive questions, search your bags more intensively, or increase by many hours the length of detention. If you are a lawful permanent resident, agents may raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry.

The most important piece of advice is to think about this all beforehand, and plan accordingly.

Posted on July 4, 2019 at 6:38 AM • 52 Comments

Digital License Plates

They’re a thing:

Developers say digital plates utilize “advanced telematics”—to collect tolls, pay for parking and send out Amber Alerts when a child is abducted. They also help recover stolen vehicles by changing the display to read “Stolen,” thereby alerting everyone within eyeshot.

This makes no sense to me. The numbers are static. That license plates are low-tech is a feature, not a bug.

Posted on July 3, 2019 at 6:28 AM • 40 Comments

Google Releases Basic Homomorphic Encryption Tool

Google has released an open-source cryptographic tool: Private Join and Compute. From a Wired article:

Private Join and Compute uses a 1970s methodology known as “commutative encryption” to allow data in the data sets to be encrypted with multiple keys, without it mattering which order the keys are used in. This is helpful for multiparty computation, where you need to apply and later peel away multiple layers of encryption without affecting the computations performed on the encrypted data. Crucially, Private Join and Compute also uses methods first developed in the ’90s that enable a system to combine two encrypted data sets, determine what they have in common, and then perform mathematical computations directly on this encrypted, unreadable data through a technique called homomorphic encryption.
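The “commutative encryption” idea is easy to sketch. One standard construction (illustrative here, not necessarily the exact one Google uses) hashes each item into a group and raises it to a secret exponent; because (H(x)^a)^b = (H(x)^b)^a, two parties can each apply their own key to the other’s ciphertexts, compare the doubly encrypted values, and learn only what the sets have in common. A toy Python version, with deliberately insecure parameters:

  import hashlib
  import secrets

  P = (1 << 127) - 1   # toy prime modulus; far too small and structured for real use

  def hash_to_group(item: str) -> int:
      h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
      return pow(h, 2, P)   # square so every element lands in the same subgroup

  def encrypt(element: int, key: int) -> int:
      return pow(element, key, P)   # deterministic and commutative

  key_a = secrets.randbelow(P - 2) + 1   # party A's secret exponent
  key_b = secrets.randbelow(P - 2) + 1   # party B's secret exponent

  set_a = {"alice@example.com", "bob@example.com", "carol@example.com"}
  set_b = {"bob@example.com", "dave@example.com"}

  # Each party encrypts its own items once and exchanges the ciphertexts...
  a_once = {encrypt(hash_to_group(x), key_a) for x in set_a}
  b_once = {encrypt(hash_to_group(x), key_b) for x in set_b}

  # ...then applies its key to the other party's ciphertexts. Common items
  # collide in the doubly encrypted domain, revealing only the overlap.
  a_twice = {encrypt(c, key_b) for c in a_once}
  b_twice = {encrypt(c, key_a) for c in b_once}

  print("intersection size:", len(a_twice & b_twice))   # -> 1

A real deployment would use an elliptic-curve group, constant-time arithmetic, and the additively homomorphic layer described above to compute sums over the intersection rather than just its size.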

Fully homomorphic encryption is possible in theory but not yet practical, and my guess is that it will never be feasible for most applications. But limited-application tricks like this have been around for decades, and sometimes they’re useful.

Boing Boing article.

Posted on July 2, 2019 at 6:24 AM • 16 Comments

Yubico Security Keys with a Crypto Flaw

Wow, is this an embarrassing bug:

Yubico is recalling a line of security keys used by the U.S. government due to a firmware flaw. The company issued a security advisory today that warned of an issue in YubiKey FIPS Series devices with firmware versions 4.4.2 and 4.4.4 that reduced the randomness of the cryptographic keys it generates. The security keys are used by thousands of federal employees on a daily basis, letting them securely log-on to their devices by issuing one-time passwords.

The problem in question occurs after the security key powers up. According to Yubico, a bug keeps “some predictable content” inside the device’s data buffer that could impact the randomness of the keys generated. Security keys with ECDSA signatures are in particular danger. A total of 80 of the 256 bits generated by the key remain static, meaning an attacker who gains access to several signatures could recreate the private key.
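The reason ECDSA is singled out: its security collapses when the per-signature random nonce is predictable or partially repeated. The degenerate case, a fully reused nonce, makes the algebra easy to see (the partially-static-bits case in the advisory is typically attacked with lattice techniques, but the intuition is the same). A toy sketch using only the modular signing equation s = k^-1 * (h + r*d) mod n, skipping the elliptic-curve part entirely:

  # Why a repeated ECDSA nonce leaks the signing key. Only the modular algebra
  # is shown; r is treated as a public value derived from k (same k -> same r).
  n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
  d = 0x1234567890ABCDEF   # "secret" signing key (toy value)
  k = 0xCAFEBABE12345678   # per-signature nonce -- wrongly reused twice
  r = pow(k, 3, n)         # stand-in for the x-coordinate of k*G

  def sign(h):
      return (h + r * d) * pow(k, -1, n) % n

  h1, h2 = 0x1111, 0x2222         # hashes of two different messages
  s1, s2 = sign(h1), sign(h2)     # two signatures sharing the same r

  # The attacker sees (r, s1, h1) and (r, s2, h2) and solves for k, then d:
  k_rec = (h1 - h2) * pow((s1 - s2) % n, -1, n) % n
  d_rec = (s1 * k_rec - h1) * pow(r, -1, n) % n
  assert (k_rec, d_rec) == (k, d)
  print("recovered private key:", hex(d_rec))

With 80 of the nonce’s 256 bits fixed, as in the Yubico advisory, the same kind of leak is exploited statistically across many signatures rather than with a single pair.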

Boing Boing post.

EDITED TO ADD (6/12): From Microsoft TechNet Security Guidance blog (in 2014): “Why We’re Not Recommending ‘FIPS Mode’ Anymore.”

Posted on July 1, 2019 at 5:55 AM • 48 Comments
