Entries Tagged "cybersecurity"

New iOS Security Feature Makes It Harder for Police to Unlock Seized Phones

Everybody is reporting on a new iPhone security feature in iOS 18: if the phone hasn’t been unlocked for a few days, it automatically reboots, returning it to its “Before First Unlock” state.

This is a really good security feature. But various police departments don’t like it, because it makes it harder for them to unlock suspects’ phones.
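The underlying mechanism is simple enough to sketch. Here is a minimal Python illustration of the idea, not Apple’s implementation: an inactivity watchdog forces a reboot once the phone has sat locked past a threshold, returning it to the Before First Unlock state, where the data-encryption keys are no longer held in memory. The three-day threshold and the reboot hook are assumptions for illustration.

```python
import time

# Illustrative only: the threshold and the reboot hook are assumptions,
# not Apple's actual implementation.
INACTIVITY_LIMIT = 3 * 24 * 60 * 60  # assume three days, in seconds

last_unlock = time.monotonic()

def on_unlock() -> None:
    """Record a successful user unlock."""
    global last_unlock
    last_unlock = time.monotonic()

def reboot_to_bfu() -> None:
    """Stand-in for a privileged reboot. After boot, the encryption keys
    stay evicted until the passcode is entered (the BFU state)."""
    print("rebooting: device returns to Before First Unlock")

def watchdog_tick() -> None:
    """Run periodically; reboot if the phone has sat locked too long."""
    if time.monotonic() - last_unlock > INACTIVITY_LIMIT:
        reboot_to_bfu()
```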

Posted on November 14, 2024 at 7:05 AM

Criminals Exploiting FBI Emergency Data Requests

I’ve been writing about the problem with lawful-access backdoors in encryption for decades now: that as soon as you create a mechanism for law enforcement to bypass encryption, the bad guys will use it too.

Turns out the same thing is true for non-technical backdoors:

The advisory said that the cybercriminals were successful in masquerading as law enforcement by using compromised police accounts to send emails to companies requesting user data. In some cases, the requests cited false threats, like claims of human trafficking and, in one case, that an individual would “suffer greatly or die” unless the company in question returns the requested information.

The FBI said the compromised access to law enforcement accounts allowed the hackers to generate legitimate-looking subpoenas that resulted in companies turning over usernames, emails, phone numbers, and other private information about their users.
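The defense here is procedural, not cryptographic: never trust contact details supplied in the request itself, and confirm every emergency request out of band. As a minimal sketch (the agency directory and workflow below are invented for illustration, not taken from the FBI advisory):

```python
# Hypothetical out-of-band verification for emergency data requests.
# The agency directory and the workflow are invented for illustration;
# they are not from the FBI advisory.
AGENCY_CONTACTS_ON_FILE = {
    # Verified switchboard numbers collected in advance, never from the email.
    "police.example.gov": "+1-555-0100",
}

def release_data(sender_domain: str, callback_confirmed: bool) -> bool:
    """Release user data only if the requesting agency is already on file
    AND a human confirmed the request via a callback to the stored number,
    not to any number supplied in the request itself."""
    known = sender_domain in AGENCY_CONTACTS_ON_FILE
    return known and callback_confirmed

# A request from a compromised but legitimate account still fails unless
# the callback to the independently stored number confirms it.
print(release_data("police.example.gov", callback_confirmed=False))  # False
```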

Posted on November 12, 2024 at 7:05 AM

Roger Grimes on Prioritizing Cybersecurity Advice

This is a good point:

Part of the problem is that we are constantly handed lists…list of required controls…list of things we are being asked to fix or improve…lists of new projects…lists of threats, and so on, that are not ranked for risks. For example, we are often given a cybersecurity guideline (e.g., PCI-DSS, HIPAA, SOX, NIST, etc.) with hundreds of recommendations. They are all great recommendations, which if followed, will reduce risk in your environment.

What they do not tell you is which of the recommended things will have the most impact on best reducing risk in your environment. They do not tell you that one, two or three of these things…among the hundreds that have been given to you, will reduce more risk than all the others.

[…]

The solution?

Here is one big one: Do not use or rely on un-risk-ranked lists. Require any list of controls, threats, defenses, solutions to be risk-ranked according to how much actual risk they will reduce in the current environment if implemented.

[…]

This specific CISA document has at least 21 main recommendations, many of which lead to two or more other more specific recommendations. Overall, it has several dozen recommendations, each of which individually will likely take weeks to months to fulfill in any environment if not already accomplished. Any person following this document is…rightly…going to be expected to evaluate and implement all those recommendations. And doing so will absolutely reduce risk.

The catch is: There are two recommendations that WILL DO MORE THAN ALL THE REST ADDED TOGETHER TO REDUCE CYBERSECURITY RISK most efficiently: patching and using multifactor authentication (MFA). Patching is listed third. MFA is listed eighth. And there is nothing to indicate their ability to significantly reduce cybersecurity risk as compared to the other recommendations. Two of these things are not like the other, but how is anyone reading the document supposed to know that patching and using MFA really matter more than all the rest?
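Grimes’s fix is straightforward to operationalize: present any list of controls sorted by how much risk each is expected to eliminate, so that patching and MFA surface at the top instead of third and eighth. A minimal sketch (the controls and the risk-reduction figures below are invented for illustration):

```python
# Invented figures for illustration: estimated fraction of incidents in a
# given environment that each control would have prevented.
controls = {
    "multifactor authentication": 0.40,
    "prompt patching": 0.35,
    "network segmentation": 0.08,
    "security awareness training": 0.07,
    "log monitoring": 0.05,
}

# Risk-rank the list instead of publishing it in arbitrary order.
for name, reduction in sorted(controls.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{reduction:4.0%}  {name}")
```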

Posted on October 31, 2024 at 11:43 AM

IronNet Has Shut Down

After retiring in 2014 from an uncharacteristically long tenure running the NSA (and US Cyber Command), Keith Alexander founded a cybersecurity company called IronNet. At the time, he claimed that it was based on IP he developed on his own time while still in the military. That always troubled me. Whatever ideas he had, they were developed on public time using public resources: he shouldn’t have been able to leave military service with them in his back pocket.

In any case, it was never clear what those ideas were. IronNet never seemed to have any special technology going for it. Near as I could tell, its success was entirely based on Alexander’s name.

Turns out there was nothing there. After some crazy VC investments and an IPO with a $3 billion “unicorn” valuation, the company has shut its doors. It went bankrupt a year ago—ceasing operations and firing everybody—and reemerged as a private company. It now seems to be gone for good, not having found anyone willing to buy it.

And—wow—the recriminations are just starting.

Last September the never-profitable company announced it was shutting down and firing its employees after running out of money, providing yet another example of a tech firm that faltered after failing to deliver on overhyped promises.

The firm’s crash has left behind a trail of bitter investors and former employees who remain angry at the company and believe it misled them about its financial health.

IronNet’s rise and fall also raises questions about the judgment of its well-credentialed leaders, a who’s who of the national security establishment. National security experts, former employees and analysts told The Associated Press that the firm collapsed, in part, because it engaged in questionable business practices, produced subpar products and services, and entered into associations that could have left the firm vulnerable to meddling by the Kremlin.

“I’m honestly ashamed that I was ever an executive at that company,” said Mark Berly, a former IronNet vice president. He said the company’s top leaders cultivated a culture of deceit “just like Theranos,” the once highly touted blood-testing firm that became a symbol of corporate fraud.

There has been one lawsuit. Presumably there will be more. I’m sure Alexander got plenty rich off his NSA career.

Posted on October 11, 2024 at 7:08 AM

Deebot Robot Vacuums Are Using Photos and Audio to Train Their AI

An Australian news agency is reporting that Deebot robot vacuum cleaners, made by the Chinese company Ecovacs, are surreptitiously taking photos and recording audio, and sending that data back to the vendor to train their AIs.

Ecovacs’s privacy policy—available elsewhere in the app—allows for blanket collection of user data for research purposes, including:

  • The 2D or 3D map of the user’s house generated by the device
  • Voice recordings from the device’s microphone
  • Photos or videos recorded by the device’s camera

It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs.

No word on whether the recorded audio is being used to train the vacuum in some way, or whether it is being used to train an LLM.

Slashdot thread.

Posted on October 10, 2024 at 7:00 AM

Python Developers Targeted with Malware During Fake Job Interviews

Interesting social engineering attack: luring potential job applicants with fake recruiting pitches, trying to convince them to download malware. From a news article:

These particular attacks from North Korean state-funded hacking team Lazarus Group are new, but the overall malware campaign against the Python development community has been running since at least August of 2023, when a number of popular open source Python tools were maliciously duplicated with added malware. Now, though, there are also attacks involving “coding tests” that only exist to get the end user to install hidden malware on their system (cleverly hidden with Base64 encoding) that allows remote execution once present. The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS.
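The “cleverly hidden with Base64 encoding” part is worth unpacking: the payload sits in the coding test’s source as an innocuous-looking encoded string that is decoded and handed to exec() at runtime. A minimal, hypothetical scanner for that decode-and-execute pattern (a crude heuristic, not a substitute for real malware analysis):

```python
import sys

def looks_suspicious(source: str) -> bool:
    """Crude heuristic: flag files that both decode Base64 data and call
    exec()/eval() -- the decode-and-execute pattern described above."""
    decodes = "b64decode" in source
    executes = "exec(" in source or "eval(" in source
    return decodes and executes

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="ignore") as handle:
            if looks_suspicious(handle.read()):
                print(f"possible hidden payload: {path}")
```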

Posted on September 17, 2024 at 7:02 AM

On the Cyber Safety Review Board

When an airplane crashes, impartial investigatory bodies leap into action, empowered by law to unearth what happened and why. But there is no such empowered and impartial body to investigate the recent CrowdStrike incident, a faulty update that ensnarled banks, airlines, and emergency services to the tune of billions of dollars. We need one.

To be sure, there is the White House’s Cyber Safety Review Board. On March 20, the CSRB released a report into last summer’s intrusion by a Chinese hacking group into Microsoft’s cloud environment, where it compromised the U.S. Department of Commerce, State Department, congressional offices, and several associated companies. But the board’s report—well-researched and containing some good and actionable recommendations—shows how it suffers from its lack of subpoena power and its political unwillingness to generalize from specific incidents to the broader industry.

Some background: The CSRB was established in 2021, by executive order, to provide an independent analysis and assessment of significant cyberattacks against the United States. The goal was to pierce the corporate confidentiality that often surrounds such attacks and to provide the entire security community with lessons and recommendations. The more we all know about what happened, the better we can all do next time. It’s the same thinking that led to the formation of the National Transportation Safety Board, but for cyberattacks and not plane crashes.

But the board immediately failed to live up to its mission. It was founded in response to the Russian cyberattack on the U.S. known as SolarWinds. Although it was specifically tasked with investigating that incident, it did not—for reasons that remain unclear.

So far, the board has published three reports. They offered only simplistic recommendations. In the first investigation, on Log4J, the CSRB exhorted companies to patch their systems faster and more often. In the second, on Lapsus$, the CSRB told organizations not to use SMS-based two-factor authentication (it’s vulnerable to SIM-swapping attacks). These two recommendations are basic cybersecurity hygiene, and not something we need an investigation to tell us.

The most recent report—on China’s penetration of Microsoft—is much better. This time, the CSRB gave us an extensive analysis of Microsoft’s security failures and placed blame for the attack’s success squarely on their shoulders. Its recommendations were also more specific and extensive, addressing Microsoft’s board and leaders specifically and the industry more generally. The report describes how Microsoft stopped rotating cryptographic keys in early 2021, reducing the security of the systems affected in the hack. The report suggests that if the company had set up an automated or manual key rotation system, or a way to alert teams about the age of their keys, it could have prevented the attack on its systems. The report also looked at how Microsoft’s competitors—think Google, Oracle, and Amazon Web Services—handle this issue, offering insights on how similar companies avoid mistakes.
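The control the report describes is mundane to build. A minimal sketch of automated key-age alerting (the key names, dates, and 30-day policy below are assumptions for illustration, not Microsoft’s configuration or a NIST standard):

```python
from datetime import datetime, timedelta, timezone

# Invented inventory for illustration: key name -> last rotation date.
KEY_INVENTORY = {
    "consumer-signing-key": datetime(2016, 4, 1, tzinfo=timezone.utc),
    "enterprise-signing-key": datetime(2024, 6, 15, tzinfo=timezone.utc),
}

MAX_KEY_AGE = timedelta(days=30)  # assumed policy, not a published standard

def stale_keys(now=None):
    """Return the names of keys whose age exceeds the rotation policy."""
    now = now or datetime.now(timezone.utc)
    return [name for name, rotated in KEY_INVENTORY.items()
            if now - rotated > MAX_KEY_AGE]

for name in stale_keys():
    print(f"ALERT: {name} is overdue for rotation")
```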

Yet there are still problems, with the report itself and with the environment in which it was produced.

First, the public report cites a large number of anonymous sources. While the report lays blame for the breach on Microsoft’s lax security culture, it is actually quite deferential to Microsoft; it makes special mention of the company’s cooperation. If the board needed to make trades to get information that would only be provided if people were given anonymity, this should be laid out more explicitly for the sake of transparency. More importantly, the board seems to have conflict-of-interest issues arising from the fact that the investigators are corporate executives and heads of government agencies who have full-time jobs.

Second: Unlike the NTSB, the CSRB lacks subpoena power. This is, at least in part, out of fear that the conflicted tech executives and government employees would use the power in an anticompetitive fashion. As a result, the board must rely on wheedling and cooperation for its fact-finding. While the DHS press release said, “Microsoft fully cooperated with the Board’s review,” the next company may not be nearly as cooperative, and we do not know what was not shared with the CSRB.

One of us, Tarah, recently testified on this topic before the U.S. Senate’s Homeland Security and Governmental Affairs Committee, and the senators asking questions seemed genuinely interested in how to fix the CSRB’s extreme slowness and lack of transparency in the two reports they’d issued so far.

It’s a hard task. The CSRB’s charter comes from Executive Order 14028, which is why—unlike the NTSB—it doesn’t have subpoena power. Congress needs to codify the CSRB in law and give it the subpoena power it so desperately needs.

Additionally, the CSRB’s reports don’t provide useful guidance going forward. For example, the Microsoft report provides no mapping of the company’s security problems to any government standards that could have prevented them. In this case, the problem is that there are no standards overseen by NIST—the organization in charge of cybersecurity standards—for key rotation. It would have been better for the report to have said that explicitly. The cybersecurity industry needs NIST standards to give us a compliance floor below which any organization is explicitly failing to provide due care. The report condemns Microsoft for not rotating an internal encryption key for seven years, when its internal standard was four years. However, for the last several years, automated key rotation on the order of once a month or even more frequently has become the expected industry guideline.

A guideline, however, is not a standard or regulation. It’s just a strongly worded suggestion. In this specific case, the report doesn’t offer guidance on how often keys should be rotated. In essence, the CSRB report said that Microsoft should feel very bad about the fact that they did not rotate their keys more often—but did not explain the logic, give an actual baseline of how often keys should be rotated, or provide any statistical or survey data to support why that timeline is appropriate. Automated certificate rotation such as that provided by public free service Let’s Encrypt has revolutionized encrypted-by-default communications, and expectations in the cybersecurity industry have risen to match. Unfortunately, the report only discusses Microsoft proprietary keys by brand name, instead of having a larger discussion of why public key infrastructure exists or what the best practices should be.
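On the public-key-infrastructure side, automated expiry monitoring is already routine. A minimal sketch using Python’s standard library to check how long a server’s TLS certificate has left (the host below is a placeholder; for comparison, Let’s Encrypt certificates expire after 90 days and are renewed automatically):

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> float:
    """Fetch a server's TLS certificate and return days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

# 'example.com' is a placeholder host for illustration.
print(f"{cert_days_remaining('example.com'):.0f} days until certificate expiry")
```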

More generally, because the CSRB reports so far have failed to generalize their findings with transparent and thorough research that provides real standards and expectations for the cybersecurity industry, we—policymakers, industry leaders, the U.S. public—find ourselves filling in the gaps. Individual experts are having to provide anecdotal and individualized interpretations of what their investigations might imply for companies simply trying to learn what their actual due care responsibilities are.

It’s as if no one is sure whether boiling your drinking water or nailing a horseshoe up over the door is statistically more likely to decrease the incidence of cholera. Sure, a lot of us think that boiling your water is probably best, but no one is saying that with real science. No one is saying how long you have to boil your water for, or whether any water sources are more likely to carry illness. And until there are real numbers and general standards, our educated opinions are on an equal footing with horseshoes and hope.

It should not be the job of cybersecurity experts, even us, to generate lessons from CSRB reports based on our own opinions. This is why we continue to ask the CSRB to provide generalizable standards which either are based on or call for NIST standardization. We want prescriptive and descriptive reports of incidents: see, for example, the UK National Audit Office report on the WannaCry ransomware, which remains a gold standard of government cybersecurity incident investigation reports.

We need and deserve more than one-off anecdotes about how one company didn’t do security well and should do it better in the future. Let’s start treating cybersecurity like the equivalent of public safety and get some real lessons learned.

This essay was written with Tarah Wheeler, and was published on Defense One.

Posted on August 6, 2024 at 7:01 AM

Providing Security Updates to Automobile Software

Auto manufacturers are just starting to realize the problems of supporting the software in older models:

Today’s phones are able to receive updates six to eight years after their purchase date. Samsung and Google provide Android OS updates and security updates for seven years. Apple halts servicing products seven years after they stop selling them.

That might not cut it in the auto world, where the average age of cars on US roads is only going up. A recent report found that cars and trucks just reached a new record average age of 12.6 years, up two months from 2023. That means the car software hitting the road today needs to work—and maybe even improve—beyond 2036. The average length of smartphone ownership is just 2.8 years.

I wrote about this in 2018, in Click Here to Kill Everybody, talking about patching as a security mechanism:

This won’t work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it’s resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run that year’s VisiCalc, and see what happens; we simply don’t know how to maintain 40-year-old [consumer] software.

Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous. Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

We really don’t have a good solution here. Agile updates are how we maintain security in a world where new vulnerabilities arise all the time, and where we don’t have the economic incentive to secure things properly from the start.

Posted on July 30, 2024 at 7:07 AM
