Entries Tagged "vulnerabilities"


New Spectre-Like Attacks

There’s new research that demonstrates security vulnerabilities in all of the AMD and Intel chips with micro-op caches, including the ones that were specifically engineered to be resistant to the Spectre/Meltdown attacks of three years ago.

Details:

The new line of attacks exploits the micro-op cache: an on-chip structure that speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process, as the team explains in a writeup from the University of Virginia. Even though the processor quickly realizes its mistake and does a U-turn to go down the right path, attackers can get at the private data while the processor is still heading in the wrong direction.
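For readers who want to see the shape of the underlying trick, here is a minimal sketch in C of the classic Spectre v1 bounds-check-bypass gadget. It is not the new micro-op-cache attack (those details are in the paper), but it shows how a speculatively executed load can leave a secret-dependent footprint in the cache:

```c
/* Minimal sketch of a classic Spectre v1 gadget, for illustration
 * only. This is not the new micro-op-cache attack; it shows the
 * general pattern: a load executed speculatively, past a bounds
 * check, leaves a secret-dependent trace in the cache. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];   /* probe array: one page per byte value */
size_t  array1_size = 16;

void victim(size_t x) {
    if (x < array1_size) {    /* bounds check */
        /* While the check is still resolving, the CPU may run this
         * load speculatively even for out-of-bounds x. The cache
         * line it touches depends on the secret byte at array1[x],
         * which an attacker can later recover by timing accesses
         * to array2. */
        volatile uint8_t tmp = array2[array1[x] * 4096];
        (void)tmp;
    }
}
```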

It seems really difficult to exploit these vulnerabilities. We’ll need some more analysis before we understand what we have to patch and how.

More news.

Posted on May 5, 2021 at 10:35 AM

Tesla Remotely Hacked from a Drone

This is an impressive hack:

Security researchers Ralf-Philipp Weinmann of Kunnamon, Inc. and Benedikt Schmotzle of Comsecuris GmbH have found remote zero-click security vulnerabilities in an open-source software component (ConnMan) used in Tesla automobiles that allowed them to compromise parked cars and control their infotainment systems over WiFi. It would be possible for an attacker to unlock the doors and trunk, change seat positions, both steering and acceleration modes — in short, pretty much what a driver pressing various buttons on the console can do. This attack does not yield drive control of the car though.

That last sentence is important.

News article.

Posted on May 4, 2021 at 9:41 AM

Serious MacOS Vulnerability Patched

Apple just patched a MacOS vulnerability that bypassed malware checks.

The flaw is akin to a front entrance that’s barred and bolted effectively, but with a cat door at the bottom that you can easily toss a bomb through. Apple mistakenly assumed that applications will always have certain specific attributes. Owens discovered that if he made an application that was really just a script—code that tells another program what to do rather than doing it itself—and didn’t include a standard application metadata file called “info.plist,” he could silently run the app on any Mac. The operating system wouldn’t even give its most basic prompt: “This is an application downloaded from the Internet. Are you sure you want to open it?”

More.

Posted on April 30, 2021 at 7:38 AM

Security Vulnerabilities in Cellebrite

Moxie Marlinspike has an intriguing blog post about Cellebrite, a tool used by police and others to break into smartphones. Moxie got his hands on one of the devices, which seems to be a pair of Windows software packages and a whole lot of connecting cables.

According to Moxie, the software is riddled with vulnerabilities. (The one example he gives is that it uses FFmpeg DLLs from 2012, which have not been patched with the 100+ security updates released since then.)

…we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

This means that Cellebrite has one — or many — remote code execution bugs, and that a specially designed file on the target phone can infect Cellebrite.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

That malicious file could, for example, insert fabricated evidence or subtly alter the evidence it copies from a phone. It could even write that fabricated/altered evidence back to the phone so that from then on, even an uncorrupted version of Cellebrite will find the altered evidence on that phone.

Finally, Moxie suggests that future versions of Signal will include such a file, sometimes:

Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding.
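Signal hasn’t published its sharding logic, so the following is only a hypothetical sketch in C of how probabilistic rollout by phone-number sharding typically works. The FNV-1a hash, the 100 buckets, and the rollout percentage are all illustrative assumptions:

```c
/* Hypothetical sketch of probabilistic rollout by phone-number
 * sharding. Signal's actual scheme is not public; the FNV-1a hash,
 * the 100 buckets, and the threshold are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

static uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;     /* FNV-1a offset basis */
    for (; *s; s++) {
        h ^= (uint8_t)*s;
        h *= 16777619u;           /* FNV-1a prime */
    }
    return h;
}

/* Stable per-number decision: the same phone number always lands
 * in the same shard (0-99), so a given install either gets the
 * file or it doesn't. */
bool include_file(const char *phone_number, uint32_t rollout_pct) {
    return fnv1a(phone_number) % 100 < rollout_pct;
}
```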

The idea, of course, is that a defendant facing Cellebrite evidence in court can claim that the evidence is tainted.

I have no idea how effective this would be in court. Or whether it runs afoul of the Computer Fraud and Abuse Act in the US. (Is it okay to booby-trap your phone?) A colleague from the UK says that this would not be legal under the Computer Misuse Act, although it’s hard to blame the phone owner if he doesn’t even know it’s happening.

Posted on April 27, 2021 at 6:57 AM

When AIs Start Hacking

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage.

Okay, maybe this is a bit of hyperbole, but it requires no far-future science fiction technology. I’m not postulating an AI “singularity,” where the AI-learning feedback loop becomes so fast that it outstrips human understanding. I’m not assuming intelligent androids. I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening. As AI gets more sophisticated, though, we often won’t even know it’s happening.

AIs don’t solve problems like humans do. They look at more types of solutions than we do. They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem. Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code.

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind.

While researchers are working on AI that can explain itself, there seems to be a trade-off between capability and explainability. Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. Forcing an AI to produce explanations might be an additional constraint that could affect the quality of its decisions. For now, AI is becoming more and more opaque and less explainable.

Separately, AIs can engage in something called reward hacking. Because AIs don’t solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated — and some will subvert the intent of the system. That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted. This reward hacking involves achieving a goal but in a way the AI’s designers neither wanted nor intended.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended. Or another simulation, where an AI figured out that instead of running, it could make itself tall enough to cross a distant finish line by falling over it. Or the robot vacuum cleaner that, instead of learning not to bump into things, learned to drive backwards, where there were no sensors telling it it was bumping into things. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks.
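A toy program makes the pattern concrete. In this sketch (all numbers invented), the designer wants the agent to run, but the objective only scores distance per unit of energy, so a naive optimizer picks the degenerate falling strategy:

```c
/* Toy illustration of reward hacking, echoing the fall-over-the-
 * finish-line example above. Nothing in the objective encodes
 * "locomotion," so the optimizer happily selects the hack. */
#include <stdio.h>

struct strategy {
    const char *name;
    double distance;   /* meters gained (invented numbers) */
    double energy;     /* energy spent (invented numbers) */
};

int main(void) {
    struct strategy candidates[] = {
        { "run to the finish line",  10.0, 50.0 },
        { "walk to the finish line", 10.0, 30.0 },
        { "grow tall and fall over", 10.0,  5.0 },  /* the hack */
    };
    int best = 0;
    for (int i = 1; i < 3; i++) {
        /* Objective: distance per unit of energy. */
        double s_best = candidates[best].distance / candidates[best].energy;
        double s_i    = candidates[i].distance / candidates[i].energy;
        if (s_i > s_best) best = i;
    }
    printf("optimizer chose: %s\n", candidates[best].name);
    return 0;
}
```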

We learned about this hacking problem as children with the story of King Midas. When the god Dionysus grants him a wish, Midas asks that everything he touches turns to gold. He ends up starving and miserable when his food, drink, and daughter all turn to gold. It’s a specification problem: Midas programmed the wrong goal into the system.

Genies are very precise about the wording of wishes, and can be maliciously pedantic. We know this, but there’s still no way to outsmart the genie. Whatever you wish for, he will always be able to grant it in a way you wish he hadn’t. He will hack your wish. Goals and desires are always underspecified in human language and thought. We never describe all the options, or include all the applicable caveats, exceptions, and provisos. Any goal we specify will necessarily be incomplete.

While humans most often implicitly understand context and usually act in good faith, we can’t completely specify goals to an AI. And AIs won’t be able to completely understand context.

In 2015, Volkswagen was caught cheating on emissions control tests. This wasn’t AI — human engineers programmed a regular computer to cheat — but it illustrates the problem. They programmed their engine to detect emissions control testing, and to behave differently. Their cheat remained undetected for years.

If I asked you to design a car’s engine control software to maximize performance while still passing emissions control tests, you wouldn’t design the software to cheat without understanding that you were cheating. This simply isn’t true for an AI. It will think “out of the box” simply because it won’t have a conception of the box. It won’t understand that the Volkswagen solution harms others, undermines the intent of the emissions control tests, and is breaking the law. Unless the programmers specify the goal of not behaving differently when being tested, an AI might come up with the same hack. The programmers will be satisfied, the accountants ecstatic. And because of the explainability problem, no one will realize what the AI did. And yes, knowing the Volkswagen story, we can explicitly set the goal to avoid that particular hack. But the lesson of the genie is that there will always be unanticipated hacks.

How realistic is AI hacking in the real world? The feasibility of an AI inventing a new hack depends a lot on the specific system being modeled. For an AI to even start on optimizing a problem, let alone hacking a completely novel solution, all of the rules of the environment must be formalized in a way the computer can understand. Goals — known in AI as objective functions — need to be established. And the AI needs some sort of feedback on how well it’s doing so that it can improve.

Sometimes this is simple. In chess, the rules, objective, and feedback — did you win or lose? — are all precisely specified. And there’s no outside context to muddy the waters. This is why most of the current examples of goal and reward hacking come from simulated environments. These are artificial and constrained, with all of the rules specified to the AI. The inherent ambiguity in most other systems ends up being a near-term security defense against AI hacking.

Where this gets interesting is with systems that are well specified and almost entirely digital. Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine equipping an AI with all of the world’s laws and regulations, plus all the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.” My guess is that this isn’t very far off, and that the result will be all sorts of novel hacks.

But advances in AI are discontinuous and counterintuitive. Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs.

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as people. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we’re not ready for. AI text generation bots, for example, will be replicated in the millions across social media. They will be able to engage on issues around the clock, sending billions of messages, and overwhelm any actual online discussions among humans. What we will see as boisterous political debate will be bots arguing with other bots. They’ll artificially influence what we think is normal, what we think others think.

The increasing scope of AI systems also makes hacks more dangerous. AIs are already making important decisions about our lives, decisions we used to believe were the exclusive purview of humans: Who gets parole, receives bank loans, gets into college, or gets a job. As AI systems get more capable, society will cede more — and more important — decisions to them. Hacks of these systems will become more damaging.

What if you fed an AI the entire US tax code? Or, in the case of a multinational corporation, the entire world’s tax codes? Will it figure out, without being told, that it’s smart to incorporate in Delaware and register your ship in Panama? How many loopholes will it find that we don’t already know about? Dozens? Thousands? We have no idea.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed, scale, and scope. The IRS cannot deal with dozens — let alone thousands — of newly discovered tax loopholes. An AI that discovers unanticipated but legal hacks of financial systems could upend our markets faster than we could recover.

As I discuss in my report, while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems. So in the long run, AI hackers will favor the defense because our software, tax code, financial systems, and so on can be patched before they’re deployed. Of course, the transition period is dangerous because of all the legacy rules that will be hacked. There, our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks. It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons. This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

What I’ve been describing is the interplay between human and computer systems, and the risks inherent when computers start playing the part of humans. This, too, is a more general problem than AI hackers. It’s also one that technologists and futurists are writing about. And while it’s easy to let technology lead us into the future, we’re much better off if we as a society decide what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs come online and start hacking our world.

This essay previously appeared on Wired.com.

Posted on April 26, 2021 at 6:06 AM

Accellion Supply Chain Hack

A vulnerability in the Accellion file-transfer program is being used by criminal groups to hack networks worldwide.

There’s much in the article about when Accellion knew about the vulnerability, when it alerted its customers, and when it patched its software.

The governor of New Zealand’s central bank, Adrian Orr, says Accellion failed to warn it after first learning in mid-December that the nearly 20-year-old FTA application — using antiquated technology and set for retirement — had been breached.

Despite having a patch available on Dec. 20, Accellion did not notify the bank in time to prevent its appliance from being breached five days later, the bank said.

CISA alert.

EDITED TO ADD (4/14): It appears spy plane details were leaked after the vendor didn’t pay the ransom.

Posted on March 23, 2021 at 6:32 AM

Easy SMS Hijacking

Vice is reporting on a cell phone vulnerability caused by commercial SMS services. One of the things these services permit is text message forwarding. It turns out that with a little bit of anonymous money — in this case, $16 off an anonymous prepaid credit card — and a few lies, you can forward the text messages from any phone to any other phone.

For businesses, sending text messages to hundreds, thousands, or perhaps millions of customers can be a laborious task. Sakari streamlines that process by letting business customers import their own number. A wide ecosystem of these companies exists, each advertising its own ability to run text messaging for other businesses. Some firms say they only allow customers to reroute messages for business landlines or VoIP phones, while others allow mobile numbers too.

Sakari offers a free trial to anyone wishing to see what the company’s dashboard looks like. The cheapest plan, which allows customers to add a phone number they want to send and receive texts as, is where the $16 goes. Lucky225 provided Motherboard with screenshots of Sakari’s interface, which show a red “+” symbol where users can add a number.

While adding a number, Sakari provides the Letter of Authorization for the user to sign. Sakari’s LOA says that the user should not conduct any unlawful, harassing, or inappropriate behaviour with the text messaging service and phone number.

But as Lucky225 showed, a user can just sign up with someone else’s number and receive their text messages instead.

This is much easier than SIM swapping, and causes the same security vulnerabilities. Too many networks use SMS as an authentication mechanism.

Once the hacker is able to reroute a target’s text messages, it can then be trivial to hack into other accounts associated with that phone number. In this case, the hacker sent login requests to Bumble, WhatsApp, and Postmates, and easily accessed the accounts.

Don’t focus too much on the particular company in this article.

But Sakari is only one company. And there are plenty of others available in this overlooked industry.

Tuketu said that after one provider cut off their access, “it took us two minutes to find another.”

Slashdot thread. And Cory Doctorow’s comments.

Posted on March 19, 2021 at 6:21 AM

Exploiting Spectre Over the Internet

Google has demonstrated exploiting the Spectre CPU attack remotely over the web:

Today, we’re sharing proof-of-concept (PoC) code that confirms the practicality of Spectre exploits against JavaScript engines. We use Google Chrome to demonstrate our attack, but these issues are not specific to Chrome, and we expect that other modern browsers are similarly vulnerable to this exploitation vector. We have developed an interactive demonstration of the attack available at https://leaky.page/ ; the code and a more detailed writeup are published on Github here.

The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.
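The receive side of any Spectre exploit is a cache-timing measurement: figure out which cache lines the speculative load touched. Here’s a minimal flush-plus-reload probe in C, assuming an x86-64 machine with GCC or Clang; the browser PoC does the equivalent job with JavaScript timers:

```c
/* Minimal flush+reload timing probe: the measurement half of a
 * cache side channel. Assumes x86-64 with GCC/Clang; a browser
 * PoC does the same measurement with JavaScript timers. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[4096];

static uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);   /* read timestamp counter */
    uint8_t v = *p;                    /* the access we time */
    (void)v;
    return __rdtscp(&aux) - start;
}

int main(void) {
    _mm_clflush(probe);                  /* evict from cache */
    uint64_t miss = time_access(probe);  /* slow: memory access */
    uint64_t hit  = time_access(probe);  /* fast: now cached */
    /* A large miss/hit gap is what lets an attacker tell which
     * cache lines a speculative load touched. */
    printf("miss: %llu cycles, hit: %llu cycles\n",
           (unsigned long long)miss, (unsigned long long)hit);
    return 0;
}
```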

Posted on March 18, 2021 at 6:17 AM

On Not Fixing Old Vulnerabilities

How is this even possible?

…26% of companies Positive Technologies tested were vulnerable to WannaCry, which was a threat years ago, and some even vulnerable to Heartbleed. “The most frequent vulnerabilities detected during automated assessment date back to 2013–2017, which indicates a lack of recent software updates,” the report stated.

26%!? One in four networks?

Even if we assume that the report is self-serving to the company that wrote it, and that the statistic is not generally representative, this is still a disaster. The number should be 0%.

WannaCry was a 2017 cyberattack, based on an NSA-discovered and Russia-stolen-and-published Windows vulnerability. It primarily affects older, no-longer-supported products like Windows 7. If we can’t keep our systems secure from these vulnerabilities, how are we ever going to secure them from new threats?

Posted on March 9, 2021 at 6:16 AM

