Blog: January 2019 Archives

Security Flaws in Children's Smart Watches

A year ago, the Norwegian Consumer Council published an excellent security analysis of children’s GPS-connected smart watches. The security was terrible. Not only could parents track their children; anyone else could track them, too.

A recent analysis checked if anything had improved after that torrent of bad press. Short answer: no.

Guess what: a train wreck. Anyone could access the entire database, including real-time child location, name, parents’ details, etc. Not just Gator watches either—the same back end covered multiple brands and tens of thousands of watches.

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!
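
This is a textbook broken-access-control bug. The sketch below shows the pattern and its fix; the endpoint and parameter names are hypothetical, since Gator’s actual API is not public.

```python
# Hypothetical sketch of the bug class: the server trusts a privilege
# level supplied by the client instead of looking it up server-side.
# All names are invented for illustration.

from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "demo-only"  # needed for Flask sessions in this sketch

USERS = {"alice": "parent", "support": "admin"}  # user -> role


def all_accounts():
    return {"accounts": list(USERS)}


# Broken: any client can send ?UserLevel=2 and become "super admin."
@app.route("/api/accounts")
def list_accounts_broken():
    user_level = int(request.args.get("UserLevel", 0))  # attacker-controlled
    if user_level >= 2:
        return all_accounts()
    abort(403)


# Fixed: the role comes from server-side state keyed by the session;
# client input never influences the authorization decision.
@app.route("/api/accounts_fixed")
def list_accounts_fixed():
    role = USERS.get(session.get("user_id"), "none")
    if role != "admin":
        abort(403)
    return all_accounts()
```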

This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.

In fairness, upon our reporting the vulnerability to them, Gator fixed it within 48 hours.

This is a lesson in the limits of naming and shaming: publishing vulnerabilities in an effort to get companies to improve their security. If a company is specifically named, it is likely to fix the specific vulnerability described. But that is unlikely to translate into improved security practices in the future. If an industry or product category is named generally, nothing is likely to happen. This is one of the reasons I am a proponent of regulation.

News article.

EDITED TO ADD (2/13): The EU has acted in a similar case.

Posted on January 31, 2019 at 10:30 AM • 16 Comments

Security Analysis of the LIFX Smart Light Bulb

The security is terrible:

In a very short amount of time, three vulnerabilities were discovered:

  • Wi-Fi credentials of the user have been recovered (stored in plaintext in the flash memory).
  • No security settings. The device is completely open (no secure boot, debug interface left enabled, no flash encryption).
  • The root certificate and RSA private key have been extracted.
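
On the first point: once an attacker has dumped the flash (the researchers read the chip directly), recovering plaintext credentials can be as simple as scanning the image for printable strings. A minimal sketch; the dump filename and the keyword heuristic are assumptions:

```python
# Sketch: scan a raw flash dump for printable ASCII runs, the way a
# plaintext SSID/password pair would surface. The default filename
# and the keyword list are illustrative assumptions.

import re
import sys


def printable_strings(blob: bytes, min_len: int = 6):
    # Runs of printable ASCII at least min_len bytes long.
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "flash_dump.bin"
    with open(path, "rb") as f:
        data = f.read()
    for run in printable_strings(data):
        text = run.decode("ascii")
        if any(k in text.lower() for k in ("ssid", "psk", "pass", "wifi")):
            print(text)
```

Flash encryption, one of the missing protections on the list, would defeat exactly this kind of trivial recovery.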

Boing Boing post.

Posted on January 30, 2019 at 10:00 AM • 30 Comments

iPhone FaceTime Vulnerability

This is kind of a crazy iPhone vulnerability: it’s possible to call someone on FaceTime and listen on their microphone—and see from their camera—before they accept the call.

This is definitely an embarrassment, and Apple was right to disable Group FaceTime until it’s fixed. But it’s hard to imagine how an adversary can operationalize this in any useful way.

New York governor Andrew M. Cuomo wrote: “The FaceTime bug is an egregious breach of privacy that puts New Yorkers at risk.” Kinda, I guess.

EDITED TO ADD (1/30): This bug/vulnerability was first discovered by a 14-year-old, whose mother tried to alert Apple with no success.

Posted on January 29, 2019 at 1:12 PM • 20 Comments

Japanese Government Will Hack Citizens' IoT Devices

The Japanese government is going to run penetration tests against all the IoT devices in their country, in an effort to (1) figure out what’s insecure, and (2) help consumers secure them:

The survey is scheduled to kick off next month, when authorities plan to test the password security of over 200 million IoT devices, beginning with routers and web cameras. Devices in people’s homes and on enterprise networks will be tested alike.

[…]

The Japanese government’s decision to log into users’ IoT devices has sparked outrage in Japan. Many have argued that this is an unnecessary step, as the same results could be achieved by just sending a security alert to all users; still, there’s no guarantee that the users found to be using default or easy-to-guess passwords would change them after being notified in private.

However, the government’s plan has its technical merits. Many of today’s IoT and router botnets are being built by hackers who take over devices with default or easy-to-guess passwords.

Hackers can also build botnets with the help of exploits and vulnerabilities in router firmware, but the easiest way to assemble a botnet is by collecting the ones that users have failed to secure with custom passwords.

Securing these devices is often a pain, as some expose Telnet or SSH ports online without the users’ knowledge, and very few users know how to change the passwords on those services. Further, other devices also come with secret backdoor accounts that in some cases can’t be removed without a firmware update.
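
The password testing the survey describes reduces to trying a short list of well-known factory defaults against each device’s login service. A rough sketch, assuming a Telnet-style login and an illustrative credential list (note that telnetlib was removed from the Python standard library in 3.13):

```python
# Sketch of a default-credential check of the kind the survey implies:
# try well-known factory logins over Telnet. The credential list and
# the prompt strings are illustrative assumptions.

import telnetlib  # stdlib through Python 3.12

DEFAULTS = [("admin", "admin"), ("root", "root"), ("admin", "1234")]


def has_default_password(host: str, port: int = 23, timeout: float = 5.0) -> bool:
    for user, password in DEFAULTS:
        try:
            with telnetlib.Telnet(host, port, timeout) as tn:
                tn.read_until(b"login: ", timeout)
                tn.write(user.encode("ascii") + b"\n")
                tn.read_until(b"Password: ", timeout)
                tn.write(password.encode("ascii") + b"\n")
                reply = tn.read_until(b"$ ", timeout)  # crude success probe
                if b"incorrect" not in reply.lower():
                    return True
        except (OSError, EOFError):
            continue
    return False
```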

I am interested in the results of this survey. Japan isn’t very different from other industrialized nations in this regard, so its findings should apply broadly. I am less optimistic about the country’s ability to secure all of this stuff—especially before the 2020 Summer Olympics.

Posted on January 28, 2019 at 1:40 PM • 18 Comments

Hacking the GCHQ Backdoor

Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:

In fact, we think when the ghost feature is active—silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors—it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.
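
One of those detection techniques is easy to see in miniature: a client that tracks which participant keys its user has actually verified can flag anything the server injects. A conceptual sketch; the data model is hypothetical, and no real messenger exposes exactly this interface:

```python
# Conceptual sketch of ghost detection: compare the participant key
# fingerprints the server reports for a conversation against the set
# this client has verified out of band. All names are hypothetical.

def detect_ghost(server_reported: set, locally_verified: set) -> set:
    # Anything the server added that the user never verified is a
    # candidate ghost participant.
    return server_reported - locally_verified


unverified = detect_ghost(
    {"fpr:alice", "fpr:bob", "fpr:unknown-device"},
    {"fpr:alice", "fpr:bob"},
)
if unverified:
    print(f"WARNING: unverified participants in this chat: {unverified}")
```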

Other critiques of the system were written by Susan Landau and Matthew Green.

EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.

EDITED TO ADD (3/1): Another good essay on the security risks of this back door.

Posted on January 25, 2019 at 6:08 AM • 44 Comments

Military Carrier Pigeons in the Era of Electronic Warfare

They have advantages:

Pigeons are certainly no substitute for drones, but they provide a low-visibility option to relay information. Considering the storage capacity of microSD memory cards, a pigeon’s organic characteristics provide front-line forces a relatively clandestine means to transport gigabytes of video, voice, or still imagery and documentation over considerable distance with zero electromagnetic emissions or obvious detectability to radar. These decidedly low-technology options prove difficult to detect and track. Pigeons cannot talk under interrogation, although they are not entirely immune to being held under suspicion of espionage. Within an urban environment, a pigeon has even greater potential to blend into the local avian population, further complicating detection.
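
The bandwidth claim is easy to sanity-check. The card capacity, distance, and flight speed below are my own illustrative assumptions, not figures from the article:

```python
# Back-of-envelope effective bandwidth of a data-carrying pigeon.
# Card capacity, distance, and cruise speed are illustrative
# assumptions, not figures from the cited article.

capacity_bits = 512e9 * 8            # one 512 GB microSD card
flight_time_s = 50 / 80 * 3600       # 50 km at a ~80 km/h cruise
print(f"{capacity_bits / flight_time_s / 1e9:.1f} Gbit/s")  # ~1.8 Gbit/s
```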

The author points out that both France and China still maintain a small number of pigeons in case electronic communications are disrupted.

And there’s an existing RFC.

EDITED TO ADD (2/13): The Russian military is still using pigeons.

Posted on January 24, 2019 at 6:38 AM • 37 Comments

The Evolution of Darknets

This is interesting:

To prevent the problems of customer binding, and losing business when darknet markets go down, merchants have begun to leave the specialized and centralized platforms and instead ventured to use widely accessible technology to build their own communications and operational back-ends.

Instead of using websites on the darknet, merchants are now operating invite-only channels on widely available mobile messaging systems like Telegram. This allows the merchant to control the reach of their communication better and be less vulnerable to system take-downs. To further stabilize the connection between merchant and customer, repeat customers are given unique messaging contacts that are independent of shared channels and thus even less likely to be found and taken down. Channels are often operated by automated bots that allow customers to inquire about offers and initiate the purchase, often even allowing a fully bot-driven experience without human intervention on the merchant’s side.

[…]

The other major change is the use of “dead drops” instead of the postal system, which has proven vulnerable to tracking and interception. Now goods are hidden in publicly accessible places like parks, and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. This means that delivery becomes asynchronous for the merchant: he can hide a lot of product in different locations for future, not yet known, purchases. For the client, the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means – he has the product in his hands in a matter of hours instead of days. Furthermore, this method does not require the customer to give any personally identifiable information to the merchant, who in turn no longer has to safeguard it. Less data means less risk for everyone.

The use of dead drops also significantly reduces the risk of the merchant being discovered through tracking within the postal system. He does not have to visit any easily surveilled post office or letter box; instead, the whole public space becomes his hiding territory.

Cryptocurrencies are still the main means of payment, but due to the tighter customer binding and the merchant’s vetting process, escrows are seldom employed. Usually only multi-party transactions between customer and merchant are established, and often not even that.

[…]

Apart from allowing much more secure and efficient business for both sides of the transaction, this has also led to changes in the organizational structure of merchants:

Instead of the flat hierarchies witnessed with darknet markets, merchants today employ hierarchical structures again. These consist of a procurement layer, a sales layer, and a distribution layer. The people constituting each layer usually do not know the identity of those in the higher layers, nor are they ever in personal contact with them. All interaction is digital—messaging systems and cryptocurrencies again; product moves only through dead drops.

The procurement layer purchases product wholesale and smuggles it into the region. It is then sold for cryptocurrency to select people that operate the sales layer. After that transaction the risks of both procurement and sales layer are isolated.

The sales layer divides the product into smaller units and gives the location of those dead drops to the distribution layer. The distribution layer then divides the product again and places typical sales quantities into new dead drops. The location of these dead drops is communicated to the sales layer which then sells these locations to the customers through messaging systems.

To prevent theft by the distribution layer, the sales layer randomly tests dead drops by tasking different members of the distribution layer with picking up product from a dead drop and hiding it somewhere else, after verification of the contents. Usually each unit of product is tagged with a piece of paper containing a unique secret word, which is used to prove to the sales layer that a dead drop was found. Members of the distribution layer have to post security – in the form of cryptocurrency – to the sales layer, and they lose part of that security with every dead drop that fails the testing, and with every dead drop they fail to test. So far, no reports of violence being used to ensure the performance of members of these structures have become known.
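
The deposit arrangement is essentially a collateral-slashing scheme. A toy model of the incentive mechanics, with all amounts and penalty rates invented:

```python
# Toy model of the collateral ("security") scheme described above: a
# distribution-layer member posts cryptocurrency and forfeits a share
# for each dead-drop test that fails or is skipped. The amounts and
# the penalty rate are invented for illustration.

class DistributionMember:
    def __init__(self, deposit: float, penalty_rate: float = 0.2):
        self.deposit = deposit
        self.penalty_rate = penalty_rate

    def record_test(self, secret_word_presented: bool) -> None:
        # A test fails when the member cannot produce the unique
        # secret word tagged to the product unit.
        if not secret_word_presented:
            self.deposit *= 1.0 - self.penalty_rate


member = DistributionMember(deposit=1.0)  # e.g., 1 BTC posted
member.record_test(secret_word_presented=False)
print(f"remaining collateral: {member.deposit:.2f}")  # 0.80
```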

This concept of using messaging, cryptocurrency, and dead drops even within the merchant structure allows the members of each layer to be completely isolated from each other and to know nothing about the higher layers at all. There is no trace to follow if a distribution-layer member is captured while servicing a dead drop. He will often not even be distinguishable from a regular customer. This makes these structures extremely secure against infiltration, takeover, and capture. They are inherently resilient.

[…]

It is because of the use of dead drops and hierarchical structures that we call this kind of organization a Dropgang.

Posted on January 23, 2019 at 6:20 AM • 35 Comments

Hacking Construction Cranes

Construction cranes are vulnerable to hacking:

In our research and vulnerability discoveries, we found that weaknesses in the controllers can be (easily) taken advantage of to move full-sized machines such as cranes used in construction sites and factories. In the different attack classes that we’ve outlined, we were able to perform the attacks quickly and even switch on the controlled machine despite an operator’s having issued an emergency stop (e-stop).

The core of the problem lies in the fact that, instead of depending on standard wireless technologies, these industrial remote controllers rely on proprietary RF protocols, which are decades old and are primarily focused on safety at the expense of security. It wasn’t until the arrival of Industry 4.0, as well as the continuing adoption of the industrial internet of things (IIoT), that industries began to acknowledge the pressing need for security.
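
The replay weakness comes from commands being accepted verbatim, with no freshness check. Below is a simplified contrast between a fixed-code receiver and a rolling-code one; the frame format and keying are invented, and real schemes such as KeeLoq differ in detail:

```python
# Simplified contrast: a fixed-code receiver accepts any byte-exact
# replay of a sniffed frame, while a rolling-code receiver rejects
# counters it has already seen. Frame format and keying are invented.

import hashlib
import hmac

KEY = b"shared-pairing-secret"  # provisioned when transmitter is paired


def fixed_code_receiver(frame: bytes) -> bool:
    # Accepts any byte-exact replay of a previously captured frame.
    return frame == b"\x01MOVE_LEFT"


class RollingCodeReceiver:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, counter: int, command: bytes, tag: bytes) -> bool:
        expected = hmac.new(
            KEY, counter.to_bytes(4, "big") + command, hashlib.sha256
        ).digest()
        fresh = counter > self.last_counter  # a replay reuses an old counter
        if fresh and hmac.compare_digest(tag, expected):
            self.last_counter = counter
            return True
        return False
```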

News article. Report.

Posted on January 22, 2019 at 5:59 AM • 19 Comments

Clever Smartphone Malware Concealment Technique

This is clever:

Malicious apps hosted in the Google Play market are trying a clever trick to avoid detection—they monitor the motion-sensor input of an infected device before installing a powerful banking trojan to make sure it doesn’t load on emulators researchers use to detect attacks.

The thinking behind the monitoring is that sensors in real end-user devices will record motion as people use them. By contrast, emulators used by security researchers—and possibly Google employees screening apps submitted to Play—are less likely to use sensors. Two Google Play apps recently caught dropping the Anubis banking malware on infected devices would activate the payload only when motion was detected first. Otherwise, the trojan would remain dormant.
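
The evasion check boils down to a variance test on motion-sensor samples. A conceptual sketch with simulated readings; the real malware queries Android’s sensor API, and the threshold here is an assumption:

```python
# Conceptual sketch of the evasion logic: deploy the payload only
# when the accelerometer shows real-world jitter. Readings are
# simulated and the threshold is an assumption.

import statistics


def looks_like_real_device(samples: list, threshold: float = 0.001) -> bool:
    # Emulators tend to report constant (often zero) values, so the
    # variance across samples stays near zero.
    return statistics.pvariance(samples) > threshold


emulator_readings = [0.0, 0.0, 0.0, 0.0]        # flat: stay dormant
handset_readings = [0.02, -0.15, 0.09, -0.04]   # jitter: activate

print(looks_like_real_device(emulator_readings))  # False
print(looks_like_real_device(handset_readings))   # True
```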

Posted on January 21, 2019 at 6:47 AM • 20 Comments

Evaluating the GCHQ Exceptional Access Proposal

The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the UK, Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate are pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.

A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency—basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are talking to. But it’s no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.

In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications but also changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.

The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.

That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).

The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and similar laws in other countries. By law, telephone companies must engineer phone switches on which the government can eavesdrop, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have the physical limitations that made the old system more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.

This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.

There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that added additional parties to a chat from somewhere in the system—and not by the people at the endpoints. It would have to suppress any messages alerting users to another party being added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature—which amounts to the same thing.
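
Concretely, the client change amounts to something like the following sketch: a membership handler with a branch that hides the truth from the user. The handler and flag names are hypothetical; no shipping messenger is written this way:

```python
# Sketch of the client-side change the ghost proposal forces: a
# membership-change handler that can be told not to notify the user.
# Handler and flag names are hypothetical.

class Chat:
    def __init__(self) -> None:
        self.participants = set()

    def notify_user(self, message: str) -> None:
        print(message)


def on_member_added(chat: Chat, new_member: str, server_flags: dict) -> None:
    chat.participants.add(new_member)        # an extra key joins the session
    if server_flags.get("suppress_notice"):  # the "ghost" flag
        return                               # the user is never told
    chat.notify_user(f"{new_member} joined the conversation")
```

That suppress_notice branch is exactly the kind of artifact a reverse engineer would hunt for in a client binary.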

And once that’s in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don’t know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.

Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?

Of course not.

Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.

Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.

In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.

The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?

This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?

I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be by weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.

This essay originally appeared on Lawfare.com.

EDITED TO ADD (1/30): More commentary.

Posted on January 18, 2019 at 5:54 AM • 46 Comments

Prices for Zero-Day Exploits Are Rising

Companies are willing to pay ever-increasing amounts for good zero-day exploits against hard-to-break computers and applications:

On Monday, market-leading exploit broker Zerodium said it would pay up to $2 million for zero-click jailbreaks of Apple’s iOS, $1.5 million for one-click iOS jailbreaks, and $1 million for exploits that take over secure messaging apps WhatsApp and iMessage. Previously, Zerodium was offering $1.5 million, $1 million, and $500,000 for the same types of exploits respectively. The steeper prices indicate not only that the demand for these exploits continues to grow, but also that reliably compromising these targets is becoming increasingly hard.

Note that these prices are for offensive uses of the exploit. Zerodium—and others—sell exploits to companies who make surveillance tools and cyber-weapons for governments. Many companies have bug bounty programs for those who want the exploit used for defensive purposes—i.e., fixed—but they pay orders of magnitude less. This is a problem.

Back in 2014, Dan Geer said that the US should corner the market on software vulnerabilities:

“There is no doubt that the U.S. Government could openly corner the world vulnerability market,” said Geer, “that is, we buy them all and we make them all public. Simply announce ‘Show us a competing bid, and we’ll give you [10 times more].’ Sure, there are some who will say ‘I hate Americans; I sell only to Ukrainians,’ but because vulnerability finding is increasingly automation-assisted, the seller who won’t sell to the Americans knows that his vulns can be rediscovered in due course by someone who will sell to the Americans who will tell everybody, thus his need to sell his product before it outdates is irresistible.”

I don’t know about the 10x, but in theory he’s right. There’s no other way to solve this.

Posted on January 17, 2019 at 6:33 AM • 29 Comments

El Chapo's Encryption Defeated by Turning His IT Consultant

Impressive police work:

In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.

A Dutch article says that it’s a BlackBerry system.

El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.

Slashdot thread. Hacker News thread. Boing Boing post.

EDITED TO ADD (2/12): Good information here.

Posted on January 16, 2019 at 6:53 AM • 21 Comments

Alex Stamos on Content Moderation and Security

Former Facebook CISO Alex Stamos argues that increasing political pressure on social media platforms to moderate content will give them a pretext to turn all end-to-end crypto off—which would be more profitable for them and bad for society.

If we ask tech companies to fix ancient societal ills that are now reflected online with moderation, then we will end up with huge, democratically-unaccountable organizations controlling our lives in ways we never intended. And those ills will still exist below the surface.

Posted on January 15, 2019 at 5:55 AM • 26 Comments

Why Internet Security Is So Bad

I recently read two different essays that make the point that while Internet security is terrible, it really doesn’t affect people enough to make it an issue.

This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.

Posted on January 14, 2019 at 11:13 AM • 33 Comments

Using a Fake Hand to Defeat Hand-Vein Biometrics

Nice work:

One attraction of a vein-based system over, say, a more traditional fingerprint system is that it is typically harder for an attacker to learn how a user’s veins are positioned under their skin than to lift a fingerprint from a held object or a high-quality photograph, for example.

But with that said, Krissler and Albrecht first took photos of their vein patterns. They used a converted SLR camera with the infrared filter removed; this allowed them to see the pattern of the veins under the skin.

“It’s enough to take photos from a distance of five meters, and it might work to go to a press conference and take photos of them,” Krissler explained. In all, the pair took over 2,500 pictures over 30 days to perfect the process and find an image that worked.

They then used that image to make a wax model of their hands which included the vein detail.

Slashdot thread.

Posted on January 11, 2019 at 6:38 AM • 37 Comments

Security Vulnerabilities in Cell Phone Systems

Good essay on the inherent vulnerabilities in the cell phone standards and the market barriers to fixing them.

So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to “be forthright with federal courts about the disruptive nature of cell-site simulators.” No response has ever been published.

The lack of action could be because it is a big task—there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.

As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.

Posted on January 10, 2019 at 5:52 AM • 26 Comments

Machine Learning to Detect Software Vulnerabilities

No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don’t know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders—and the resultant arms race between the two—I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: The market doesn’t want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched; software vendors are fixing bugs that are also vulnerabilities that have been discovered and are being used.

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn’t reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic—and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
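
As a toy illustration of the pipeline shape only: research systems use far richer program representations, such as ASTs and data-flow graphs, and vastly larger labeled corpora. The snippets and labels below are invented:

```python
# Toy sketch of ML-based vulnerability detection: represent code
# snippets as token counts and train a classifier on labeled
# examples. The snippets and labels are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

snippets = [
    "strcpy(buf, user_input);",                    # unbounded copy
    "strncpy(buf, user_input, sizeof(buf) - 1);",  # bounded copy
    "gets(line);",                                 # unbounded read
    "fgets(line, sizeof(line), stdin);",           # bounded read
]
labels = [1, 0, 1, 0]  # 1 = vulnerable, 0 = safe

vectorizer = CountVectorizer(token_pattern=r"\w+")
features = vectorizer.fit_transform(snippets)
model = LogisticRegression().fit(features, labels)

test = vectorizer.transform(["strcpy(dest, src);"])
print(model.predict(test))  # expect [1]: flags the unbounded copy
```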

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we’ll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on SecurityIntelligence.com.

Posted on January 8, 2019 at 6:13 AM • 56 Comments

New Attack Against Electrum Bitcoin Wallets

This is clever:

How the attack works:

  • Attacker added tens of malicious servers to the Electrum wallet network.
  • Users of legitimate Electrum wallets initiate a Bitcoin transaction.
  • If the transaction reaches one of the malicious servers, these servers reply with an error message that urges users to download a wallet app update from a malicious website (GitHub repo).
  • User clicks the link and downloads the malicious update.
  • When the user opens the malicious Electrum wallet, the app asks the user for a two-factor authentication (2FA) code. This is a red flag, as these 2FA codes are only requested before sending funds, and not at wallet startup.
  • The malicious Electrum wallet uses the 2FA code to steal the user’s funds and transfer them to the attacker’s Bitcoin addresses.

The problem here is that Electrum servers are allowed to trigger popups with custom text inside users’ wallets.
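
The generic mitigation is to treat server-supplied error text as untrusted data: cap its length, escape it, and never render it as rich text or live links. A minimal sketch; the function name is illustrative, not Electrum’s actual code:

```python
# Sketch of the mitigation: treat server-supplied error text as data,
# not markup. Cap its length, escape HTML, and neutralize links.
# The function name is illustrative, not Electrum's actual code.

import html
import re


def render_server_error(raw: str, max_len: int = 200) -> str:
    text = raw[:max_len]       # cap attacker-chosen length
    text = html.escape(text)   # no live HTML or hyperlinks
    return re.sub(r"https?://\S+", "[link removed]", text)


msg = 'Update required: <a href="https://evil.example/electrum">download</a>'
print(render_server_error(msg))
```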

Posted on January 7, 2019 at 6:13 AM • 26 Comments
