Entries Tagged "locks"

Class Breaks

There’s a concept from computer security known as a class break. It’s a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that system’s software. Or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It’s a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn’t have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone — or to lots of people — without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn’t guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely — they could open every door lock remotely at the same time. That’s a class break.

It’s how computer systems fail, but it’s not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don’t think of hackers remotely taking control of cars over the Internet. Or remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don’t think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It’s the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or to no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn’t happen at all.

But there’s a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter are human-directed. Floods don’t change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they’ll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle — where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security — will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It’s not an inherently less secure world, but it’s a differently secure world. It’s a world where driverless cars are much safer than people-driven cars, until suddenly they’re not. We need to build systems that assume the possibility of class breaks — and maintain security despite them.

This essay originally appeared on Edge.org as part of their annual question. This year it was: “What scientific term or concept ought to be more widely known?”

Posted on January 3, 2017 at 6:50 AM

Exploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on their maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What’s interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.

And that sort of superficial response is Google’s best option, really. It’s not the one losing money to these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Posted on February 8, 2016 at 6:52 AM

TSA Master Keys

Someone recently noticed a Washington Post story on the TSA that originally contained a detailed photograph of all the TSA master keys. It’s now blurred out of the Washington Post story, but the image is still floating around the Internet. The whole thing neatly illustrates one of the main problems with backdoors, whether in cryptographic systems or physical systems: they’re fragile.

Nicholas Weaver wrote:

TSA “Travel Sentry” luggage locks contain a disclosed backdoor which is similar in spirit to what Director Comey desires for encrypted phones. In theory, only the Transportation Security Administration or other screeners should be able to open a TSA lock using one of their master keys. All others, notably baggage handlers and hotel staff, should be unable to surreptitiously open these locks.

Unfortunately for everyone, a TSA agent and the Washington Post revealed the secret. All it takes to duplicate a physical key is a photograph, since it is the pattern of the teeth, not the key itself, that tells you how to open the lock. So by simply including a pretty picture of the complete spread of TSA keys in the Washington Post’s paean to the TSA, the Washington Post enabled anyone to make their own TSA keys.

So the TSA backdoor has failed: we must assume any adversary can open any TSA “lock”. If you want to at least know your luggage has been tampered with, forget the TSA lock and use a zip-tie or tamper-evident seal instead, or attach a real lock and force the TSA to use their bolt cutters.

It’s the third photo on this page, reproduced here. There’s also this set of photos. Get your copy now, in case they disappear.

Reddit thread. BoingBoing post. Engadget article.

EDITED TO ADD (9/10): Someone has published a set of CAD files so you can make your own master keys.
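The CAD files underscore Weaver’s point: the secret in a pin-tumbler lock is nothing more than a short sequence of cut depths, the bitting code, and that sequence is exactly what a photograph leaks. Here’s a minimal sketch of the idea (my own illustration; the names and depths are hypothetical example data, not real TSA bitting codes):

```python
# A pin-tumbler key is just data: a sequence of cut depths (the "bitting code").
# Simplified model: real locks add a keyway profile and tolerances, but the
# secret that a photograph leaks is exactly this sequence of numbers.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class PinTumblerLock:
    shear_depths: Tuple[int, ...]  # depth each pin needs to sit at the shear line

    def opens_with(self, bitting: Tuple[int, ...]) -> bool:
        # The lock opens if and only if every cut matches its pin's depth.
        return bitting == self.shear_depths

# Hypothetical example data, NOT a real TSA bitting code.
tsa_style_lock = PinTumblerLock(shear_depths=(2, 5, 3, 1, 4))

# Measuring the cuts off a published photo recovers the same tuple, so anyone
# can cut, file, or 3-D print a working duplicate.
bitting_read_from_photo = (2, 5, 3, 1, 4)
print(tsa_style_lock.opens_with(bitting_read_from_photo))  # True
```

Once the bitting is public, every lock keyed to it is effectively open, which is what makes this kind of backdoor so fragile.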

Posted on September 8, 2015 at 6:02 AM

Regularities in Android Lock Patterns

Interesting:

Marte Løge, a 2015 graduate of the Norwegian University of Science and Technology, recently collected and analyzed almost 4,000 ALPs [Android lock patterns] as part of her master’s thesis. She found that a large percentage of them — 44 percent — started in the top left-most node of the screen. A full 77 percent of them started in one of the four corners. The average number of nodes was about five, meaning there were fewer than 9,000 possible pattern combinations. A significant percentage of patterns had just four nodes, shrinking the pool of available combinations to 1,624. More often than not, patterns moved from left to right and top to bottom, another factor that makes guessing easier.
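Those counts are easy to check by brute force. The following sketch (my own illustration, not code from Løge’s thesis) enumerates every pattern permitted by Android’s rule that a stroke may only pass over a grid node that has already been used. It reproduces the 1,624 figure for four-node patterns and yields 7,152 for five-node ones, consistent with “fewer than 9,000”:

```python
# Count valid Android lock patterns on the 3x3 grid. Nodes are numbered:
#   0 1 2
#   3 4 5
#   6 7 8
# Rule: a move that passes directly over a third node is legal only if
# that middle node has already been used.

MIDDLE = {}
for a, b, m in [(0, 2, 1), (3, 5, 4), (6, 8, 7),   # horizontal skips
                (0, 6, 3), (1, 7, 4), (2, 8, 5),   # vertical skips
                (0, 8, 4), (2, 6, 4)]:             # diagonal skips
    MIDDLE[(a, b)] = MIDDLE[(b, a)] = m

def count_patterns(length, visited=(), last=None):
    """Depth-first enumeration of patterns with exactly `length` nodes."""
    if len(visited) == length:
        return 1
    total = 0
    for nxt in range(9):
        if nxt in visited:
            continue
        mid = MIDDLE.get((last, nxt))
        if mid is not None and mid not in visited:
            continue  # illegal: jumps over an unused node
        total += count_patterns(length, visited + (nxt,), nxt)
    return total

for n in (4, 5):
    print(n, "nodes:", count_patterns(n))  # 4 -> 1624, 5 -> 7152
```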

EDITED TO ADD (9/10): Similar research on this sort of thing.

Posted on August 26, 2015 at 6:24 AM
