Entries Tagged "safes"


The Story of the World's Largest Diamond Heist

Read the whole thing:

He took the elevator, descending two floors underground to a small, claustrophobic room—the vault antechamber. A 3-ton steel vault door dominated the far wall. It alone had six layers of security. There was a combination wheel with numbers from 0 to 99. To enter, four numbers had to be dialed, and the digits could be seen only through a small lens on the top of the wheel. There were 100 million possible combinations.

Power tools wouldn’t do the trick. The door was rated to withstand 12 hours of nonstop drilling. Of course, the first vibrations of a drill bit would set off the embedded seismic alarm anyway.

The door was monitored by a pair of abutting metal plates, one on the door itself and one on the wall just to the right. When armed, the plates formed a magnetic field. If the door were opened, the field would break, triggering an alarm. To disarm the field, a code had to be typed into a nearby keypad. Finally, the lock required an almost-impossible-to-duplicate foot-long key.

During business hours, the door was actually left open, leaving only a steel grate to prevent access. But Notarbartolo had no intention of muscling his way in when people were around and then shooting his way out. Any break-in would have to be done at night, after the guards had locked down the vault, emptied the building, and shuttered the entrances with steel roll-gates. During those quiet midnight hours, nobody patrolled the interior—the guards trusted their technological defenses.

Notarbartolo pressed a buzzer on the steel grate. A guard upstairs glanced at the video feed, recognized Notarbartolo, and remotely unlocked the steel grate. Notarbartolo stepped inside the vault.

It was silent—he was surrounded by thick concrete walls. The place was outfitted with motion, heat, and light detectors. A security camera transmitted his movements to the guard station, and the feed was recorded on videotape. The safe-deposit boxes themselves were made of steel and copper and required a key and combination to open. Each box had 17,576 possible combinations.

Notarbartolo went through the motions of opening and closing his box and then walked out. The vault was one of the hardest targets he’d ever seen.
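The combination figures in the excerpt are simple combinatorics, and they check out. Here is a minimal arithmetic sketch in Python; reading the box figure as a three-position dial over a 26-letter alphabet is my assumption, not something the excerpt states:

    # Sanity-check the combination counts quoted in the excerpt.

    # Vault door: four numbers dialed, each from 0 to 99.
    door_combinations = 100 ** 4
    print(door_combinations)   # 100000000 -- the "100 million possible combinations"

    # Safe-deposit boxes: 17,576 combinations. That equals 26 ** 3, which is
    # consistent with a three-position dial over a 26-letter alphabet
    # (an assumption; the excerpt doesn't describe the dial).
    box_combinations = 26 ** 3
    print(box_combinations)    # 17576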

Definitely a movie plot.

Posted on March 12, 2009 at 6:36 AM

Lego Safe

Nice:

You might think that a Lego safe would be easy to open. Maybe just remove a few bricks and you’re in. But that’s not the case with this thing, the cutting edge of Lego safe technology. The safe weighs 14 pounds and has a motion detecting alarm so it can’t be moved without creating a huge ruckus.

Posted on November 21, 2008 at 1:07 PM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. It’s difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the security needs to be secure not against the average attacker, but against the smartest, most motivated and best funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.
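To make the timing leak concrete, here is a minimal, self-contained Python sketch. It is not the attack from the paper and not real RSA; it only shows the underlying principle: a naive square-and-multiply modular exponentiation does an extra multiply for every 1 bit of the secret exponent, so the exponent's bit pattern shows up in how long the operation takes.

    import time

    def modexp_naive(base, exponent, modulus):
        """Left-to-right square-and-multiply. The multiply step runs only on
        1 bits, so running time depends on the secret exponent's bit pattern."""
        result = 1
        for bit in bin(exponent)[2:]:
            result = (result * result) % modulus      # always square
            if bit == "1":
                result = (result * base) % modulus    # extra multiply: the leak
        return result

    def time_exponent(exponent, trials=200):
        base, modulus = 0xDEADBEEF, (1 << 512) - 569  # arbitrary toy parameters
        start = time.perf_counter()
        for _ in range(trials):
            modexp_naive(base, exponent, modulus)
        return time.perf_counter() - start

    # Two secret exponents with the same bit length but different numbers of 1 bits:
    sparse = 1 << 255            # a single 1 bit: almost no extra multiplies
    dense = (1 << 256) - 1       # all 1 bits: an extra multiply at every step
    print("sparse exponent:", time_exponent(sparse))
    print("dense exponent: ", time_exponent(dense))
    # The dense exponent takes measurably longer. That data dependence is the
    # side channel; constant-time implementations exist to eliminate it.

Real attacks are far more refined than this toy, but the principle is the same: the device's owner can measure anything the device does.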

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
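A toy Python sketch of that design difference (hypothetical classes, purely illustrative, not any real card or banking system): in the stored-value model the balance is data on the device the attacker holds, while in the debit model the card carries only an identifier and the authoritative balance sits in the bank's database.

    # Hypothetical, illustrative classes -- not a real card or banking API.

    class StoredValueCard:
        """The balance lives on the device its owner physically controls."""
        def __init__(self, balance):
            self.balance = balance          # a secret the card must defend from its own owner

        def spend(self, amount):
            if amount <= self.balance:
                self.balance -= amount
                return True
            return False

    class DebitCard:
        """The card is just a pointer; nothing on it is worth protecting."""
        def __init__(self, account_number):
            self.account_number = account_number

    class Bank:
        def __init__(self):
            self.ledger = {}                # account number -> balance, under the bank's control

        def spend(self, card, amount):
            balance = self.ledger.get(card.account_number, 0)
            if amount <= balance:
                self.ledger[card.account_number] = balance - amount
                return True
            return False

    # Rewriting a stored-value card creates money; rewriting a debit card changes
    # nothing the bank trusts.
    svc = StoredValueCard(balance=10)
    svc.balance = 1_000_000                 # the attack: edit the data you hold
    print(svc.spend(999_999))               # True -- the card believes its own data

    bank = Bank()
    bank.ledger["12345"] = 10
    card = DebitCard("12345")
    print(bank.spend(card, 999_999))        # False -- the bank's ledger decides, not the card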

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and later claim the prize themselves.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

Safecracking

Matt Blaze has written an excellent paper: “Safecracking for the computer scientist.”

It has completely pissed off the locksmithing community.

There is a reasonable debate to be had about secrecy versus full disclosure, but a lot of these comments are just mean. Blaze is not being dishonest. His results are not trivial. I believe that the physical security community has a lot to learn from the computer security community, and that the computer security community has a lot to learn from the physical security community. Blaze’s work in physical security has important lessons for computer security—and, as it turns out, physical security—notwithstanding these people’s attempt to trivialize it in their efforts to attack him.

Posted on January 14, 2005 at 8:18 AM
