Entries Tagged "theft"

Torah Security

According to Jewish law, Torahs must be identical. When you make a copy, you cannot change or add a single character. That means you can’t write “Property of….” You can’t add a serial number. You can’t make any kind of identifying marks.

This turns out to be a problem when Torahs are stolen: there is no way to show that a recovered scroll is stolen property.

Now there’s a method of identifying Torahs without violating Jewish law:

Called the Universal Torah Registry, the system works like this: A synagogue mails in a form with its contact information and the number of Torahs it wants to place in the system, and the registry sends back a computer-coded template for each scroll. The 3.5- by 8-inch template resembles an IBM punch card, with eight holes arranged so their position relative to one another describes a unique identification number in a proprietary code.

A rabbi uses the template to perforate the coded pattern into the margins of the scroll with a tiny needle. To keep an enterprising thief from swapping the perforated segment with a section from another stolen scroll in some kind of twisted Torah chop shop, the registry recommends applying the code to 10 different segments of the scroll. Pollack says the code contains self-authentication features that keep a thief from invalidating it by just adding an extra hole in an arbitrary location.
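The registry's actual encoding is proprietary and hasn't been published, so the following is only a toy sketch (in Python) of the general idea: an ID number is expanded into eight hole positions, read in order along the margin, with the last position acting as a self-check over the others, so a pattern with an added or moved hole no longer verifies. The constants and the check formula are assumptions for illustration, not the registry's scheme.

```python
# Toy illustration only -- the Universal Torah Registry's real code is
# proprietary and not public. This sketch just shows the shape of the idea:
# an ID becomes eight hole positions, and the last position is a check
# value over the others, so adding or moving a single hole produces a
# pattern that no longer verifies.

POSITIONS = 64   # assumed number of distinct hole positions on the template
HOLES = 8        # the registry's templates use eight holes per code

def encode(scroll_id: int) -> list[int]:
    """Turn a registered ID into eight hole positions, read along the margin."""
    digits = []
    x = scroll_id
    for _ in range(HOLES - 1):           # seven positions carry the ID itself
        digits.append(x % POSITIONS)
        x //= POSITIONS
    check = sum(digits) % POSITIONS      # eighth position is the self-check
    return digits + [check]

def verify(holes: list[int]) -> bool:
    """A pattern with an extra, missing, or single moved hole fails this check."""
    return (
        len(holes) == HOLES
        and all(0 <= h < POSITIONS for h in holes)
        and sum(holes[:-1]) % POSITIONS == holes[-1]
    )

def decode(holes: list[int]) -> int:
    """Recover the registered ID from a verified pattern."""
    scroll_id = 0
    for d in reversed(holes[:-1]):
        scroll_id = scroll_id * POSITIONS + d
    return scroll_id

pattern = encode(123456)
assert verify(pattern) and decode(pattern) == 123456
tampered = pattern[:-1] + [(pattern[-1] + 1) % POSITIONS]   # one hole moved
assert not verify(tampered)
```

Repeating such a code across ten segments of the scroll, as the registry recommends, is what makes swapping sections impractical; the self-check only protects an individual pattern from being altered.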

Posted on June 13, 2005 at 1:28 PM

Police Foil Bank Electronic Theft

From the BBC:

Police in London say they have foiled one of the biggest attempted bank thefts in Britain.

The plan was to steal £220m ($423m) from the London offices of the Japanese bank Sumitomo Mitsui.

Computer experts are believed to have tried to transfer the money electronically after hacking into the bank’s systems.

Not a lot of detail here, but it seems that the thieves got in using a keyboard recorder. It’s the simple attacks that you have to worry about….

Posted on April 4, 2005 at 12:51 PM

Smart Water

No, really. It’s liquid with a unique identifier that is linked to a particular owner.

Forensic Coding combined with microdot technology.

SmartWater has been designed to protect household property and motor vehicles. Each bottle of SmartWater solution contains a unique forensic code, which is assigned to a household or vehicle.

An additional feature of SmartWater Instant is the inclusion of tiny micro-dot particles which enable Police to quickly identify the true owner of the property.

The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.

Posted on February 10, 2005 at 9:20 AM

Burglars and "Feeling Secure"

From Confessions of a Master Jewel Thief by Bill Mason (Villard, 2003):

Nothing works more in a thief’s favor than people feeling secure. That’s why places that are heavily alarmed and guarded can sometimes be the easiest targets. The single most important factor in security—more than locks, alarms, sensors, or armed guards—is attitude. A building protected by nothing more than a cheap combination lock but inhabited by people who are alert and risk-aware is much safer than one with the world’s most sophisticated alarm system whose tenants assume they’re living in an impregnable fortress.

The author, a burglar, found that luxury condos were an excellent target. Although they had much more security technology than other buildings, they were vulnerable because no one believed a thief could get through the lobby.

Posted on December 17, 2004 at 9:21 AM

An Impressive Car Theft

The armored Mercedes belonging to the CEO of DaimlerChrysler has been stolen:

The black company car, which is worth about 800,000 euros ($1 million), disappeared on the night of Oct. 26, police spokesman Klaus-Peter Arand said in a telephone interview. The limousine, which sports a 12-cylinder engine and is equipped with a broadcasting device to help retrieve the car, hasn’t yet been found, the police said.

There are two types of thieves, whether they be car thieves or otherwise. First, there are the thieves that want a car, any car. And second, there are the thieves that want one particular car. Against the first type, any security measure that makes your car harder to steal than the car next to it is good enough. Against the second type, even a sophisticated GPS tracking system might not be enough.

Posted on December 1, 2004 at 11:01 AM

Computer Security and Liability

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn’t fixing the problem. We’re paying, but we still end up with insecurities.

The problem is insecure software. It’s bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that’s the problem. We’re not paying to improve the security of the underlying software. We’re paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won’t do it until it’s in their financial best interests to do so.

Today, the costs of insecure software aren’t borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that’s borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations’ best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
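To make that risk equation concrete, here is a back-of-the-envelope sketch in Python with entirely made-up numbers (the probabilities and dollar figures are illustrative assumptions, not data): a vendor's expected annual cost is what it spends on security plus the breach losses it actually bears, and security spending only becomes rational once liability puts those losses on the vendor's side of the equation.

```python
# Back-of-the-envelope sketch of the risk equation, with made-up numbers.
# Expected annual cost to the vendor = security spend + P(breach) * loss
# the vendor itself bears. Liability changes the "loss borne" term.

def expected_annual_cost(p_breach: float, loss_borne_by_vendor: float,
                         security_spend: float) -> float:
    return security_spend + p_breach * loss_borne_by_vendor

# Without liability, breach losses fall on customers, so the vendor's
# expected cost is roughly zero no matter how insecure the product is.
no_liability = expected_annual_cost(p_breach=0.10, loss_borne_by_vendor=0,
                                    security_spend=0)                     # 0.0

# With liability, suppose the vendor would bear $50M of breach losses,
# and spending $2M on secure development halves the breach probability.
do_nothing = expected_annual_cost(0.10, 50_000_000, security_spend=0)     # 5.0M
invest     = expected_annual_cost(0.05, 50_000_000, 2_000_000)            # 4.5M

# Only once the vendor bears the loss does the $2M investment pay off.
assert invest < do_nothing and no_liability < invest
```

The same arithmetic explains the current equilibrium: with the loss term near zero, every dollar a vendor spends on security is simply a dollar lost.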

Clearly, this isn’t all or nothing. There are many parties involved in a typical software attack. There’s the company that sold the software with the vulnerability in the first place. There’s the person who wrote the attack tool. There’s the attacker himself, who used the tool to break into a network. There’s the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn’t fall on the shoulders of the software vendor, just as 100% shouldn’t fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they’ll pass those on to us. It might not be cheaper than what we’re paying today. But as long as we’re going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they’re entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn’t a technological problem. It’s an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.

This essay originally appeared in Computerworld.

An interesting rebuttal of this piece is here.

Posted on November 3, 2004 at 3:00 PM
