Entries Tagged "obscurity"

Hacking Cars Through Wireless Tire-Pressure Sensors

Still minor, but this kind of thing is only going to get worse:

The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they’re wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.

The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.

More:

Now, Ishtiaq Rouf at the USC and other researchers have found a vulnerability in the data transfer mechanisms between CANbus controllers and wireless tyre pressure monitoring sensors which allows misleading data to be injected into a vehicle’s system and allows remote recording of the movement profiles of a specific vehicle. The sensors, which are compulsory for new cars in the US (and probably soon in the EU), each communicate individually with the vehicle’s on-board electronics. Although a loss of pressure can also be detected via differences in the rotational speed of fully inflated and partially inflated tyres on the same axle, such indirect methods are now prohibited in the US.

Paper here. This is a previous paper on automobile computer security.
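
The details are in the paper, but the core weakness is easy to summarize: each sensor broadcasts a static, unique ID with no authentication, encryption, or freshness check, so anyone who can receive the packets can track the car, and anyone who can transmit can forge readings. Here is a minimal Python illustration; the packet layout, field names, and sizes are invented for the example and are not the actual protocols the researchers analyzed.

```python
# Illustrative only: a made-up TPMS packet layout and two toy "attacks" that
# follow from broadcasting a static, unauthenticated sensor ID. The real
# protocols studied in the paper differ; all fields here are assumptions.
from dataclasses import dataclass
from collections import defaultdict
import struct, time

@dataclass
class TpmsPacket:
    sensor_id: int       # unique per sensor, so it doubles as a tracking beacon
    pressure_kpa: int
    temperature_c: int

    def encode(self) -> bytes:
        # No MAC, no encryption, no freshness counter -- that is the problem.
        return struct.pack(">IHb", self.sensor_id, self.pressure_kpa, self.temperature_c)

    @staticmethod
    def decode(raw: bytes) -> "TpmsPacket":
        sensor_id, pressure, temp = struct.unpack(">IHb", raw)
        return TpmsPacket(sensor_id, pressure, temp)

# 1) Tracking: an eavesdropper only needs to log which IDs it hears, and where.
sightings = defaultdict(list)
def eavesdrop(raw: bytes, location: str) -> None:
    pkt = TpmsPacket.decode(raw)
    sightings[pkt.sensor_id].append((time.time(), location))

# 2) Spoofing: nothing stops anyone from transmitting a forged low-pressure
#    reading for an ID they previously overheard.
def forge_low_pressure(sensor_id: int) -> bytes:
    return TpmsPacket(sensor_id, pressure_kpa=80, temperature_c=20).encode()
```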

EDITED TO ADD (8/25): This is a better article.

Posted on August 17, 2010 at 6:42 AM

The Chaocipher

The Chaocipher is a mechanical encryption algorithm invented in 1918. No one was able to reverse-engineer the algorithm, given sets of plaintexts and ciphertexts—at least, nobody publicly. On the other hand, I don’t know how many people tried, or even knew about the algorithm. I’d never heard of it before now. Anyway, for the first time, the algorithm has been revealed. Of course, it’s not able to stand up to computer cryptanalysis.
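
For the curious, here is a toy Python version of the two-alphabet mechanism as described in Moshe Rubin's 2010 write-up, as best I can reconstruct it: a left (ciphertext) alphabet and a right (plaintext) alphabet, each reshuffled after every letter. Treat the exact permutation offsets as my reading of that description, not as an authoritative specification.

```python
# Toy Chaocipher, following Moshe Rubin's 2010 description of the algorithm
# as I understand it; the permutation offsets are the part to double-check.
# Positions are 0-based here: "zenith" = index 0, "nadir" = index 13.

def _permute_left(left: list, ct_letter: str) -> list:
    i = left.index(ct_letter)
    a = left[i:] + left[:i]          # rotate the ciphertext letter to the zenith
    extracted = a.pop(1)             # pull out the letter at zenith+1
    a.insert(13, extracted)          # reinsert it at the nadir
    return a

def _permute_right(right: list, pt_letter: str) -> list:
    i = right.index(pt_letter)
    a = right[i:] + right[:i]        # rotate the plaintext letter to the zenith...
    a = a[1:] + a[:1]                # ...then shift the whole alphabet one more step
    extracted = a.pop(2)             # pull out the letter at zenith+2
    a.insert(13, extracted)          # reinsert it at the nadir
    return a

def chaocipher(text: str, left_alphabet: str, right_alphabet: str, decrypt: bool = False) -> str:
    left, right = list(left_alphabet), list(right_alphabet)
    out = []
    for ch in text:
        if decrypt:                  # ciphertext letters live in the left alphabet
            ct, pt = ch, right[left.index(ch)]
        else:                        # plaintext letters live in the right alphabet
            pt, ct = ch, left[right.index(ch)]
        out.append(pt if decrypt else ct)
        left = _permute_left(left, ct)
        right = _permute_right(right, pt)
    return "".join(out)

if __name__ == "__main__":
    L = "HXUCZVAMDSLKPEFJRIGTWOBNYQ"   # example starting alphabets
    R = "PTLNBQDEOYSFAVZKGJRIHWXUMC"
    msg = "WELLDONEISBETTERTHANWELLSAID"
    ct = chaocipher(msg, L, R)
    assert chaocipher(ct, L, R, decrypt=True) == msg
    print(ct)
```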

Posted on July 13, 2010 at 7:21 AM

Security Through Obscurity

Sometimes security through obscurity works:

Yes, the New York Police Department provided an escort, but during more than eight hours on Saturday, one of the great hoards of coins and currency on the planet, worth hundreds of millions of dollars, was utterly unalarmed as it was bumped through potholes, squeezed by double-parked cars and slowed by tunnel-bound traffic during the trip to its fortresslike new vault a mile to the north.

In the end, the move did not become a caper movie.

“The idea was to make this as inconspicuous as possible,” said Ute Wartenberg Kagan, executive director of the American Numismatic Society. “It had to resemble a totally ordinary office move.”

[…]

Society staff members were pledged to secrecy about the timing of the move, and “we didn’t tell our movers what the cargo was until the morning of,” said James McVeigh, operations manager of Time Moving and Storage Inc. of Manhattan, referring to the crew of 20 workers.

From my book Beyond Fear, pp. 211-12:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

This is a favorite story of mine. Not only can we analyze the complex security system intended to transport the diamond from continent to continent—the huge number of trusted people involved, making secrecy impossible; the involved series of steps with their associated seams, giving almost any organized gang numerous opportunities to pull off a theft—but we can contrast it with the sheer beautiful simplicity of the actual transportation plan. Whoever came up with it was really thinking—and thinking originally, boldly, and audaciously.

This kind of counterintuitive security is common in the world of gemstones. On 47th Street in New York, in Antwerp, in London: People walk around all the time with millions of dollars’ worth of gems in their pockets. The gemstone industry has formal guidelines: If the value of the package is under a specific amount, use the U.S. Mail. If it is over that amount but under another amount, use Federal Express. The Cullinan was again transported incognito; the British Royal Navy escorted an empty box across the North Sea to Amsterdam—where the diamond would be cut—while famed diamond cutter Abraham Asscher actually carried it in his pocket from London via train and night ferry to Amsterdam.

Posted on June 18, 2008 at 1:13 PM

Transporting a $1.9M Rare Coin

Excellent story of security by obscurity:

Feigenbaum put the dime, encased in a 3-inch-square block of plastic, in his pocket and, accompanied by a security guard, drove in an ordinary sedan directly to San Jose airport to catch the red-eye to Newark.

The overnight flight, he said, was the only way to make sure the dime would be in New York by the time the buyer’s bank opened in the morning. People who pay $1.9 million for dimes do not like to be kept waiting for them.

Feigenbaum had purchased a coach ticket, to avoid suspicion, but found himself upgraded to first class. That was a worry, because people in flip-flops, T-shirts and grubby jeans do not regularly ride in first class. But it would have been more suspicious to decline a free upgrade. So Feigenbaum forced himself to sit in first class, where he found himself to be the only passenger in flip-flops.

He was too nervous to sleep, he said. He did not watch the in-flight movie, which was “Firehouse Dog.” He turned down a Reuben sandwich and sensibly declined all offers of alcoholic beverages.

Shortly after boarding the plane, he transferred the dime from his pants pocket to his briefcase.

“I was worried that the dime might fall out of my pocket while I was sitting down,” Feigenbaum said.

All across the country, Feigenbaum kept checking to make sure the dime was safe by reaching into his briefcase to feel for it. Feigenbaum did not actually take the dime out of his briefcase, as it is suspicious to stare at dimes.

This isn’t the first time security through obscurity was employed to transport a very small and very valuable object. From Beyond Fear, pp. 211-212:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

Like all security measures, security by obscurity has its place. I wrote a lot more about the general concepts in this 2002 essay.

Posted on July 30, 2007 at 4:30 PM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.
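
To make "expert systems patrolling the transaction database" slightly more concrete, here is a toy rule-based check in Python. Everything about it (the rules, the thresholds, the merchant categories) is invented for illustration; real issuer systems are vastly more sophisticated.

```python
# Purely illustrative: a toy rule-based transaction check of the kind modern
# fraud-detection systems grew out of. Rules and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Transaction:
    card_id: str
    amount: float
    country: str
    merchant_category: str

def risk_score(txn: Transaction, recent: list) -> int:
    score = 0
    if recent and txn.country not in {t.country for t in recent[-10:]}:
        score += 2                        # card suddenly used in a new country
    if recent:
        avg = sum(t.amount for t in recent) / len(recent)
        if txn.amount > 5 * max(avg, 1.0):
            score += 2                    # amount far above this card's norm
    if txn.merchant_category in {"wire_transfer", "gift_cards"}:
        score += 1                        # categories fraudsters tend to favor
    return score

def decide(txn: Transaction, recent: list) -> str:
    s = risk_score(txn, recent)
    return "decline" if s >= 4 else "step-up verification" if s >= 2 else "approve"
```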

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Ultimate Secure Home

Wow:

For Sale By Owner – The Ultimate Secure Home:

Strategically located in the awesome San Juan mountains of Southwest Colorado, this patented steel-reinforced concrete earth home was built to withstand almost any natural or man-made disaster you can name. It is more secure, safe, and functional than any conventional house could ever be, yet still has a level of comfort that one might not expect to find in an underground home.

The list of features starts out reasonable, but the description of how and why it was built just keeps getting more surreal.

And, of course:

The exact location of the house will only be revealed to serious, pre-screened, and financially pre-qualified prospective buyers at an appropriate time. The owner believes that keeping the exact location secret to the general public is an important part of the home’s security.

What’s your vote? Real or hoax?

Posted on September 12, 2006 at 7:29 AM

Galileo Satellite Code Cracked

Anyone know more?

Members of Cornell’s Global Positioning System (GPS) Laboratory have cracked the so-called pseudo random number (PRN) codes of Europe’s first global navigation satellite, despite efforts to keep the codes secret. That means free access for consumers who use navigation devices—including handheld receivers and systems installed in vehicles—that need PRNs to listen to satellites.

Security by obscurity: it doesn’t work, and it’s a royal pain to recover when it fails.
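
A PRN code is just a deterministic pseudorandom chip sequence: a receiver that knows it can correlate against the incoming signal and despread it, and one that doesn't can recover it by recording and analyzing enough of the broadcast, which is what the Cornell group did. For flavor, here is the openly documented GPS C/A Gold-code generator from IS-GPS-200 in Python. Galileo's actual codes are constructed differently, so treat this as an analogy, not the recovered codes themselves.

```python
# The openly published GPS C/A ("coarse/acquisition") Gold-code generator,
# per IS-GPS-200: two 10-stage LFSRs, G1 and G2, XORed with a satellite-
# specific pair of G2 taps. Shown for PRN 1 (taps 2 and 6). Galileo's codes
# are built differently; this just shows what a PRN spreading code looks like.

def ca_code(prn_taps=(2, 6), length=1023):
    g1 = [1] * 10                       # both registers start as all ones
    g2 = [1] * 10
    chips = []
    for _ in range(length):
        # Output chip: G1 stage 10 XOR (two selected G2 stages)
        chip = g1[9] ^ g2[prn_taps[0] - 1] ^ g2[prn_taps[1] - 1]
        chips.append(chip)
        # G1 feedback taps: stages 3 and 10; G2 feedback taps: 2, 3, 6, 8, 9, 10
        fb1 = g1[2] ^ g1[9]
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [fb1] + g1[:9]             # shift right; feedback enters stage 1
        g2 = [fb2] + g2[:9]
    return chips

if __name__ == "__main__":
    code = ca_code()
    # First 10 chips of PRN 1 should read 1100100000 (octal 1440 in the spec).
    print("".join(map(str, code[:10])))
```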

Posted on July 11, 2006 at 11:30 AM

OpenDocument Format and the State of Massachusetts

OpenDocument format (ODF) is an alternative to Microsoft’s document, spreadsheet, and other Office file formats. (Here’s the homepage for the ODF standard; it’ll put you to sleep, I promise you.)

So far, nothing here is relevant to this blog. Except that Microsoft, with its proprietary Office document format, is spreading rumors that ODF is somehow less secure.

This, from the company that allows Office documents to embed arbitrary Visual Basic programs?

Yes, there is a way to embed scripts in ODF; this seems to be what Microsoft is pointing to. But at least ODF has a clean and open XML format, which allows layered security and the ability to remove scripts as needed. This is much more difficult in the binary Microsoft formats that effectively hide embedded programs.
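
Since an ODF file is just a ZIP archive of XML parts, the "layered security" point is concrete: a mail gateway or document filter can enumerate and strip the script-bearing parts with standard tools. A rough Python sketch follows; the Basic/, Scripts/, and Dialogs/ directory names are OpenOffice.org packaging conventions I am assuming here, not something mandated by the ODF spec.

```python
# Rough sketch: list the parts of an ODF package that can carry macros or
# event bindings, so a gateway could strip or quarantine them. The directory
# names checked here are OpenOffice.org conventions, not spec requirements.
import sys
import zipfile
import xml.etree.ElementTree as ET

SCRIPT_DIRS = ("Basic/", "Scripts/", "Dialogs/")

def find_scripty_parts(path: str) -> list:
    hits = []
    with zipfile.ZipFile(path) as odf:
        for name in odf.namelist():
            if name.startswith(SCRIPT_DIRS):
                hits.append(name)
        # Event bindings and embedded scripts show up as script-related
        # elements in content.xml; flag any element whose tag mentions "script".
        with odf.open("content.xml") as f:
            for _, elem in ET.iterparse(f):
                if "script" in elem.tag.lower():
                    hits.append("content.xml: <%s>" % elem.tag)
    return hits

if __name__ == "__main__":
    for part in find_scripty_parts(sys.argv[1]):
        print("possible macro content:", part)
```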

Microsoft’s claim that the open ODF is inherently less secure than the proprietary Office format is essentially an argument for security through obscurity. ODF is no less secure than current .doc and other proprietary formats, and may be—marginally, at least—more secure.

This document from the ODF people says it nicely:

There is no greater security risk, no greater ability to “manipulate code” or gain access to content using ODF than alternative document formats. Security should be addressed through policy decisions on information sharing, regardless of document format. Security exposures caused by programmatic extensions such as the visual basic macros that can be imbedded in Microsoft Office documents are well known and notorious, but there is nothing distinct about ODF that makes it any more or less vulnerable to security risks than any other format specification. The many engineers working to enhance the ODF specification are working to develop techniques to mitigate any exposure that may exist through these extensions.

This whole thing has heated up because Massachusetts recently required public records be held in OpenDocument format, which has put Microsoft into a bit of a tizzy. (Here are two commentaries on the security of that move.) I don’t know if it’s why Microsoft is submitting its Office Document Formats to ECMA for “open standardization,” but I’m sure it’s part of the reason.

Posted on December 7, 2005 at 2:21 PM
