Entries Tagged "insiders"

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They’re also difficult to secure because breaks are generally “class breaks”: the expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can then break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.
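
To make the leak concrete, here’s a minimal Python sketch of the underlying idea (a toy version of the classic 1995-style timing channel mentioned below, not the cache- and branch-level attack in the new paper): a naive square-and-multiply implementation of modular exponentiation does extra work for every 1 bit of the secret exponent, so its running time is correlated with the key.

```python
# Toy timing side channel (illustration only, not the published attack).
# A naive square-and-multiply modular exponentiation performs an extra
# multiply for each 1 bit of the secret exponent, so timing leaks the key.
import time

def naive_modexp(base: int, exponent: int, modulus: int) -> int:
    """Square-and-multiply over the exponent's bits, most significant first."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus      # always square
        if bit == "1":
            result = (result * base) % modulus    # extra multiply only for 1 bits
    return result

def average_time(exponent: int, trials: int = 200) -> float:
    """Average seconds per exponentiation for a fixed toy base and modulus."""
    base, modulus = 0x1234567, (1 << 512) - 569   # arbitrary toy parameters
    start = time.perf_counter()
    for _ in range(trials):
        naive_modexp(base, exponent, modulus)
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    sparse_key = 1 << 511          # a single 1 bit: few extra multiplies
    dense_key = (1 << 512) - 1     # all 1 bits: an extra multiply every round
    print(f"sparse exponent: {average_time(sparse_key):.6f} s/op")
    print(f"dense exponent:  {average_time(dense_key):.6f} s/op")
```

The dense key measurably takes longer, and a real attacker refines that correlation bit by bit. The standard defenses are constant-time implementations and blinding, which is exactly what’s hard to guarantee when the attacker owns the machine.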

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
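
A minimal sketch (hypothetical classes, no cryptography) makes the architectural difference plain: the stored-value design keeps the authoritative balance on a device in the attacker’s hands, while the debit design keeps it in the bank’s database and leaves nothing on the card worth tampering with.

```python
# Hypothetical sketch contrasting the two designs.

class StoredValueCard:
    """The balance lives on the card: the secret is on the attacker's device."""
    def __init__(self, balance_cents: int):
        self.balance_cents = balance_cents       # card owner can tamper with this

    def spend(self, amount_cents: int) -> bool:
        if amount_cents > self.balance_cents:
            return False
        self.balance_cents -= amount_cents
        return True

class Bank:
    """The authoritative balances live server-side, out of the attacker's reach."""
    def __init__(self):
        self.accounts: dict[str, int] = {}

    def authorize(self, account_number: str, amount_cents: int) -> bool:
        balance = self.accounts.get(account_number, 0)
        if amount_cents > balance:
            return False
        self.accounts[account_number] = balance - amount_cents
        return True

class DebitCard:
    """The card carries only an identifier; there is no secret to extract."""
    def __init__(self, account_number: str):
        self.account_number = account_number

    def spend(self, bank: Bank, amount_cents: int) -> bool:
        return bank.authorize(self.account_number, amount_cents)
```

Tampering with a StoredValueCard mints money; tampering with a DebitCard accomplishes nothing, because the bank, not the device, is authoritative.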

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in its scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, stealing end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and claim the prize themselves at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

Attacking Bank-Card PINs

Research paper by Omer Berkman and Odelia Moshe Ostrovsky: “The Unbearable Lightness of PIN Cracking”:

Abstract. We describe new attacks on the financial PIN processing API. The attacks apply to switches as well as to verification facilities. The attacks are extremely severe allowing an attacker to expose customer PINs by executing only one or two API calls per exposed PIN. One of the attacks uses only the translate function which is a required function in every switch. The other attacks abuse functions that are used to allow customers to select their PINs online. Some of the attacks can be applied on a switch even though the attacked functions require issuer’s keys which do not exist on a switch. This is particularly disturbing as it was widely believed that functions requiring issuer’s keys cannot do any harm if the respective keys are unavailable.

Basically, the paper describes an inherent flaw in the way ATM PINs are encrypted and transmitted on the international financial networks, making them vulnerable to attack by malicious insiders at a bank.
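
For background, PINs typically travel as encrypted ISO 9564 format-0 “PIN blocks”: the PIN is XORed with digits of the account number (PAN), the result is encrypted under a zone key, and each switch along the route calls an HSM “translate” function to re-encrypt the block from one zone key to the next without ever seeing the clear PIN. Here’s a rough sketch of the standard format-0 construction only (the encryption and the translate call are omitted, and this is not the paper’s attack):

```python
# Sketch of the cleartext ISO 9564 format-0 PIN block. In practice this
# 8-byte block is immediately encrypted under a zone PIN key, and HSMs
# "translate" it between keys as it crosses the network.

def iso0_pin_block(pin: str, pan: str) -> bytes:
    assert 4 <= len(pin) <= 12 and pin.isdigit()
    assert len(pan) >= 13 and pan.isdigit()
    # PIN field: control nibble 0, PIN length, PIN digits, padded with F.
    pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")
    # PAN field: four zeros, then the 12 rightmost PAN digits
    # excluding the final check digit.
    pan_field = "0000" + pan[:-1][-12:]
    # The PIN block is the XOR of the two 8-byte fields.
    return bytes(a ^ b for a, b in zip(bytes.fromhex(pin_field),
                                       bytes.fromhex(pan_field)))

if __name__ == "__main__":
    print(iso0_pin_block("1234", "4000001234567899").hex())
```

Because the PAN is mixed into the block, an insider who can get an HSM to translate blocks under attacker-chosen PANs or between formats learns relationships that narrow down the PIN; that translate capability is, roughly, what the paper’s attacks abuse.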

One of the most disturbing aspects of the attack is that you’re only as secure as the least trustworthy bank on the network. Instead of just having to trust that your own issuing bank has good security against insider fraud, you have to trust every other financial institution on the network as well. An insider at another bank can crack your ATM PIN if you ever withdraw money from one of that bank’s ATMs.

The authors tell me that they’ve contacted the major credit card companies and banks with this information, and haven’t received much of a response. They believe it is now time to alert the public.

Posted on November 17, 2006 at 7:31 AM

Insider Identity Theft

CEO arrested for stealing the identities of his employees:

Terrence D. Chalk, 44, of White Plains was arraigned in federal court in White Plains, along with his nephew, Damon T. Chalk, 35, after an FBI investigation turned up the curious lending and spending habits. The pair are charged with submitting some $1 million worth of credit applications using the names and personal information—names, addresses and social security numbers—of some of Compulinx’s 50 employees. According to federal prosecutors, the employees’ information was used without their knowledge; the Chalks falsely represented to the lending institutions, in writing and in face-to-face meetings, that the employees were actually officers of the company.

Posted on November 2, 2006 at 12:15 PM

New Diebold Vulnerability

Ed Felten and his team at Princeton have analyzed a Diebold machine:

This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities—a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine’s hardware and software and the adoption of more rigorous election procedures.

(Executive summary. Full paper. FAQ. Video demonstration.)

Salon said:

Diebold has repeatedly disputed such findings as speculation. But the Princeton study appears to demonstrate conclusively that a single malicious person could insert a virus into a machine and flip votes. The study also reveals a number of other vulnerabilities, including that voter access cards used on Diebold systems could be created inexpensively on a personal laptop computer, allowing people to vote as many times as they wish.

More news stories.

Posted on September 14, 2006 at 3:32 PM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red herring arguments that led to CheckPoint not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge, because the underlying OS and applications are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Employee Theft at Australian Mint

You’d think a national mint would have better security against insiders.

But Justice Connolly also criticised security at the mint, saying he was amazed a theft on this scale could happen.

The court heard Grzeskowiac, 48, of the southern Canberra suburb of Monash, simply scooped coins from the production line into his pockets before transferring them to his boots or lunchbox in a toilet cubicle.

Over a 10-month period he walked out with an average of $600 a day.

Justice Connolly expressed astonishment that the mint’s security procedures were so lax.

“I find it hard to believe that 150 coins could be concealed in each boot and a person could still walk through the security system,” he said.

Justice Connolly also said he was amazed the mint could give no indication of just how many coins had actually gone missing.

“I would like to think those working at the other mint factory printing $100 notes might be subject to a better system of security,” he said.

Posted on June 27, 2006 at 7:45 AM

Aligning Interest with Capability

Have you ever been to a retail store and seen this sign on the register: “Your purchase free if you don’t get a receipt”? You almost certainly didn’t see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store. That sign is a security device, and a clever one at that. And it illustrates a very important rule about security: it works best when you align interests with capability.

If you’re a store owner, one of your security worries is employee theft. Your employees handle cash all day, and dishonest ones will pocket some of it for themselves. The history of the cash register is mostly a history of preventing this kind of theft. Early cash registers were just boxes with a bell attached. The bell rang when an employee opened the box, alerting the store owner—who was presumably elsewhere in the store—that an employee was handling money.

The register tape was an important development in security against employee theft. Every transaction is recorded on write-once media, in such a way that it’s impossible to insert or delete transactions. It’s an audit trail. Using that audit trail, the store owner can count the cash in the drawer and compare the amount with what the register tape says. Any discrepancies can be docked from the employee’s paycheck.
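
The register tape is a physical append-only log. A digital analogue (a hypothetical sketch, not any particular point-of-sale system) gets the same tamper evidence from a hash chain: each record’s hash commits to everything recorded before it, so transactions can’t be silently inserted, altered or deleted in the middle of the log.

```python
# Hypothetical hash-chained transaction log: a digital register tape.
import hashlib

class RegisterTape:
    def __init__(self):
        self.entries: list[tuple[str, str]] = []   # (transaction, chained hash)
        self._last_hash = hashlib.sha256(b"genesis").hexdigest()

    def record(self, transaction: str) -> None:
        """Append a transaction; its hash covers the entire prior history."""
        h = hashlib.sha256((self._last_hash + transaction).encode()).hexdigest()
        self.entries.append((transaction, h))
        self._last_hash = h

    def verify(self) -> bool:
        """Recompute the chain; any edit to an earlier entry breaks it."""
        h = hashlib.sha256(b"genesis").hexdigest()
        for transaction, stored in self.entries:
            h = hashlib.sha256((h + transaction).encode()).hexdigest()
            if h != stored:
                return False
        return True

if __name__ == "__main__":
    tape = RegisterTape()
    tape.record("sale: $4.50")
    tape.record("sale: $12.00")
    tape.entries[0] = ("sale: $0.50", tape.entries[0][1])   # dishonest edit
    print(tape.verify())   # False: the tampering is detectable
```

Like the paper tape, this only detects tampering after the fact; the deterrent works because the employee knows the owner can audit it.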

If you’re a dishonest employee, you have to keep transactions off the register. If someone hands you money for an item and walks out, you can pocket that money without anyone being the wiser. And, in fact, that’s how employees steal cash in retail stores.

What can the store owner do? He can stand there and watch the employee, of course. But that’s not very efficient; the whole point of having employees is so that the store owner can do other things. The customer is standing there anyway, but the customer doesn’t care one way or another about a receipt.

So here’s what the employer does: he hires the customer. By putting up a sign saying “Your purchase free if you don’t get a receipt,” the employer is getting the customer to guard the employee. The customer makes sure the employee gives him a receipt, and employee theft is reduced accordingly.

There is a general rule in security to align interest with capability. The customer has the capability of watching the employee; the sign gives him the interest.

In Beyond Fear I wrote about ATM fraud; you can see the same mechanism at work:

“When ATM cardholders in the US complained about phantom withdrawals from their accounts, the courts generally held that the banks had to prove fraud. Hence, the banks’ agenda was to improve security and keep fraud low, because they paid the costs of any fraud. In the UK, the reverse was true: The courts generally sided with the banks and assumed that any attempts to repudiate withdrawals were cardholder fraud, and the cardholder had to prove otherwise. This caused the banks to have the opposite agenda; they didn’t care about improving security, because they were content to blame the problems on the customers and send them to jail for complaining. The result was that in the US, the banks improved ATM security to forestall additional losses—most of the fraud actually was not the cardholder’s fault—while in the UK, the banks did nothing.”

The banks had the capability to improve security. In the US, they also had the interest. But in the UK, only the customer had the interest. It wasn’t until the UK courts reversed themselves and aligned interest with capability that ATM security improved.

Computer security is no different. For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don’t have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They’ll align interest with capability, and they’ll improve software security.

One last story… In Italy, tax fraud used to be a national hobby. (It may still be; I don’t know.) The government was tired of retail stores not reporting sales and paying taxes, so it passed a law regulating the customers. Any customer stopped within a certain distance of a retail store just after buying something had to produce a receipt or be fined. Just as in the “Your purchase free if you don’t get a receipt” story, the law turned the customers into tax inspectors. They demanded receipts from merchants, which in turn forced the merchants to create a paper audit trail for the purchase and pay the required tax.

This was a great idea, but it didn’t work very well. Customers, especially tourists, didn’t like being stopped by the police. People started demanding that the police prove they had just purchased an item. Threatening people with fines if they didn’t guard merchants wasn’t as effective an enticement as offering people a reward if they didn’t get a receipt.

Interest must be aligned with capability, but you need to be careful how you generate interest.

This essay originally appeared on Wired.com.

Posted on June 1, 2006 at 6:27 AM

Diebold Doesn't Get It

This quote sums up nicely why Diebold should not be trusted to secure election machines:

David Bear, a spokesman for Diebold Election Systems, said the potential risk existed because the company’s technicians had intentionally built the machines in such a way that election officials would be able to update their systems in years ahead.

“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” he said. “I don’t believe these evil elections people exist.”

If you can’t get the threat model right, you can’t hope to secure the system.

Posted on May 22, 2006 at 3:22 PM

Kent Robbery

Something like 50 million pounds was stolen from a banknote storage depot in the UK. BBC has a good chronology of the theft.

The Times writes:

Large-scale cash robbery was once a technical challenge: drilling through walls, short-circuiting alarms, gagging guards and stationing the get-away car. Today, the weak points in the banks’ defences are not grilles and vaults, but human beings. Stealing money is now partly a matter of psychology. The success of the Tonbridge robbers depended on terrifying Mr Dixon into opening the doors. They had studied their victim. They knew the route he took home, and how he would respond when his wife and child were in mortal danger. It did not take gelignite to blow open the vaults; it took fear, in the hostage technique known as “tiger kidnapping”, so called because of the predatory stalking that precedes it. Tiger kidnapping is the point where old-fashioned crime meets modern terrorism.

Posted on February 27, 2006 at 12:26 PM
