Entries Tagged "trust"


Security Through Begging

From TechDirt:

Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It’s only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems—so that the next time this happens, there won’t be anyone on the network to download such documents.

Even if their begging works, it solves the wrong problem. Sad.

EDITED TO ADD (3/22): Another article.

Posted on March 20, 2006 at 2:01 PM

Caller ID Spoofing

What’s worse than a bad authentication system? A bad authentication system that people have learned to trust. According to the Associated Press:

In the last few years, Caller ID spoofing has become much easier. Millions of people have Internet telephone equipment that can be set to make any number appear on a Caller ID system. And several Web sites have sprung up to provide Caller ID spoofing services, eliminating the need for any special hardware.

For instance, Spoofcard.com sells a virtual “calling card” for $10 that provides 60 minutes of talk time. The user dials a toll-free number, then keys in the destination number and the Caller ID number to display.

Near as anyone can tell, this is perfectly legal. (Although the FCC is investigating.)

The applications for Caller ID spoofing are not limited to fooling people. There’s real fraud that can be committed:

Lance James, chief scientist at security company Secure Science Corp., said Caller ID spoofing Web sites are used by people who buy stolen credit card numbers. They will call a service such as Western Union, setting Caller ID to appear to originate from the card holder’s home, and use the credit card number to order cash transfers that they then pick up.

Exposing a similar vulnerability, Caller ID is used by credit-card companies to authenticate newly issued cards. The recipients are generally asked to call from their home phones to activate their cards.

And, of course, harmful pranks:

In one case, SWAT teams surrounded a building in New Brunswick, N.J., last year after police received a call from a woman who said she was being held hostage in an apartment. Caller ID was spoofed to appear to come from the apartment.

It’s also easy to break into a cell phone voice mailbox using spoofing, because many systems are set to automatically grant entry to calls from the owner of the account. Stopping that requires setting a PIN code or password for the mailbox.

I have never been a fan of Caller ID. My phone number is configured to block Caller ID on outgoing calls. A growing number of places refuse to accept calls with Caller ID blocked, however.

Posted on March 3, 2006 at 7:10 AM

Countering "Trusting Trust"

Way back in 1974, Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic 1984 speech, “Reflections on Trusting Trust.” Basically, an attacker changes a compiler binary to produce malicious versions of some programs, INCLUDING ITSELF. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries—the compiler rigged the disassembler, too.

This attack has long been part of the lore of computer security, and everyone knows that there’s no defense. And that makes this paper by David A. Wheeler so interesting. It’s “Countering Trusting Trust through Diverse Double-Compiling,” and here’s the abstract:

An Air Force evaluation of Multics, and Ken Thompson’s famous Turing award lecture “Reflections on Trusting Trust,” showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this attack goes undetected, even complete analysis of a system’s source code will not find the malicious code that is running, and methods for detecting this particular attack are not widely known. This paper describes a practical technique, termed diverse double-compiling (DDC), that detects this attack and some unintended compiler defects as well. Simply recompile the purported source code twice: once with a second (trusted) compiler, and again using the result of the first compilation. If the result is bit-for-bit identical with the untrusted binary, then the source code accurately represents the binary. This technique has been mentioned informally, but its issues and ramifications have not been identified or discussed in a peer-reviewed work, nor has a public demonstration been made. This paper describes the technique, justifies it, describes how to overcome practical challenges, and demonstrates it.

To see how this works, look at the attack. In a simple form, the attacker modifies the compiler binary so that whenever some targeted security code like a password check is compiled, the compiler emits the attacker’s backdoor code in the executable.

Now, this would be easy to get around by just recompiling the compiler. Since that will be done from time to time as bugs are fixed or features are added, a more robust form of the attack adds a step: whenever the compiler is itself compiled, it emits code that inserts the malicious logic into various programs, including itself.

Assuming the compiler source code is updated over time but never completely rewritten, this attack is essentially undetectable.
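
To make the mechanics concrete, here is a minimal toy sketch in Python of what such a subverted compiler does. Everything in it is invented for illustration; in particular, the real attack embeds its own source using quine-style techniques, a step that is only gestured at here.

    # A toy model of the subverted compiler, operating on source code as a
    # plain string. The trigger strings and payloads are invented for
    # illustration; a real attack would use quine-style tricks to embed its
    # own code rather than the comment appended below.

    def honest_compile(source: str) -> str:
        # Stand-in for real code generation.
        return "BINARY[" + source + "]"

    def subverted_compile(source: str) -> str:
        if "check_password" in source:            # trigger: a password check
            source = source.replace("check_password",
                                    "check_password_or_backdoor")  # payload
        if "subverted_compile" in source:         # trigger: the compiler itself
            # Payload: re-insert the subversion so that recompiling the
            # compiler from clean source still yields an infected binary.
            source += "\n# [attack logic re-inserted here]"
        return honest_compile(source)

    print(subverted_compile("def check_password(pw): ..."))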

Wheeler explains how to defeat this more robust attack. Suppose we have two completely independent compilers: A and T. More specifically, we have source code SA of compiler A, and executable code EA and ET. We want to determine if the binary of compiler A—EA—contains this trusting trust attack.

Here’s Wheeler’s trick:

Step 1: Compile SA with EA, yielding new executable X.

Step 2: Compile SA with ET, yielding new executable Y.

Since X and Y were generated by two different compilers, they should have different binary code but be functionally equivalent. So far, so good. Now:

Step 3: Compile SA with X, yielding new executable V.

Step 4: Compile SA with Y, yielding new executable W.

Since X and Y are functionally equivalent, V and W should be bit-for-bit equivalent.

And that’s how to detect the attack. If EA is infected with the robust form of the attack, then X and Y will be functionally different. And if X and Y are functionally different, then V and W will be bitwise different. So all you have to do is to run a binary compare between V and W; if they’re different, then EA is infected.
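
As a rough sketch, the whole check could be scripted as follows. The command-line syntax and file names are assumptions for illustration, and in practice the builds have to be made reproducible so that the final bitwise comparison is fair.

    # A sketch of the four compilations and the final bitwise compare.
    # The gcc-style command line ("compiler source -o output") and the file
    # names are assumptions; real toolchains differ, and the builds must be
    # made deterministic (fixed timestamps, paths, etc.) for the comparison
    # to be meaningful.
    import filecmp
    import subprocess

    SOURCE = "compiler_A_source.c"   # SA: the purported source of compiler A

    def compile_with(compiler: str, output: str) -> None:
        subprocess.run([compiler, SOURCE, "-o", output], check=True)

    compile_with("./EA", "X")   # Step 1: untrusted binary EA compiles SA
    compile_with("./ET", "Y")   # Step 2: trusted compiler ET compiles SA
    compile_with("./X", "V")    # Step 3: X compiles SA
    compile_with("./Y", "W")    # Step 4: Y compiles SA

    # If V and W differ in even one bit, EA does not correspond to SA.
    if filecmp.cmp("V", "W", shallow=False):
        print("V and W are identical: no trusting-trust attack detected in EA")
    else:
        print("V and W differ: EA is suspect")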

Now you might read this and think: “What’s the big deal? All I need to test if I have a trusted compiler is…another trusted compiler. Isn’t it turtles all the way down?”

Not really. You do have to trust a compiler, but you don’t have to know beforehand which one you must trust. If you have the source code for compiler T, you can test it against compiler A. Basically, you still have to have at least one executable compiler you trust. But you don’t have to know which one you should start trusting.

And the definition of “trust” is much looser. This countermeasure will only fail if both A and T are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can’t have the same triggers and payloads as the first. You can greatly increase the odds that the triggers and payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, and so on.

Also, the only thing compiler T has to do is compile the compiler-under-test. It can be hideously slow, produce code that is hideously slow, or only work on a machine that hasn’t been produced in a decade. You could create a compiler specifically for this task. And if you’re really worried about “turtles all the way down,” you can write compiler T yourself for a computer you built yourself from vacuum tubes that you made yourself. Since compiler T only has to occasionally recompile your “real” compiler, you can impose a lot of restrictions that you would never accept in a typical production-use compiler. And you can periodically check compiler T’s integrity using every other compiler out there.

For more detailed information, see Wheeler’s website.

Now, this technique only detects when the binary doesn’t match the source, so someone still needs to examine the compiler source code. But now you only have to examine the source code (a much easier task), not the binary.

It’s interesting: the “trusting trust” attack has actually gotten easier over time, because compilers have gotten increasingly complex, giving attackers more places to hide their attacks. Here’s how you can use a simpler compiler—that you can trust more—to act as a watchdog on the more sophisticated and more complex compiler.

Posted on January 23, 2006 at 6:19 AM

Forged Credentials and Security

In Beyond Fear, I wrote about the difficulty of verifying credentials. Here’s a real story about that very problem:

When Frank Coco pulled over a 24-year-old carpenter for driving erratically on Interstate 55, Coco was furious. Coco was driving his white Chevy Caprice with flashing lights and had to race in front of the young man and slam on his brakes to force him to stop.

Coco flashed his badge and shouted at the driver, Joe Lilja: “I’m a cop and when I tell you to pull over, you pull over, you motherf——!”

Coco punched Lilja in the face and tried to drag him out of his car.

But Lilja wasn’t resisting arrest. He wasn’t even sure what he’d done wrong.

“I thought, ‘Oh my God, I can’t believe he’s hitting me,’ ” Lilja recalled.

It was only after Lilja sped off to escape—leading Coco on a tire-squealing, 90-mph chase through the southwest suburbs—that Lilja learned the truth.

Coco wasn’t a cop at all.

He was a criminal.

There’s no obvious way to solve this. This is some of what I wrote in Beyond Fear:

Authentication systems suffer when they are rarely used and when people aren’t trained to use them.

[…]

Imagine you’re on an airplane, and Man A starts attacking a flight attendant. Man B jumps out of his seat, announces that he’s a sky marshal, and that he’s taking control of the flight and the attacker. (Presumably, the rest of the plane has subdued Man A by now.) Man C then stands up and says: “Don’t believe Man B. He’s not a sky marshal. He’s one of Man A’s cohorts. I’m really the sky marshal.”

What do you do? You could ask Man B for his sky marshal identification card, but how do you know what an authentic one looks like? If sky marshals travel completely incognito, perhaps neither the pilots nor the flight attendants know what a sky marshal identification card looks like. It doesn’t matter if the identification card is hard to forge if the person authenticating the credential doesn’t have any idea what a real card looks like.

[…]

Many authentication systems are even more informal. When someone knocks on your door wearing an electric company uniform, you assume she’s there to read the meter. Similarly with deliverymen, service workers, and parking lot attendants. When I return my rental car, I don’t think twice about giving the keys to someone wearing the correct color uniform. And how often do people inspect a police officer’s badge? The potential for intimidation makes this security system even less effective.

Posted on January 13, 2006 at 7:00 AM

Kevin Kelly on Anonymity

He’s against it:

More anonymity is good: that’s a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private.

However in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it’s good for the system by enabling the occasional whistleblower, or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

There’s a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body to make it stronger.

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible.

I don’t even know where to begin. Anonymity is essential for free and fair elections. It’s essential for democracy and, I think, liberty. It’s essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority…and to protect individual self-respect.

Kelly makes the very valid point that reputation makes society work. But that doesn’t mean that 1) reputation can’t be anonymous, or 2) anonymity isn’t also essential for society to work.

I’m writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.

Posted on January 5, 2006 at 1:20 PM

ID Cards and ID Fraud

Unforeseen security effects of weak ID cards:

It can even be argued that the introduction of the photocard licence has encouraged ID fraud. It has been relatively easy for fraudsters to obtain a licence, but because it looks and feels like ‘photo ID’, it is far more readily accepted as proof of identity than the paper licence is, and can therefore be used directly as an ID document or to support the establishment of stronger fraudulent ID, particularly in countries familiar with ID cards in this format, but perhaps unfamiliar with the relative strengths of British ID documents.

During the Commons ID card debates this kind of process was described by Tory MP Patrick Mercer, drawing on his experience as a soldier in Northern Ireland, where photo driving licences were first introduced as an anti-terror measure. This “quasi-identity card… I think—had a converse effect to that which the Government sought… anybody who had such a card or driving licence on their person had a pass, which, if shown to police or soldiers, gave them free passage. So, it had precisely the opposite effect to that which was intended.”

Effectively – as security experts frequently point out – apparently stronger ID can have a negative effect in that it means that the people responsible for checking it become more likely to accept it as conclusive, and less likely to consider the individual bearing it in any detail. A similar effect has been observed following the introduction of chip and PIN credit cards, where ownership of the card and knowledge of the PIN is now almost always viewed as conclusive.

Posted on December 30, 2005 at 1:51 PM

Idiotic Article on TPM

This is just an awful news story.

“TPM” stands for “Trusted Platform Module.” It’s a chip that may soon be in your computer that will try to enforce security: both your security, and the security of software and media companies against you. It’s complicated, and it will prevent some attacks. But there are dangers. And lots of ways to hack it. (I’ve written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)

In fact, with TPM, your bank wouldn’t even need to ask for your username and password—it would know you simply by the identification on your machine.

Since when is “your computer” the same as “you”? And since when is identifying a computer the same as authenticating the user? And until we can eliminate bot networks and “owned” machines, there’s no way to know who is controlling your computer.

Of course you could always “fool” the system by starting your computer with your unique PIN or fingerprint and then letting another person use it, but that’s a choice similar to giving someone else your credit card.

Right, letting someone use your computer is the same as letting someone use your credit card. Does he have any idea that there are shared computers that you can rent and use? Does he know any families that share computers? Does he ever have friends who visit him at home? There are lots of ways a PIN can be guessed or stolen.

Oh, I can’t go on.

My guess is the reporter was fed the story by some PR hack, and never bothered to check out if it were true.

Posted on December 23, 2005 at 11:13 AM

Limitations on Police Power Shouldn't Be a Partisan Issue

In response to my op ed last week, the Minneapolis Star Tribune published this letter:

THE PATRIOT ACT

Where are the abuses?

The Nov. 22 commentary “The erosion of freedom” is yet another example of how liberal hysteria is conspicuously light on details.

While the Patriot Act may allow for potential abuses of power, flaws undoubtedly to be fine-tuned over time, the “erosion of freedom” it may foster absolutely pales in comparison to the freedom it is designed to protect in the new age of global terrorism.

I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act—nor does this commentary point out any, either.

While I’m a firm believer in the Fourth Amendment, I also want our law enforcement to have the legal tools necessary, unfettered by restrictions to counter liberals’ paranoid fixation on “fascism,” in order to combat the threat that terrorism has on all our freedoms.

I have enough trust in our free democratic society and the coequal branches of government that we won’t evolve into a sinister “police state,” as ominously predicted by this commentary.

CHRIS GARDNER, MINNEAPOLIS

Two things strike me in this letter. The first is his “I have yet to read of one incident of infringement of any private citizen’s rights as a direct result of the Patriot Act….” line. It’s just odd. A simple Googling of “patriot act abuses” comes up with almost 3 million hits, many of them pretty extensive descriptions of Patriot Act abuses. Now, he could decide that none of them are abuses. He could choose not to believe any of them are true. He could choose to believe, as he seems to, that it’s all in some liberal fantasy. But to simply not even bother reading about them…isn’t he just admitting that he’s not qualified to have an opinion on the matter? (There’s also that “direct result” weaseling, which I’m not sure what to make of either. Are infringements that are an indirect result of the Patriot Act somehow better?)

I suppose that’s just being petty, though.

The more important thing that strikes me is how partisan he is. He writes about “liberal hysteria” and “liberals’ paranoid fixation on ‘fascism.'” In his last paragraph, he writes about his trust in government.

Most laws don’t matter when we all trust each other. Contracts are rarely if ever looked at if the parties trust each other. The whole point of laws and contracts is to protect us when the parties don’t trust each other. It’s not enough that this guy, and everyone else with this opinion, trusts the Bush government to judiciously balance his rights with the need to fight global terrorism. This guy has to believe that when the Democrats are in power that his rights are just as protected: that he is just as secure against police and government abuse.

Because that’s how you should think about laws, contracts, and government power. When reading through a contract, don’t think about how much you like the other person who’s signing it; imagine how the contract will protect you if you become enemies. When thinking about a law, imagine how it will protect you when your worst nightmare—Hillary Clinton as President, Janet Reno as Attorney General, Howard Dean as something-or-other, and a Democratic Senate and House—is in power.

Laws and contracts are not written for one political party, or for one side. They’re written for everybody. History teaches us this lesson again and again. In the United States, the Bill of Rights was opposed on the grounds that it wasn’t necessary; the Alien and Sedition Acts of 1798 proved that it was, only nine years later.

It makes no sense to me that this is a partisan issue.

Posted on December 2, 2005 at 6:11 AM

Identity Cards Don't Help

Emily Finch, of the University of East Anglia, has researched criminals and how they adapt their fraud techniques to identity cards, especially the “chip and PIN” system that is currently being adopted in the UK. Her analysis: the security measures don’t help:

“There are various strategies that fraudsters use to get around the pin problem,” she said. “One of the things that is very clear is that it is a difficult matter for a fraudster to get hold of somebody’s card and then find out the pin.

“So the focus has been changed to finding the pin first, which is very, very easy if you are prepared to break social convention and look when people type the number in at the point of sale.”

Reliance on the technology actually reduces security, because people stop paying attention:

“One of the things we found quite alarming was how much the human element has been taken out of point-of-sale transactions,” Dr Finch said. “Point-of-sale staff are told to look away when people put their pin number in; so they don’t check at all.”

[…]

Some strategies relied on trust. Another fraudster trick was to produce a stolen card and pretend to misremember the number and search for it on a piece of paper.

Imagine, she said, someone searching for a piece of paper and saying, “Oh yes, that’s my signature”; there would be instant suspicion.

But there was utter trust in the new technology to pick up a fraudulent transaction, and criminals exploited this trust to get around the problem of having to enter a pin number.

“You go in, you put the card in, you type any number because you don’t know what it is. It won’t go through. The fraudster—because fraudsters are so good with people—says, ‘Oh, it’s no good, I haven’t got the hang of this yet. I could have sworn that was my number… I’ve probably got it confused with my other card.’

“They chat for a bit. The sales assistant, who is either disinterested or sympathetic, falls back on the old system, and swipes the card through.

“Because a relationship of empathy has already been established, and because they have already become accustomed to averting their gaze when people put pin numbers in, they don’t check the signature at all.

“So fraud is actually easier. There is very little vigilance at the point of sale any more. Fraudsters know this and they are taking advantage of it.”

I’ve been saying this kind of thing for a while, and it’s nice to read about some research that backs it up.

Other articles on the research are here, here, and here.

Posted on September 6, 2005 at 4:07 PM

Hogwarts Security

From Karl Lembke:

In the latest Harry Potter book, we see Hogwarts implementing security precautions in order to safeguard its students and faculty.

One step that was taken was that all the students were searched – wanded, in fact – to detect any harmful magic. In addition, all mail coming in or out was checked for harmful magic.

In spite of these precautions, two students are nearly killed by cursed items.

One of the items was a poisoned bottle of mead, which made it onto school grounds and into a professor’s office.

It turned out that packages sent from various addresses in the nearby town were not checked. The addresses were trusted, and anything received from them was considered safe. When a key person was compromised (in this case, by a mind-control spell), the trusted address was no longer trustworthy, and a gaping hole in security was created.

Of course, since everyone knew everything was checked on its way into the school, no one felt the need to take any special precautions.

The moral of the story is, inadequate security can be worse than no security at all.

And while we’re on the subject, can you really render a powerful wizard helpless simply by taking away his wand? And is taking away a powerful wizard’s wand really as easy as doing something to him while he is doing something else?

One, this means that you’re dead if you’re outnumbered. All it would take is two synchronized wizards, both of much lower power, to defeat a powerful wizard. And two, it means that you’re dead if you’re taken by surprise or distracted.

This seems like an enormous hole in magical defenses, one that wizards would have worked feverishly to close up generations ago.

EDITED TO ADD: Here’s a page on trust in the series.

Posted on September 4, 2005 at 3:27 PM
