Entries Tagged "authentication"


Database Error Causes Unbalanced Budget

This story of a database error cascading into a major failure has some interesting security morals:

A house erroneously valued at $400 million is being blamed for budget shortfalls and possible layoffs in municipalities and school districts in northwest Indiana.

[…]

County Treasurer Jim Murphy said the home usually carried about $1,500 in property taxes; this year, it was billed $8 million.

Most local officials did not learn about the mistake until Tuesday, when 18 government taxing units were asked to return a total of $3.1 million of tax money. The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million. As a result, the school system has a $200,000 budget shortfall, and the city loses $900,000.

User error is being blamed for the problem:

An outside user of Porter County’s computer system may have triggered the mess by accidentally changing the value of the Valparaiso house, said Sharon Lippens, director of the county’s information technologies and service department.

[…]

Lippens said the outside user changed the property value, most likely while trying to access another program while using the county’s enhanced access system, which charges users a fee for access to public records that are not otherwise available on the Internet.

Lippens said the user probably tried to access a real estate record display by pressing R-E-D, but accidentally typed R-E-R, which brought up an assessment program written in 1995. The program is no longer in use, and technology officials did not know it could be accessed.

Three things immediately spring to mind:

One, the system did not fail safely. This one error seems to have cascaded into multiple errors, as the new tax total immediately changed budgets of “18 government taxing units.”

Two, there were no sanity checks on the system. “The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million.” Didn’t the city wonder where all that extra money came from in the first place?
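The missing check could be as simple as a plausibility bound on any assessment change before it propagates into downstream budgets. A minimal Python sketch, with purely illustrative numbers and thresholds (nothing here reflects the county's actual system):

```python
# Hypothetical sanity check: flag any assessment change that jumps by more
# than a tolerance factor, and hold it for manual review instead of letting
# it ripple into 18 taxing units' budgets.

def sanity_check(old_value: float, new_value: float, max_ratio: float = 10.0) -> bool:
    """Return True if the new value is plausible relative to the old one."""
    if old_value <= 0:
        return False  # force manual review of brand-new or zeroed records
    ratio = new_value / old_value
    return 1 / max_ratio <= ratio <= max_ratio

# A house going from roughly $150,000 to $400,000,000 fails the check:
# the change is held, not silently applied.
```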

Three, the access-control mechanisms on the computer system were too broad. When a user is authenticated to use the “R-E-D” program, he shouldn’t automatically have permission to use the “R-E-R” program as well. Authentication isn’t all or nothing; it should be granular to the operation.
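Granular, per-operation authorization is straightforward to express. A minimal Python sketch, with hypothetical program codes and roles (again, not the county's actual system):

```python
# Hypothetical sketch: authorization is per operation, not all-or-nothing.
# An authenticated public user is granted "RED" (record display) only;
# the old assessment program "RER" is explicitly restricted to staff.

PERMISSIONS = {
    "public_user": {"RED"},
    "assessor":    {"RED", "RER"},
}

def authorize(role: str, program: str) -> bool:
    """Return True only if this role is explicitly granted this program."""
    return program in PERMISSIONS.get(role, set())
```

With a default-deny table like this, a retired program that nobody remembered is unreachable unless someone explicitly granted it, which is exactly the property the county's system lacked.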

Posted on February 17, 2006 at 7:29 AM

Security, Economics, and Lost Conference Badges

Conference badges are an interesting security token. They can be very valuable—a full conference registration at the RSA Conference this week in San Jose, for example, costs $1,985—but their value decays rapidly with time. By tomorrow afternoon, they’ll be worthless.

Counterfeiting badges is one security concern, but an even bigger concern is people losing their badge or having their badge stolen. It’s way cheaper to find or steal someone else’s badge than it is to buy your own. People could do this sort of thing on purpose, pretending to lose their badge and giving it to someone else.

A few years ago, the RSA Conference charged people $100 for a replacement badge, which is far cheaper than a second registration. So the fraud remained. (At least, I assume it did. I don’t know anything about how prevalent this kind of fraud was at RSA.)

Last year, the RSA Conference tried to further limit these types of fraud by putting people’s photographs on their badges. Clever idea, but difficult to implement.

For this to work, though, guards need to match photographs with faces. This means that either 1) you need a lot more guards at entrance points, or 2) the lines will move a lot slower. Actually, far more likely is 3) no one will check the photographs.

And it was an expensive solution for the RSA Conference. They needed the equipment to put the photos on the badges. Registration was much slower. And pro-privacy people objected to the conference keeping their photographs on file.

This year, the RSA Conference solved the problem through economics:

If you lose your badge and/or badge holder, you will be required to purchase a new one for a fee of $1,895.00.

Look how clever this is. Instead of trying to solve this particular badge fraud problem through security, they simply moved the problem from the conference to the attendee. The badges still have that $1,895 value, but now if it’s stolen and used by someone else, it’s the attendee who’s out the money. As far as the RSA Conference is concerned, the security risk is an externality.

Note that from an outside perspective, this isn’t the most efficient way to deal with the security problem. It’s likely that the cost to the RSA Conference for centralized security is less than the aggregate cost of all the individual security measures. But the RSA Conference gets to make the trade-off, so they chose a solution that was cheaper for them.

Of course, it would have been nice if the conference provided a slightly more secure attachment point for the badge holder than a thin strip of plastic. But why should they? It’s not their problem anymore.

Posted on February 16, 2006 at 7:16 AM

The New Internet Explorer

I’m just starting to read about the new security features in Internet Explorer 7. So far, I like what I am reading.

IE 7 requires that all browser windows display an address bar. This helps foil attackers that operate by popping up new windows masquerading as pages on a legitimate site, when in fact the site is fraudulent. By requiring an address bar, users will immediately see the true URL of the displayed page, making these types of attacks more obvious. If you think you’re looking at www.microsoft.com, but the browser address bar says www.illhackyou.net, you ought to be suspicious.

I use Opera, and have long used the address bar to “check” on URLs. This is an excellent idea. So is this:

In early November, a bunch of Web browser developers got together and started fleshing out standards for address bar coloring, which can cue users to secured connections. Under the proposal laid out by IE 7 team member Rob Franco, even sites that use a standard SSL certificate will display a standard white address bar. Sites that use a stronger, as yet undetermined level of protection will use a green bar.

I like easy visual indications about what’s going on. And I really like that SSL is generic white, because it really doesn’t prove that you’re communicating with the site you think you’re communicating with. This feature helps with that, though:

Franco also said that when navigating to an SSL-protected site, the IE 7 address bar will display the business name and certification authority’s name in the address bar.

Some of the security measures in IE7 weaken the integration between the browser and the operating system:

People using Windows Vista beta 2 will find a new feature called Protected Mode, which renders IE 7 unable to modify system files and settings. This essentially breaks down part of the integration between IE and Windows itself.

Think of it as a wall between IE and the rest of the operating system. No, the code won’t be perfect, and yes, there’ll be ways found to circumvent this security, but this is an important and long-overdue feature.

The majority of IE’s notorious security flaws stem from its pervasive integration with Windows. That is a feature no other Web browser offers—and an ability that Vista’s Protected Mode intends to mitigate. IE 7 obviously won’t remove all of that tight integration. Lacking deep architectural changes, the effort has focused instead on hardening or eliminating potential vulnerabilities. Unfortunately, this approach requires Microsoft to anticipate everything that could go wrong and block it in advance—hardly a surefire way to secure a browser.

That last sentence is about the general Internet attitude to allow everything that is not explicitly denied, rather than deny everything that is not explicitly allowed.

Also, you’ll have to wait until Vista to use it:

…this capability will not be available in Windows XP because it’s woven directly into Windows Vista itself.

There are also some good changes under the hood:

IE 7 does eliminate a great deal of legacy code that dates back to the IE 4 days, which is a welcome development.

And:

Microsoft has rewritten a good bit of IE 7’s core code to help combat attacks that rely on malformed URLs (that typically cause a buffer overflow). It now funnels all URL processing through a single function (thus reducing the amount of code that “looks” at URLs).
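The chokepoint idea is worth spelling out: if every URL passes through one validation function, hardening happens in one place instead of dozens. This is not Microsoft's code, just an illustrative Python sketch of the pattern:

```python
# Sketch of a single URL-processing chokepoint: all URL handling funnels
# through one function that canonicalizes input and rejects anything
# malformed, so there is only one place to audit and harden.
from urllib.parse import urlsplit

MAX_URL_LENGTH = 2048  # illustrative bound against oversized, overflow-style inputs

def process_url(raw: str) -> str:
    """Canonicalize and validate a URL; raise ValueError on anything malformed."""
    if len(raw) > MAX_URL_LENGTH:
        raise ValueError("URL too long")
    parts = urlsplit(raw)
    if parts.scheme not in ("http", "https"):
        raise ValueError("unsupported scheme")
    if not parts.hostname:
        raise ValueError("missing host")
    return parts.geturl()
```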

All good stuff, but I agree with this conclusion:

IE 7 offers several new security features, but it’s hardly a given that the situation will improve. There has already been a set of security updates for IE 7 beta 1 released for both Windows Vista and Windows XP computers. Security vulnerabilities in a beta product shouldn’t be alarming (IE 7 is hardly what you’d consider “finished” at this point), but it may be a sign that the product’s architecture and design still have fundamental security issues.

I’m not switching from Opera yet, and my second choice is still Firefox. But the masses still use IE, and our security depends in part on those masses keeping their computers worm-free and bot-free.

NOTE: Here’s some info on how to get your own copy of Internet Explorer 7 beta 2.

Posted on February 9, 2006 at 3:37 PM

Identity Theft in the UK

Recently there was some serious tax credit fraud in the UK. Basically, there is a tax-credit system that allows taxpayers to get a refund for some of their taxes if they meet certain criteria. Politically, this was a major objective of the Labour Party. So the Inland Revenue (the UK version of the IRS) made it as easy as possible to apply for this refund. One of the ways taxpayers could apply was via a Web portal.

Unfortunately, the only details necessary when applying were the applicant’s National Insurance number (the UK version of the Social Security number) and mother’s maiden name. The refund was then paid directly into any bank account specified on the application form. Anyone who knows anything about security can guess what happened. Estimates are that fifteen million pounds have been stolen by criminal syndicates.

The press has been treating this as an issue of identity theft, talking about how criminals went Dumpster diving to get National Insurance numbers and so forth. I have seen very little about how the authentication scheme failed. The system tried—using semi-secret information like NI number and mother’s maiden name—to authenticate the person. Instead, the system should have tried to authenticate the transaction. Even a simple verification step—does the name on the account match the name of the person who should receive the refund—would have gone a long way to preventing this type of fraud.
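That verification step is easy to express. A hypothetical Python sketch of authenticating the transaction rather than the person; a real system would use fuzzier matching and route mismatches to manual review, but the point is that the check binds the payment to the claimant, not to semi-secret identifiers:

```python
# Hypothetical transaction-level check: hold the refund unless the payee
# account is in the name of the taxpayer the refund is owed to.

def normalize(name: str) -> str:
    """Case-fold and collapse whitespace so trivial variations still match."""
    return " ".join(name.lower().split())

def verify_refund(taxpayer_name: str, account_holder_name: str) -> bool:
    """Authenticate the transaction: pay only to an account in the taxpayer's name."""
    return normalize(taxpayer_name) == normalize(account_holder_name)
```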

Posted on February 8, 2006 at 3:42 PM

Passlogix Misquotes Me in Their PR Material

I recently received a PR e-mail from a company called Passlogix:

Password security is still a very prevalent threat, 2005 had security gurus like Bruce Schneier publicly suggest that you actually write them down on sticky-notes. A recent survey stated 78% of employees use passwords as their primary forms of security, 52% use the same password for their accounts—yet 77% struggle to remember their passwords.

Actually, I don’t. I recommend writing your passwords down and keeping them in your wallet.

I know nothing about this company, but I am unhappy at their misrepresentation of what I said.

Posted on February 7, 2006 at 7:23 AM

Foiling Counterfeiting Countermeasures

Great story illustrating how criminals adapt to security measures.

The notes were all $5 bills that had been bleached and altered to look like $100 bills, sheriff’s investigators said. They passed muster with the pen because it determines only whether the paper used to manufacture the currency is legitimate, Bandy said.

As a security measure, the merchants use a chemical pen that determines if the bills are counterfeit. But that’s not exactly what the pen does. The pen only verifies that the paper is legitimate. The criminals successfully exploited this security hole.

Posted on January 19, 2006 at 6:38 AM

Forged Credentials and Security

In Beyond Fear, I wrote about the difficulty of verifying credentials. Here’s a real story about that very problem:

When Frank Coco pulled over a 24-year-old carpenter for driving erratically on Interstate 55, Coco was furious. Coco was driving his white Chevy Caprice with flashing lights and had to race in front of the young man and slam on his brakes to force him to stop.

Coco flashed his badge and shouted at the driver, Joe Lilja: “I’m a cop and when I tell you to pull over, you pull over, you motherf——!”

Coco punched Lilja in the face and tried to drag him out of his car.

But Lilja wasn’t resisting arrest. He wasn’t even sure what he’d done wrong.

“I thought, ‘Oh my God, I can’t believe he’s hitting me,’ ” Lilja recalled.

It was only after Lilja sped off to escape—leading Coco on a tire-squealing, 90-mph chase through the southwest suburbs—that Lilja learned the truth.

Coco wasn’t a cop at all.

He was a criminal.

There’s no obvious way to solve this. This is some of what I wrote in Beyond Fear:

Authentication systems suffer when they are rarely used and when people aren’t trained to use them.

[…]

Imagine you’re on an airplane, and Man A starts attacking a flight attendant. Man B jumps out of his seat, announces that he’s a sky marshal, and that he’s taking control of the flight and the attacker. (Presumably, the rest of the plane has subdued Man A by now.) Man C then stands up and says: “Don’t believe Man B. He’s not a sky marshal. He’s one of Man A’s cohorts. I’m really the sky marshal.”

What do you do? You could ask Man B for his sky marshal identification card, but how do you know what an authentic one looks like? If sky marshals travel completely incognito, perhaps neither the pilots nor the flight attendants know what a sky marshal identification card looks like. It doesn’t matter if the identification card is hard to forge if the person authenticating the credential doesn’t have any idea what a real card looks like.

[…]

Many authentication systems are even more informal. When someone knocks on your door wearing an electric company uniform, you assume she’s there to read the meter. Similarly with deliverymen, service workers, and parking lot attendants. When I return my rental car, I don’t think twice about giving the keys to someone wearing the correct color uniform. And how often do people inspect a police officer’s badge? The potential for intimidation makes this security system even less effective.

Posted on January 13, 2006 at 7:00 AM

Idiotic Article on TPM

This is just an awful news story.

“TPM” stands for “Trusted Platform Module.” It’s a chip that may soon be in your computer that will try to enforce security: both your security, and the security of software and media companies against you. It’s complicated, and it will prevent some attacks. But there are dangers. And lots of ways to hack it. (I’ve written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)

In fact, with TPM, your bank wouldn’t even need to ask for your username and password—it would know you simply by the identification on your machine.

Since when is “your computer” the same as “you”? And since when is identifying a computer the same as authenticating the user? And until we can eliminate bot networks and “owned” machines, there’s no way to know who is controlling your computer.

Of course you could always “fool” the system by starting your computer with your unique PIN or fingerprint and then letting another person use it, but that’s a choice similar to giving someone else your credit card.

Right, letting someone use your computer is the same as letting someone use your credit card. Does he have any idea that there are shared computers that you can rent and use? Does he know any families that share computers? Does he ever have friends who visit him at home? There are lots of ways a PIN can be guessed or stolen.

Oh, I can’t go on.

My guess is the reporter was fed the story by some PR hack, and never bothered to check out if it were true.

Posted on December 23, 2005 at 11:13 AM

Weakest Link Security

Funny story:

At the airport where this pilot fish works, security has gotten a lot more attention since 9/11. “All the security doors that connect the concourses to office spaces and alleyways for service personnel needed an immediate upgrade,” says fish. “It seems that the use of a security badge was no longer adequate protection.

“So over the course of about a month, more than 50 doors were upgraded to require three-way protection. To open the door, a user needed to present a security badge (something you possess), a numeric code (something you know) and a biometric thumb scan (something you are).

“Present all three, and the door beeps and lets you in.”

One by one, the doors are brought online. The technology works, and everything looks fine—until fish decides to test the obvious.

After all, the average member of the public isn’t likely to forge a security badge, guess a multidigit number and fake a thumb scan. “But what happens if you just turn the handle without any of the above?” asks fish. “Would it set off alarms or call security?

“It turns out that if you turn the handle, the door opens.

“Despite the addition of all that technology and security on every single door, nobody bothered to check that the doors were set to lock by default.”

Remember, security is only as strong as the weakest link.
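The fail-secure principle the story illustrates can be sketched in a few lines of Python; the factor-checking functions here are hypothetical placeholders, not any real access-control API:

```python
# Sketch of fail-secure, default-deny access control: the lock disengages
# only if all three factors verify, and any error path keeps the door locked.

def door_opens(check_badge, check_pin, check_thumb) -> bool:
    """Something you possess, something you know, something you are:
    all three must pass, and a broken check counts as a failure."""
    try:
        return check_badge() and check_pin() and check_thumb()
    except Exception:
        return False  # fail secure: a faulty reader keeps the door locked
```

The airport's doors implemented the opposite default: turning the handle bypassed all three checks, so the expensive factors guarded nothing.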

Posted on December 14, 2005 at 11:59 AM

Snake-Oil Research in Nature

Snake-oil isn’t only in commercial products. Here’s a piece of research published (behind a paywall) in Nature that’s just full of it.

The article suggests using chaos in an electro-optical system to generate a pseudo-random light sequence, which is then added to the message to protect it from interception. Now, the idea of using chaos to build encryption systems has been tried many times in the cryptographic community, and has always failed. But the authors of the Nature article show no signs of familiarity with prior cryptographic work.

The published system has the obvious problem that it does not include any form of message authentication, so it will be trivial to send spoofed messages or tamper with messages while they are in transit.

But a closer examination of the paper’s figures suggests a far more fundamental problem. There’s no key. Anyone with a valid receiver can decode the ciphertext. No key equals no security, and what you have left is a totally broken system.
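For contrast, here is what even a minimal keyed design provides, sketched in Python with an HMAC for message authentication. This is not a description of the Nature system, which has neither ingredient; the encryption step is deliberately elided, since the point is the key and the authentication tag:

```python
# Sketch of the two things the chaos scheme lacks: a secret key and a
# message authentication code. Only holders of the key can create messages
# that verify, and the key can be rotated if a device is lost or stolen,
# unlike a "key" baked into a laser at fabrication.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # a changeable secret

def tag(message: bytes) -> bytes:
    """Compute an authentication tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Reject any message that was spoofed or tampered with in transit."""
    return hmac.compare_digest(tag(message), mac)
```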

I e-mailed Claudio R. Mirasso, the corresponding author, about the lack of any key, and got this reply: “To extract the message from the chaotic carrier you need to replicate the carrier itself. This can only be done by a laser that matches the emitter characteristics within, let’s say, 2-5%. Semiconductor lasers with such similarity have to be carefully selected from the same wafer. Even though you have to test them because they can still be too different and do not synchronize. We talk about a hardware key. Also the operating conditions (current, feedback length and coupling strength) are part of the key.”

Let me translate that. He’s saying that there is a hardware key baked into the system at fabrication. (It comes from manufacturing deviations in the lasers.) There’s no way to change the key in the field. There’s no way to recover security if any of the transmitters/receivers are lost or stolen. And they don’t know how hard it would be for an attacker to build a compatible receiver, or even a tunable receiver that could listen to a variety of encodings.

This paper would never get past peer review in any competent cryptography journal or conference. I’m surprised it was accepted in Nature, a fiercely competitive journal. I don’t know why Nature is taking articles on topics that are outside its usual competence, but it looks to me like Nature got burnt here by a lack of expertise in the area.

To be fair, the paper very carefully skirts the issue of security, and claims hardly anything: “Additionally, chaotic carriers offer a certain degree of intrinsic privacy, which could complement (via robust hardware encryption) both classical (software based) and quantum cryptography systems.” Now that “certain degree of intrinsic privacy” is approximately zero. But other than that, they’re very careful how they word their claims.

For instance, the abstract says: “Chaotic signals have been proposed as broadband information carriers with the potential of providing a high level of robustness and privacy in data transmission.” But there’s no disclosure that this proposal is bogus, from a privacy perspective. And the next-to-last paragraph says “Building on this, it should be possible to develop reliable cost-effective secure communication systems that exploit deeper properties of chaotic dynamics.” No disclosure that “chaotic dynamics” is actually irrelevant to the “secure” part. The last paragraph talks about “smart encryption techniques” (referencing a paper that talks about chaos encryption), “developing active eavesdropper-evasion strategies” (whatever that means), and so on. It’s just enough that if you don’t parse their words carefully and don’t already know the area well, you might come away with the impression that this is a major advance in secure communications. It seems as if it would have helped to have a more careful disclaimer.

Communications security was listed as one of the motivations for studying this communications technique. To list this as a motivation, without explaining that their experimental setup is actually useless for communications security, is questionable at best.

Meanwhile, the press has written articles that convey the wrong impression. Science News has an article that lauds this as a big achievement for communications privacy.

It talks about it as a “new encryption strategy,” “chaos-encrypted communication,” “1 gigabyte of chaos-encrypted information per second.” It’s obvious that the communications security aspect is what Science News is writing about. If the authors knew that their scheme is useless for communications security, they didn’t explain that very well.

There is also a New Scientist article titled “Let chaos keep your secrets safe” that characterizes this as a “new cryptographic technique,” but I can’t get a copy of the full article.

Here are two more articles that discuss its security benefits. In the latter, Mirasso says “the main task we have for the future” is to “define, test, and calibrate the security that our system can offer.”

And their project web page says that “the continuous increase of computer speed threatens the safety” of traditional cryptography (which is bogus) and suggests using physical-layer chaos as a way to solve this. That’s listed as the goal of the project.

There’s a lesson here. This is research undertaken by researchers with no prior track record in cryptography, submitted to a journal with no background in cryptography, and reviewed by reviewers with who knows what kind of experience in cryptography. Cryptography is a subtle subject, and trying to design new cryptosystems without the necessary experience and training in the field is a quick route to insecurity.

And what’s up with Nature? Cryptographers with no training in physics know better than to think they are competent to evaluate physics research. If a physics paper were submitted to a cryptography journal, the authors would likely be gently redirected to a physics journal—we wouldn’t want our cryptography conferences to accept a paper on a subject they aren’t competent to evaluate. Why would Nature expect the situation to be any different when physicists try to do cryptography research?

Posted on December 7, 2005 at 6:36 AM

