Entries Tagged "computer security"

Page 22 of 33

The Myth of the Superuser

This is a very interesting law journal paper:

The Myth of the Superuser: Fear, Risk, and Harm Online

Paul Ohm

Abstract: Fear of the powerful computer user, “the Superuser,” dominates debates about online conflict. This mythic figure is difficult to find, immune to technological constraints, and aware of legal loopholes. Policymakers, fearful of his power, too often overreact, passing overbroad, ambiguous laws intended to ensnare the Superuser, but which are used instead against inculpable, ordinary users. This response is unwarranted because the Superuser is often a marginal figure whose power has been greatly exaggerated.

The exaggerated attention to the Superuser reveals a pathological characteristic of the study of power, crime, and security online, which springs from a widely held fear of the Internet. Building on the social science fear literature, this Article challenges the conventional wisdom and standard assumptions about the role of experts. Unlike dispassionate experts in other fields, computer experts are as susceptible as laypeople to exaggerating the power of the Superuser, in part because they have misapplied Larry Lessig’s ideas about code.

The experts in computer security and Internet law have failed to deliver us from fear, resulting in overbroad prohibitions, harms to civil liberties, wasted law enforcement resources, and misallocated economic investment. This Article urges policymakers and partisans to stop using tropes of fear; calls for better empirical work on the probability of online harm; and proposes an anti-Precautionary Principle, a presumption against new laws designed to stop the Superuser.

If I have one complaint, it’s that Ohm doesn’t take into account the ability of smarter hackers to encapsulate their expertise in easy-to-run software programs and distribute them to those without the skill. He does mention this at the end, in a section about script kiddies, but I think this is a fundamental difference between hacking skills and other potentially criminal skills.

Here’s a three-part summary of the topic by Ohm.

Posted on May 8, 2007 at 6:14 AM

Commentary on Vista Security and the Microsoft Monopoly

This is right:

As Dan Geer has been saying for years, Microsoft has a bit of a problem. Either it stonewalls and pretends there is no security problem, which is what Vista does by taking over your computer to force patches (and DRM) down its throat, or it actually changes the basic design and produces a secure operating system, which risks people wondering why they’re sticking with Windows and Microsoft at all. It turns out the former course may also lead to the latter result:

If you fit Microsoft’s somewhat convoluted definition of poor, it still wants to lock you in; you might get rich enough to afford the full-priced stuff someday. It is at a dangerous crossroads: if its software bumps up the price of a computer by 100 per cent, people might look to alternatives.

That means no MeII DRM infection lock in, no mass migration to the newer Office obfuscated and patented file formats, and worse yet, people might utter the W word. Yes, you guessed it, ‘why’. People might ask why they are sticking with the MS lock in, and at that point, it is in deep trouble.

Monopolies eventually overreach themselves and die. Maybe it’s finally Microsoft’s time to die. That would decrease the risk to the rest of us.

Posted on April 27, 2007 at 7:03 AM

Keystroke Biometrics

This sounds like a good idea. From a news article:

The technology, which measures the time for which keys are held down, as well as the length between strokes, takes advantage of the fact that most computer users evolve a method of typing which is both consistent and idiosyncratic, especially for words used frequently such as a user name and password.

When registering, the user types his or her details nine times so that the software can generate a profile. Future login attempts are measured against the profile which, the company claims, can recognise the same user’s keystrokes with 99 per cent accuracy, using what is known as a “behavioural biometric.”

I wouldn’t want to automatically block users unless they get this right, and the false-positive/false-negative ratio would have to be jiggered properly, but if they can get it working right, it’s an extra layer of authentication for “free.”
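The scheme described above can be sketched quite simply. The following is a minimal illustration, not the vendor's actual algorithm: enrollment records the dwell time (how long each key is held) and flight time (the gap to the next key) across the nine typings, builds a per-keystroke mean-and-deviation profile, and scores later attempts by how far their timings stray from it. All names and the sample timings are made up for illustration.

```python
import statistics

def build_profile(samples):
    """Build a timing profile from enrollment samples.

    Each sample is a list of (dwell, flight) pairs in milliseconds,
    one pair per keystroke of the credential."""
    n = len(samples[0])
    profile = []
    for i in range(n):
        dwells = [s[i][0] for s in samples]
        flights = [s[i][1] for s in samples]
        profile.append((
            statistics.mean(dwells), statistics.stdev(dwells),
            statistics.mean(flights), statistics.stdev(flights),
        ))
    return profile

def match_score(profile, attempt):
    """Mean absolute z-score of the attempt against the profile;
    lower means a closer match to the enrolled typist."""
    devs = []
    for (md, sd, mf, sf), (d, f) in zip(profile, attempt):
        devs.append(abs(d - md) / max(sd, 1.0))   # clamp tiny stdevs
        devs.append(abs(f - mf) / max(sf, 1.0))
    return sum(devs) / len(devs)

# Enrollment: nine typings of the same credential (simulated timings).
samples = [[(100 + i, 150 - i), (80 + i, 120 + i), (95 - i, 130 + i)]
           for i in range(9)]
profile = build_profile(samples)

genuine = [(104, 146), (84, 124), (91, 134)]    # similar rhythm
impostor = [(160, 60), (40, 200), (150, 70)]    # very different rhythm
assert match_score(profile, genuine) < match_score(profile, impostor)
```

The login policy then reduces to picking a threshold on the score, which is exactly where the false-positive/false-negative trade-off mentioned above lives.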

Another news article. Slashdot thread.

Posted on April 23, 2007 at 6:49 AM

A Security Market for Lemons

More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.

Encryption is the obvious solution for this problem—I use PGPdisk—but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.

Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There’s no data self-destruct feature. The password protection can easily be bypassed. The data isn’t even encrypted. As a secure storage device, Secustick is pretty useless.

On the surface, this is just another snake-oil security story. But there’s a deeper question: Why are there so many bad security products out there? It’s not just that designing good security is hard—although it is—and it’s not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?

In 1970, American economist George Akerlof wrote a paper called “The Market for ‘Lemons’” (abstract and article for pay here), which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can’t tell the difference—at least until he’s made his purchase. I’ll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don’t get sold; their prices are too high. Which means that the owners of these best cars don’t put their cars on the market. And then this starts spiraling. The removal of the best cars reduces the average price buyers are willing to pay, and then the very good cars no longer sell and disappear from the market. Then the good cars, and so on, until only the lemons are left.

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
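The spiral described above can be run as a toy simulation (the dollar values are invented for illustration): buyers offer the average value of what's on the market, sellers whose cars are worth more than the offer withdraw, and the average drops again.

```python
# True values of ten used cars, in dollars. Buyers can't tell them apart.
qualities = [1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000]

market = list(qualities)
while market:
    offer = sum(market) / len(market)         # buyers pay the average value
    stay = [q for q in market if q <= offer]  # better-than-average cars withdraw
    if stay == market:                        # no one else leaves: equilibrium
        break
    market = stay
    print(f"offer ${offer:.0f}, {len(market)} cars remain")
```

Each round the offer falls (5500, then 3000, 2000, 1500), and the market converges to the single worst lemon.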

The computer security market has a lot of the same characteristics as Akerlof’s lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives—Kingston Technology sent me one in the mail a few days ago—but even I couldn’t tell you if Kingston’s offering is better than Secustick, or better than any of the other encrypted USB drives. They use the same encryption algorithms. They make the same security claims. And if I can’t tell the difference, most consumers won’t be able to either.

Of course, it’s more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that “won” weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much. Because buyers couldn’t base their buying decisions on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren’t the most secure, because buyers couldn’t tell the difference.

How do you solve this? You need what economists call a “signal,” a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this. If there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.

Secustick, for one, seems to have been withdrawn from sale.

But security testing is both expensive and slow, and it just isn’t possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product—a firewall, an IDS—is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.

In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.

All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers’ Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus “number of signatures” comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.

With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don’t have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.

This essay originally appeared in Wired.

EDITED TO ADD (4/22): Slashdot thread.

Posted on April 19, 2007 at 7:59 AM

JavaScript Hijacking

Interesting paper on JavaScript Hijacking: a new type of eavesdropping attack against Ajax-style Web applications. I’m pretty sure it’s the first type of attack that specifically targets Ajax code. The attack is possible because Web browsers don’t protect JavaScript the same way they protect HTML; if a Web application transfers confidential data using messages written in JavaScript, in some cases the messages can be read by an attacker.

The authors show that many popular Ajax programming frameworks do nothing to prevent JavaScript hijacking. Some actually require a programmer to create a vulnerable server in order to function.

Like so many of these sorts of vulnerabilities, preventing the class of attacks is easy. In many cases, it requires just a few additional lines of code. And like so many software security problems, programmers need to understand the security implications of their work so that they can mitigate the risks they face. But my guess is that JavaScript hijacking won’t be solved so easily, because programmers don’t understand the security implications of their work and won’t prevent the attacks.
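To make "a few additional lines of code" concrete: one widely described mitigation is to prefix confidential JSON responses with a guard statement that renders them useless when fetched via an attacker's cross-site `<script>` tag, while the legitimate client (which reads the raw response via XMLHttpRequest) strips the prefix before parsing. This is a minimal sketch in Python, not the paper's reference code; the function names are invented.

```python
import json

PREFIX = "while(1);"  # makes the response unrunnable as a bare <script>

def wrap_json(data):
    """Server side: serialize data behind a guard prefix. A cross-site
    <script> tag that loads this response just hangs in an infinite
    loop instead of handing the data to the attacker's page."""
    return PREFIX + json.dumps(data)

def unwrap_json(body):
    """Client side: strip the guard prefix, then parse normally."""
    if not body.startswith(PREFIX):
        raise ValueError("unexpected response format")
    return json.loads(body[len(PREFIX):])

secret = {"user": "alice", "token": "s3cr3t"}
assert unwrap_json(wrap_json(secret)) == secret
```

Other commonly cited defenses work the same way: require POST or a custom request header for confidential responses, which a plain `<script>` tag cannot supply.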

Posted on April 2, 2007 at 3:45 PM

2006 Operating System Vulnerability Study

Long, but interesting.

Closing

While there are an enormous variety of operating systems to choose from, only four “core” lineages exist in the mainstream—Windows, OS X, Linux and UNIX. Each system carries its own baggage of vulnerabilities, ranging from local exploits and user-introduced weaknesses to remotely available attack vectors.

As far as “straight-out-of-the-box” conditions go, both Microsoft’s Windows and Apple’s OS X are rife with remotely accessible vulnerabilities. Even before enabling the servers, Windows-based machines contain numerous exploitable holes allowing attackers not only to access the system but also to execute arbitrary code. Both OS X and Windows were susceptible to additional vulnerabilities after enabling the built-in services. Once patched, however, both companies support a product that is secure, at least from the outside. The UNIX and Linux variants present a much more robust exterior. Even when the pre-configured server binaries are enabled, each system generally maintained its integrity against remote attacks. Compared with the Microsoft and Apple products, however, UNIX and Linux systems tend to have a higher learning curve for acceptance as desktop platforms.

When it comes to business, most systems have the benefit of trained administrators and IT departments to properly patch and configure the operating systems and their corresponding services. Things are different with home computers. The esoteric nature of the UNIX and Linux systems tends to result in home users with an increased understanding of security concerns. An already “hardened” operating system therefore has the benefit of a knowledgeable user base. The more consumer-oriented operating systems made by Microsoft and Apple are each hardened in their own right. But as soon as users begin to arbitrarily enable remote services or fiddle with the default configurations, the systems quickly become open to intrusion. Without diligence in applying the appropriate patches or enabling automatic updates, owners of Windows and OS X systems are the most susceptible to quick and thorough remote violations by hackers.

Posted on April 2, 2007 at 7:38 AM

Misplacing the Blame in Personal Identity Thefts

Really good article:

In a recent dissection of the connection between gaming and violence, the term “folk devil” was used to describe something that can be labeled dangerous in order to assign blame in a case where the causes are complex and unclear. The new paper suggests that hackers have become the folk devils of computer security, stating that “even though the campaign against hackers has successfully cast them as the primary culprits to blame for insecurity in cyberspace, it is not clear that constructing this target for blame has improved the security of personal digital records.”

Part of this argument is based on the contention that many of the criminal groups that engage in illicit access to records are culturally distinct from the hacker community and that the hacker community proper is composed of a number of subcultures, some of which may access personal data without distributing it.

But, even if a more liberal definition of hacker is allowed, they still account for far less than half of the data losses. The report states that “60 percent of the incidents involve missing or stolen hardware, insider abuse or theft, administrative error, or accidentally exposing data online.”

Those figures come from analyzing the data while eliminating a single event, the compromise of 1.6 billion records at Acxiom. The Acxiom data loss is informative, as it reveals how what could be categorized as a hack involves institutional negligence. The records stolen from the company were taken by an employee who had access to Acxiom servers in order to upload data. That employee gained download access because Acxiom set the same passwords for both types of access.

Posted on March 23, 2007 at 10:29 AM

Faking Hardware Memory Access

Interesting:

[Joanna] Rutkowska will show how an attacker could prevent forensics investigators from getting a real image of the memory where the malware resides. “Even if they somehow find out that the system is compromised, they will be unable to get the real image of memory containing the malware, and consequently, they will be unable to analyze it,” says Rutkowska, senior security researcher for COSEINC.

Posted on March 1, 2007 at 1:33 PM

