
Is Penetration Testing Worth It?

There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong. The reality of penetration testing is more complicated and nuanced.

Penetration testing is a broad term. It might mean breaking into a network to demonstrate you can. It might mean trying to break into a network to document vulnerabilities. It might involve a remote attack, physical penetration of a data center or social engineering attacks. It might use commercial or proprietary vulnerability scanning tools, or rely on skilled white-hat hackers. It might just evaluate software version numbers and patch levels, and make inferences about vulnerabilities.

It’s going to be expensive, and you’ll get a thick report when the testing is done.

And that’s the real problem. You really don’t want a thick report documenting all the ways your network is insecure. You don’t have the budget to fix them all, so the document will sit around waiting to make someone look bad. Or, even worse, it’ll be discovered in a breach lawsuit. Do you really want an opposing attorney to ask you to explain why you paid to document the security holes in your network, and then didn’t fix them? Probably the safest thing you can do with the report, after you read it, is shred it.

Given enough time and money, a pen test will find vulnerabilities; there’s no point in proving it. And if you’re not going to fix all the uncovered vulnerabilities, there’s no point uncovering them. But there is a way to do penetration testing usefully. For years I’ve been saying security consists of protection, detection and response—and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.

I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.

If you think about it, penetration testing is an odd business. Is there an analogue to it anywhere else in security? Sure, militaries run these exercises all the time, but how about in business? Do we hire burglars to try to break into our warehouses? Do we attempt to commit fraud against ourselves? No, we don’t.

Penetration testing has become big business because systems are so complicated and poorly understood. We know about burglars and kidnapping and fraud, but we don’t know about computer criminals. We don’t know what’s dangerous today, and what will be dangerous tomorrow. So we hire penetration testers in the belief they can explain it.

There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable.

Now, go do something useful about it.

This essay appeared in the March issue of Information Security, as the first half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 15, 2007 at 7:05 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.
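The "expert systems patrolling the transaction database" mentioned above can be pictured as simple rule-based scoring. The sketch below is purely illustrative; the rules, thresholds and field names are invented for this example and are not any real issuer's logic:

```python
# Toy rule-based fraud screen, in the spirit of the expert systems
# described above. Rules, thresholds, and field names are invented
# for illustration, not taken from any real issuer.

def fraud_score(txn, history):
    """Return a score for a transaction; higher means more suspicious."""
    score = 0
    avg = sum(t["amount"] for t in history) / len(history)
    if txn["amount"] > 5 * avg:                        # unusually large purchase
        score += 2
    if txn["country"] not in {t["country"] for t in history}:
        score += 2                                     # never-before-seen country
    if txn["minutes_since_last"] < 5:                  # rapid-fire transactions
        score += 1
    return score

history = [
    {"amount": 40.0, "country": "US"},
    {"amount": 25.0, "country": "US"},
]
txn = {"amount": 900.0, "country": "RO", "minutes_since_last": 2}
print(fraud_score(txn, history))  # -> 5: flag for review
```

The point of the sketch is the incentive structure, not the rules themselves: once the issuer eats the fraud losses, it is worth the issuer's money to build and tune systems like this in real time.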

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in 1984 was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

1984’s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

1984’s Big Brother was run by the state; today’s Big Brother is market driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do are why the U.S. government buys commercial data from them.

1984-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

1984-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of 1984 was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie Brazil and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and—yes—even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 11, 2007 at 9:19 AM

Do We Really Need a Security Industry?

Last week I attended the Infosecurity Europe conference in London. Like at the RSA Conference in February, the show floor was chockablock full of network, computer and information security companies. As I often do, I mused about what it means for the IT industry that there are thousands of dedicated security products on the market: some good, more lousy, many difficult even to describe. Why aren’t IT products and services naturally secure, and what would it mean for the industry if they were?

I mentioned this in an interview with Silicon.com, and the published article seems to have caused a bit of a stir. Rather than letting people wonder what I really meant, I thought I should explain.

The primary reason the IT security industry exists is because IT products and services aren’t naturally secure. If computers were already secure against viruses, there wouldn’t be any need for antivirus products. If bad network traffic couldn’t be used to attack computers, no one would bother buying a firewall. If there were no more buffer overflows, no one would have to buy products to protect against their effects. If the IT products we purchased were secure out of the box, we wouldn’t have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn’t help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity—companies that will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash obviating the problems later. Their profits would rise in step with the overall level of security on the internet. Initially we’d still be spending a comparable amount of money per year on security—on secure development practices, on embedded security and so on—but some of that money would be going into improving the quality of the IT products we’re buying, and would reduce the amount we spend on security in future years.

I know this is a utopian vision that I probably won’t see in my lifetime, but the IT services market is pushing us in this direction. As IT becomes more of a utility, users are going to buy a whole lot more services than products. And by nature, services are more about results than technologies. Service customers—whether home users or multinational corporations—care less and less about the specifics of security technologies, and increasingly expect their IT to be integrally secure.

Eight years ago, I formed Counterpane Internet Security on the premise that end users (big corporate users, in this case) really don’t want to have to deal with network security. They want to fly airplanes, produce pharmaceuticals or do whatever their core business is. They don’t want to hire the expertise to monitor their network security, and will gladly farm it out to a company that can do it for them. We provided an array of services that took day-to-day security out of the hands of our customers: security monitoring, security-device management, incident response. Security was something our customers purchased, but they purchased results, not details.

Last year BT bought Counterpane, further embedding network security services into the IT infrastructure. BT has customers that don’t want to deal with network management at all; they just want it to work. They want the internet to be like the phone network, or the power grid, or the water system; they want it to be a utility. For these customers, security isn’t even something they purchase: It’s one small part of a larger IT services deal. It’s the same reason IBM bought ISS: to be able to have a more integrated solution to sell to customers.

This is where the IT industry is headed, and when it gets there, there’ll be no point in user conferences like Infosec and RSA. They won’t go away; they’ll simply become industry conferences. If you want to measure progress, look at the demographics of these conferences. A shift toward infrastructure-geared attendees is a measure of success.

Of course, security products won’t disappear—at least, not in my lifetime. There’ll still be firewalls, antivirus software and everything else. There’ll still be startup companies developing clever and innovative security technologies. But the end user won’t care about them. They’ll be embedded within the services sold by large IT outsourcing companies like BT, EDS and IBM, or ISPs like EarthLink and Comcast. Or they’ll be a check-box item somewhere in the core switch.

IT security is getting harder—increasing complexity is largely to blame—and the need for aftermarket security products isn’t disappearing anytime soon. But there’s no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it’s helpful in spotting SQL injection attacks. The whole IT security industry is an accident—an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work—and the details of how it works won’t matter.

This was my 41st essay for Wired.com.

EDITED TO ADD (5/3): Commentary.

EDITED TO ADD (5/4): More commentary.

EDITED TO ADD (5/10): More commentary.

Posted on May 3, 2007 at 10:09 AM

A Security Market for Lemons

More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.

Encryption is the obvious solution for this problem—I use PGPdisk—but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.

Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There’s no data self-destruct feature. The password protection can easily be bypassed. The data isn’t even encrypted. As a secure storage device, Secustick is pretty useless.
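The design lesson in the Tweakers.net break is worth making concrete: a password check enforced only in software is a gate that can be patched out, while a key derived from the password leaves nothing to bypass. The sketch below illustrates that idea; the SHA-256 counter keystream is a toy for illustration only, and a real product should use a vetted cipher such as AES-GCM:

```python
import hashlib
import os

# Sketch of why "password-protected" is not "encrypted". A software
# gate (if password_ok: show_data) can be patched out, which is
# essentially what happened to the Secustick. If instead the key is
# derived from the password, there is no check to bypass: without
# the password, the stored bytes are just noise. The SHA-256 counter
# keystream here is a toy; real designs should use a vetted cipher.

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Stretch a password into a 32-byte key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter keystream (toy, symmetric)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

salt = os.urandom(16)
key = derive_key(b"correct horse", salt)
ciphertext = keystream_xor(key, b"account numbers, trip notes")

# A wrong password yields a wrong key and garbage output; there is
# no password-comparison branch anywhere to disable.
wrong = keystream_xor(derive_key(b"guess", salt), ciphertext)
assert wrong != b"account numbers, trip notes"
assert keystream_xor(key, ciphertext) == b"account numbers, trip notes"
```

This is exactly the property the Secustick lacked: its password protection guarded access in software, but the data underneath was stored in the clear.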

On the surface, this is just another snake-oil security story. But there’s a deeper question: Why are there so many bad security products out there? It’s not just that designing good security is hard—although it is—and it’s not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?

In 1970, American economist George Akerlof wrote a paper called “The Market for ‘Lemons’” (abstract and article for pay here), which established asymmetric information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.

Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can’t tell the difference—at least until he’s made his purchase. I’ll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.

This means that the best cars don’t get sold; their prices are too high. Which means that the owners of these best cars don’t put their cars on the market. And then this starts spiraling. The removal of the best cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell and disappear from the market. And then the good cars, and so on, until only the lemons are left.

In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
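The unraveling can be made concrete with a toy simulation (the car values are invented for illustration): buyers offer the average value of whatever is still on the market, sellers withdraw any car worth more than the offer, and the market shrinks round by round:

```python
# Toy lemons-market unraveling, following Akerlof's argument.
# Sellers' true valuations are invented for illustration.

cars = [1000, 2000, 3000, 4000, 5000]  # sellers' true valuations

round_no = 0
while True:
    offer = sum(cars) / len(cars)                 # buyers pay for average quality
    remaining = [v for v in cars if v <= offer]   # better-than-average cars withdraw
    round_no += 1
    print(f"round {round_no}: offer={offer:.0f}, cars left={remaining}")
    if remaining == cars:                         # no one else leaves: stable
        break
    cars = remaining
```

After a few rounds only the $1,000 lemon survives. Substitute secure and snake-oil products for good cars and lemons, and this is the security-market version of the same dynamic.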

The computer security market has a lot of the same characteristics as Akerlof’s lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives—Kingston Technology sent me one in the mail a few days ago—but even I couldn’t tell you if Kingston’s offering is better than Secustick. Or if it’s better than any other encrypted USB drive. They use the same encryption algorithms. They make the same security claims. And if I can’t tell the difference, most consumers won’t be able to either.

Of course, it’s more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.

I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that “won” weren’t the most secure firewalls; they were the ones that were easy to set up, easy to use and didn’t annoy users too much. Because buyers couldn’t base their buying decision on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren’t the most secure, because buyers couldn’t tell the difference.

How do you solve this? You need what economists call a “signal,” a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this. If there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.

Secustick, for one, seems to have been withdrawn from sale.

But security testing is both expensive and slow, and it just isn’t possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product—a firewall, an IDS—is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.

In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.

All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers’ Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus “number of signatures” comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.

With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don’t have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.

This essay originally appeared in Wired.

EDITED TO ADD (4/22): Slashdot thread.

Posted on April 19, 2007 at 7:59 AM

Cyber-Attack

Last month Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.

As reported in Federal Computer Week, Cartwright said: “History teaches us that a purely defensive posture poses significant risks,” and that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests.”

The general isn’t alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn’t a sys-admin in the world who doesn’t want to strike back at computers that are blindly and repeatedly attacking their networks.

Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don’t make good police chiefs.

A cyber-security policy that condones both active deterrence and retaliation—without any judicial determination of wrongdoing—is attractive, but it’s wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.

In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.

Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.

In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.

I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world’s sys-admins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.

Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.

But that doesn’t make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: “No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished.”

I’m glad General Cartwright thinks about offensive cyberwar; it’s how generals are supposed to think. I even agree with Richard Clarke’s threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we’re far safer with a legal system that respects our rights.

This essay originally appeared in Wired.

Posted on April 5, 2007 at 7:35 AM

Copycats

It’s called “splash-and-grab,” and it’s a new way to rob convenience stores. Two guys walk into a store, and one comes up to the counter with a cup of hot coffee or cocoa. He pays for it, and when the clerk opens the cash drawer, he throws the coffee in the clerk’s face. The other one grabs the cash drawer, and they both run.

Crimes never change, but tactics do. This tactic is new; someone just invented it. But now that it’s in the news, copycats are repeating the trick. There have been at least 19 such robberies in Delaware, Pennsylvania and New Jersey. (Some arrests have been made since then.)

Here’s another example: On Nov. 24, 1971, someone with the alias Dan Cooper invented a new way to hijack an aircraft. Claiming he had a bomb, he forced a plane to land and then exchanged the passengers and flight attendants for $200,000 and four parachutes. (I leave it as an exercise for the reader to explain why asking for more than one parachute is critical to the plan’s success.) Taking off again, he told the pilots to fly to 10,000 feet. He then lowered the plane’s back stairs and parachuted away. He was never caught, and the FBI still doesn’t know who he is or whether he survived.

After this story hit the press, there was an epidemic of copycat attacks. In 31 hijackings the following year, half of the hijackers demanded parachutes. It got so bad that the FAA required Boeing to install a special latch—the Cooper Vane—on the back staircases of its 727s so they couldn’t be lowered in the air.

The internet is filled with copycats. Green-card lawyers invented spam; now everyone does it. Other people invented phishing, pharming, spear phishing. The virus, the worm, the Trojan: It’s hard to believe that these ubiquitous internet attack tactics were, until comparatively recently, tactics that no one had thought of.

Most attackers are copycats. They aren’t clever enough to invent a new way to rob a convenience store, use the web to steal money, or hijack an airplane. They try the same attacks again and again, or read about a new attack in the newspaper and decide they can try it, too.

In combating threats, it makes sense to focus on copycats when there is a population of people already willing to commit the crime, who will migrate to a new tactic once it has been demonstrated to be successful. In instances where there aren’t many attacks or attackers, and they’re smarter—al-Qaida-style terrorism comes to mind—focusing on copycats is less effective because the bad guys will respond by modifying their attacks accordingly.

Compare that to suicide bombings in Israel, which are mostly copycat attacks. The authorities basically know what a suicide bombing looks like, and do a pretty good job defending against the particular tactics they tend to see again and again. It’s still an arms race, but there is a lot of security gained by defending against copycats.

But even so, it’s important to understand which aspect of the crime will be adopted by copycats. Splash-and-grab crimes have nothing to do with convenience stores; copycats can target any store where hot coffee is easily available and there is only one clerk on duty. And the tactic doesn’t necessarily need coffee; one copycat used bleach. The new idea is to throw something painful and damaging in a clerk’s face, grab the valuables and run.

Similarly, when a suicide bomber blows up a restaurant in Israel, the authorities don’t automatically assume the copycats will attack other restaurants. They focus on the particulars of the bomb, the triggering mechanism and the way the bomber arrived at his target. Those are the tactics that copycats will repeat. The next target may be a theater or a hotel or any other crowded location.

The lesson for counterterrorism in America: Stay flexible. We’re not threatened by a bunch of copycats, so we’re best off expending effort on security measures that will work regardless of the tactics or the targets: intelligence, investigation and emergency response. By focusing too much on specifics—what the terrorists did last time—we’re wasting valuable resources that could be used to keep us safer.

This essay originally appeared on Wired.com.

Posted on March 8, 2007 at 3:23 PM

Private Police Forces

In Raleigh, N.C., employees of Capitol Special Police patrol apartment buildings, a bowling alley and nightclubs, stopping suspicious people, searching their cars and making arrests.

Sounds like a good thing, but Capitol Special Police isn’t a police force at all—it’s a for-profit security company hired by private property owners.

This isn’t unique. Private security guards outnumber real police more than 5-1, and increasingly act like them.

They wear uniforms, carry weapons and drive lighted patrol cars on private properties like banks and apartment complexes and in public areas like bus stations and national monuments. Sometimes they operate as ordinary citizens and can only make citizen’s arrests, but in more and more states they’re being granted official police powers.

This trend should greatly concern citizens. Law enforcement should be a government function, and privatizing it puts us all at risk.

Most obviously, there’s the problem of agenda. Public police forces are charged with protecting the citizens of the cities and towns over which they have jurisdiction. Of course, there are instances of policemen overstepping their bounds, but these are exceptions, and the police officers and departments are ultimately responsible to the public.

Private police officers are different. They don’t work for us; they work for corporations. They’re focused on the priorities of their employers or the companies that hire them. They’re less concerned with due process, public safety and civil rights.

Also, many of the laws that protect us from police abuse do not apply to the private sector. Constitutional safeguards that regulate police conduct, interrogation and evidence collection do not apply to private individuals. Information that is illegal for the government to collect about you can be collected by commercial data brokers, then purchased by the police.

We’ve all seen policemen “reading people their rights” on television cop shows. If you’re detained by a private security guard, you don’t have nearly as many rights.

For example, a federal law known as Section 1983 allows you to sue for civil rights violations by the police but not by private citizens. The Freedom of Information Act allows us to learn what government law enforcement is doing, but the law doesn’t apply to private individuals and companies. In fact, most of your civil rights protections apply only to real police.

Training and regulation are another problem. Private security guards often receive minimal training, if any. They don’t graduate from police academies. And while some states regulate these guard companies, others have no regulations at all: anyone can put on a uniform and play policeman. Abuses of power, brutality, and illegal behavior are much more common among private security guards than real police.

A horrific example of this happened in South Carolina in 1995. Ricky Coleman, an unlicensed and untrained Best Buy security guard with a violent criminal record, choked a fraud suspect to death while another security guard held him down.

This trend is larger than police. More and more of our nation’s prisons are being run by for-profit corporations. The IRS has started outsourcing some back-tax collection to debt-collection companies that will take a percentage of the money recovered as their fee. And there are about 20,000 private police and military personnel in Iraq, working for the Defense Department.

Throughout most of history, specific people were charged by those in power to keep the peace, collect taxes and wage wars. Corruption and incompetence were the norm, and justice was scarce. It is for this very reason that, since the 1600s, European governments have been built around a professional civil service to both enforce the laws and protect rights.

Private security guards turn this bedrock principle of modern government on its head. Whether it’s FedEx policemen in Tennessee who can request search warrants and make arrests; a privately funded surveillance helicopter in Jackson, Miss., that can bypass constitutional restrictions on aerial spying; or employees of Capitol Special Police in North Carolina who are lobbying to expand their jurisdiction beyond the specific properties they protect—privately funded policemen are not protecting us or working in our best interests.

This op-ed originally appeared in the Minneapolis Star-Tribune.

EDITED TO ADD (4/2): This is relevant.

Posted on February 27, 2007 at 6:02 AM

DRM in Windows Vista

Windows Vista includes an array of “features” that you don’t want. These features will make your computer less reliable and less secure. They’ll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won’t do anything useful. In fact, they’re working against you. They’re digital rights management (DRM) features built into Vista at the behest of the entertainment industry.

And you don’t get to refuse them.

The details are pretty geeky, but basically Microsoft has reworked a lot of the core operating system to add copy protection technology for new media formats like HD DVD and Blu-ray disks. Certain high-quality output paths—audio and video—are reserved for protected peripheral devices. Sometimes output quality is artificially degraded; sometimes output is prevented entirely. And Vista continuously spends CPU time monitoring itself, trying to figure out if you’re doing something that it thinks you shouldn’t. If it does, it limits functionality and in extreme cases restarts just the video subsystem. We still don’t know the exact details of all this, and how far-reaching it is, but it doesn’t look good.

Microsoft put all those functionality-crippling features into Vista because it wants to own the entertainment industry. This isn’t how Microsoft spins it, of course. It maintains that it has no choice, that it’s Hollywood that is demanding DRM in Windows in order to allow “premium content”—meaning, new movies that are still earning revenue—onto your computer. If Microsoft didn’t play along, it’d be relegated to second-class status as Hollywood pulled its support for the platform.

It’s all complete nonsense. Microsoft could have easily told the entertainment industry that it was not going to deliberately cripple its operating system, take it or leave it. With 95% of the operating system market, where else would Hollywood go? Sure, Big Media has been pushing DRM, but recently some—Sony after their 2005 debacle and now EMI Group—are having second thoughts.

What the entertainment companies are finally realizing is that DRM doesn’t work, and just annoys their customers. Like every other DRM system ever invented, Microsoft’s won’t keep the professional pirates from making copies of whatever they want. The DRM security in Vista was broken the day it was released. Sure, Microsoft will patch it, but the patched system will get broken as well. It’s an arms race, and the defenders can’t possibly win.

I believe that Microsoft knows this and also knows that it doesn’t matter. This isn’t about stopping pirates and the small percentage of people who download free movies from the Internet. This isn’t even about Microsoft satisfying its Hollywood customers at the expense of those of us paying for the privilege of using Vista. This is about the overwhelming majority of honest users and who owns the distribution channels to them. And while it may have started as a partnership, in the end Microsoft is going to end up locking the movie companies into selling content in its proprietary formats.

We saw this trick before; Apple pulled it on the recording industry. First iTunes worked in partnership with the major record labels to distribute content, but soon Warner Music’s CEO Edgar Bronfman Jr. found that he wasn’t able to dictate a pricing model to Steve Jobs. The same thing will happen here; after Vista is firmly entrenched in the marketplace, Sony’s Howard Stringer won’t be able to dictate pricing or terms to Bill Gates. This is a war for 21st-century movie distribution and, when the dust settles, Hollywood won’t know what hit them.

To be fair, just last week Steve Jobs publicly came out against DRM for music. It’s a reasonable business position, now that Apple controls the online music distribution market. But Jobs never mentioned movies, and he is the largest single shareholder in Disney. Talk is cheap. The real question is whether he would actually allow iTunes Music Store purchases to play on Microsoft or Sony players, or whether this is just a clever way of deflecting blame to the already-hated music labels.

Microsoft is reaching for a much bigger prize than Apple: not just Hollywood, but also peripheral hardware vendors. Vista’s DRM will require driver developers to comply with all kinds of rules and be certified; otherwise, they won’t work. And Microsoft talks about expanding this to independent software vendors as well. It’s another war for control of the computer market.

Unfortunately, we users are caught in the crossfire. We are not only stuck with DRM systems that interfere with our legitimate fair-use rights for the content we buy, we’re stuck with DRM systems that interfere with all of our computer use—even the uses that have nothing to do with copyright.

I don’t see the market righting this wrong, because Microsoft’s monopoly position gives it much more power than we consumers can hope to have. It might not be as obvious as Microsoft using its operating system monopoly to kill Netscape and own the browser market, but it’s really no different. Microsoft’s entertainment market grab might further entrench its monopoly position, but it will cause serious damage to both the computer and entertainment industries. DRM is bad, both for consumers and for the entertainment industry: something the entertainment industry is just starting to realize, but Microsoft is still fighting. Some researchers think that this is the final straw that will drive Windows users to the competition, but I think the courts are necessary.

In the meantime, the only advice I can offer you is to not upgrade to Vista. It will be hard. Microsoft’s bundling deals with computer manufacturers mean that it will be increasingly hard not to get the new operating system with new computers. And Microsoft has some pretty deep pockets and can wait us all out if it wants to. Yes, some people will shift to Macintosh, and a smaller number to Linux, but most of us are stuck on Windows. Still, if enough customers say no to Vista, the company might actually listen.

This essay originally appeared on Forbes.com.

EDITED TO ADD (2/23): Some commentary.

Posted on February 12, 2007 at 10:37 AM

