Entries Tagged "economics of security"


Architecture and Security

You’ve seen them: those large concrete blocks in front of skyscrapers, monuments and government buildings, designed to protect against car and truck bombs. They sprang up like weeds in the months after 9/11, but the idea is much older. The prettier ones doubled as planters; the uglier ones just stood there.

Form follows function. From medieval castles to modern airports, security concerns have always influenced architecture. Castles appeared during the reign of King Stephen of England because they were the best way to defend the land and there wasn’t a strong king to put any limits on castle-building. But castle design changed over the centuries in response to both innovations in warfare and politics, from motte-and-bailey to concentric design in the late medieval period to entirely decorative castles in the 19th century.

These changes were expensive. The problem is that architecture tends toward permanence, while security threats change much faster. Something that seemed a good idea when a building was designed might make little sense a century—or even a decade—later. But by then it’s hard to undo those architectural decisions.

When Syracuse University built a new campus in the mid-1970s, the student protests of the late 1960s were fresh on everybody’s mind. So the architects designed a college without the open greens of traditional college campuses. It’s now 30 years later, but Syracuse University is stuck defending itself against an obsolete threat.

Similarly, hotel entries in Montreal were elevated above street level in the 1970s, in response to security worries about Quebecois separatists. Today the threat is gone, but those older hotels continue to be maddeningly difficult to navigate.

Also in the 1970s, the Israeli consulate in New York built a unique security system: a two-door vestibule that allowed guards to identify visitors and control building access. Now this kind of entryway is widespread, and buildings with it will remain unwelcoming long after the threat is gone.

The same thing can be seen in cyberspace as well. In his book, Code and Other Laws of Cyberspace, Lawrence Lessig describes how decisions about technological infrastructure—the architecture of the internet—become embedded and then impracticable to change. Whether it’s technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.

It’s dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions.

Concrete building barriers are an exception: They're removable. They started appearing in Washington, D.C., in 1983, after the truck bombing of the Marine barracks in Beirut. After 9/11, they were a sort of bizarre status symbol: They proved your building was important enough to deserve protection. In New York City alone, more than 50 buildings were protected in this fashion.

Today, they’re slowly coming down. Studies have found they impede traffic flow, turn into giant ashtrays and can pose a security risk by becoming flying shrapnel if exploded.

We should be thankful they can be removed, and did not end up as permanent aspects of our cities’ architecture. We won’t be so lucky with some of the design decisions we’re seeing about internet architecture.

This essay originally appeared on Wired.com (my 29th column).

EDITED TO ADD (11/3): Activism-restricting architecture at the University of Texas. And commentary from the Architectures of Control in Design Blog.

Posted on October 19, 2006 at 9:27 AM

Expensive Cameras in Checked Luggage

This is a blog post about the problems of being forced to check expensive camera equipment on airplanes:

Well, having lived in Kashmir for 12+ years I am well accustomed to this type of security. We haven't been able to have hand carries since 1990. We also cannot have batteries in any of our equipment, checked or otherwise. At least we have been able to carry our laptops on and recently been able to actually use them (with the batteries). But if things keep moving in this direction, and I'm sure they will, we need to start thinking now about checking our cameras and computers and how to do it safely.
This is a very unpleasant idea. Two years ago I ordered a Canon 20D and had it "hand carried" over to meet me in England by a friend. My friend put it in their checked bag. The bag never showed up. She did not have insurance, and all I got was $100 from British Airways for the camera and $500 from American Express (buyer's protection); that was it. So now it looks as if we are going to have to check our cameras and our computers involuntarily. OK, here are a few thoughts.

Pretty basic stuff, and we all know about the risks of putting expensive stuff in your checked luggage.

The interesting part is one of the blog comments, about halfway down. Another photographer wonders if the TSA rules for firearms could be extended to camera equipment:

Why not just have the TSA adopt the same check in rules for photographic and video equipment as they do for firearms?

All firearms must be in checked baggage, no carry on.

All firearms must be transported in a locked, hard-sided case using a non-TSA-approved lock. This is to prevent anyone from opening the case after it's been screened.

After bringing the equipment to the airline counter and declaring and showing the contents to the airline representative, you take it over to the TSA screening area, where it is checked by a screener, relocked in front of you, your key or keys returned to you (if it's not a combination lock), and put directly on the conveyor belt for loading onto the plane.

No markings, stickers or labels identifying what’s inside are put on the outside of the case or, if packed inside something else, the bag.

Might this solve the problem? I’ve never lost a firearm when flying.

Then someone has the brilliant suggestion of putting a firearm in your camera-equipment case:

A "weapon" is defined as a rifle, shotgun, pistol, airgun, and STARTER PISTOL. Yes, starter pistols—those little guns that fire blanks at track and swim meets—are considered weapons…and do NOT have to be registered in any state in the United States.

I have a starter pistol for all my cases. All I have to do upon check-in is tell the airline ticket agent that I have a weapon to declare…I’m given a little card to sign, the card is put in the case, the case is given to a TSA official who takes my key and locks the case, and gives my key back to me.

That’s the procedure. The case is extra-tracked…TSA does not want to lose a weapons case. This reduces the chance of the case being lost to virtually zero.

It’s a great way to travel with camera gear…I’ve been doing this since Dec 2001 and have had no problems whatsoever.

I have to admit that I am impressed with this solution.

Posted on September 22, 2006 at 12:17 PM

Organized Cybercrime

Cybercrime is getting organized:

Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.

Christopher Painter, deputy chief of the computer crimes and intellectual property section at the Department of Justice, said there had been a distinct shift in recent years in the type of cybercriminals that online detectives now encounter.

“There has been a change in the people who attack computer networks, away from the ‘bragging hacker’ toward those driven by monetary motives,” Painter told Reuters in an interview this week.

Although media reports often focus on stories about teenage hackers tracked down in their bedroom, the greater danger lies in the more anonymous virtual interlopers.

“There are still instances of these ‘lone-gunman’ hackers but more and more we are seeing organized criminal groups, groups that are often organized online targeting victims via the internet,” said Painter, in London for a cybercrime conference.

I’ve been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don’t think this article is fear and hype; it’s a real problem.

Posted on September 19, 2006 at 7:16 AM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you'll drop the red herring arguments that led to Check Point not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge, necessary only because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn't need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.
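The "safe failure" principle above can be illustrated with a minimal sketch (the function and data here are invented for illustration, not from any real system): when the security check itself encounters something unexpected, the system should fail closed—deny by default—rather than fail open.

```python
# A minimal illustration of "safe failure" / fail-closed design.
# The ACL and names are hypothetical, invented for this sketch.
def is_authorized(user: str, acl: dict) -> bool:
    """Grant access only on an explicit ACL entry; any failure denies."""
    try:
        return acl[user]   # access requires an explicit grant
    except KeyError:
        return False       # unknown user: fail closed, not open

acl = {"alice": True, "bob": False}
print(is_authorized("alice", acl))    # explicitly granted
print(is_authorized("mallory", acl))  # no entry: denied by default
```

The design choice is the `except` branch: a system that failed open—returning `True` when the lookup breaks—would turn every bug in the security layer into a bypass of it.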

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Land Title Fraud

There seems to be a small epidemic of land title fraud in Ontario, Canada.

What happens is someone impersonates the homeowner, and then sells the house out from under him. The former owner is still liable for the mortgage, but can’t get in his former house. Cleaning up the problem takes a lot of time and energy.

The problem is one of economic incentives. If banks were held liable for fraudulent mortgages, then the problem would go away really quickly. But as long as they’re not, they have no incentive to ensure that this fraud doesn’t occur. (They have some incentive, because the fraud costs them money, but as long as the few fraud cases cost less than ensuring the validity of every mortgage, they’ll just ignore the problem and eat the losses when fraud occurs.)
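The bank's calculus can be sketched with a back-of-the-envelope comparison. All of these numbers are hypothetical, chosen only to illustrate the incentive argument, not taken from any real data:

```python
# Hypothetical numbers (not from the post) comparing a bank's expected
# loss from ignoring title fraud against the cost of verifying every mortgage.
mortgages_per_year = 500_000
fraud_rate = 1e-5          # assume 1 fraudulent mortgage per 100,000
loss_per_fraud = 300_000   # assume the average fraudulent mortgage is written off
verification_cost = 50     # assume a per-mortgage identity check costs $50

expected_fraud_loss = mortgages_per_year * fraud_rate * loss_per_fraud
total_verification_cost = mortgages_per_year * verification_cost

print(f"eat the losses:  ${expected_fraud_loss:,.0f}/year")
print(f"verify everyone: ${total_verification_cost:,.0f}/year")
```

Under these assumptions, eating the fraud losses costs $1.5M a year while verifying everyone costs $25M—so a bank that bears no liability for the homeowner's losses rationally does nothing. Liability changes the numbers, not the bank's rationality.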

EDITED TO ADD (9/8): Another article.

Posted on September 8, 2006 at 6:43 AM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.
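The schedule itself is simple arithmetic—find the second Tuesday of any month. A short sketch (the function name is mine, not any official API):

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month ("Patch Tuesday")."""
    first_weekday = date(year, month, 1).weekday()      # Monday=0, Tuesday=1
    offset = (calendar.TUESDAY - first_weekday) % 7     # days to the 1st Tuesday
    return date(year, month, 1 + offset + 7)            # one more week: the 2nd

# The month this essay ran:
print(patch_tuesday(2006, 9))  # → 2006-09-12
```

Note that September's Patch Tuesday fell on the 12th—five days after this essay's DRM patch shipped out of band.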

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There's no better example of this principle in action than Microsoft's behavior around the vulnerability in its digital rights management software PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the break.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

Call Forwarding Credit Card Scam

This is impressive:

A fraudster contacts an AT&T service rep and says he works at a pizza parlor and that the phone is having trouble. Until things get fixed, he requests that all incoming calls be forwarded to another number, which he provides.

Pizza orders are thus routed by AT&T to the fraudster’s line. When a call comes in, the fraudster pretends to take the customer’s order but says payment must be made in advance by credit card.

The unsuspecting customer gives his or her card number and expiration date, and before you can say “extra cheese,” the fraudster is ready to go on an Internet shopping spree using someone else’s money.

Those of us who know security have been telling people not to trust incoming phone calls—that you should call the company if you are going to divulge personal information to them. Seems like that advice isn’t foolproof.

The problem is the phone company, of course. They’re forwarding calls based on an unauthenticated request. AT&T doesn’t really want to talk about details:

He was reluctant to discuss the steps AT&T has taken to improve its call-forwarding system so this sort of thing doesn’t happen again. What, for example, is to prevent someone from convincing AT&T to forward all calls to a local flower store or some other business that takes orders by phone?

“We had some guidelines in place that we believe were effective,” Britton said. “Now we have extra precautions.”

It seems to me that AT&T would solve this problem more quickly if it were liable. Shouldn’t a pizza customer who has been scammed be allowed to sue AT&T? After all, the phone company didn’t route the customer’s calls properly. Does the credit card company have a basis for a suit? Certainly the pizza parlor does, but the effects of AT&T’s sloppy authentication are much greater than a few missed pizza orders.

Posted on August 21, 2006 at 1:35 PM

DHS Report on US-VISIT and RFID

Department of Homeland Security, Office of the Inspector General, “Enhanced Security Controls Needed For US-VISIT’s System Using RFID Technology (Redacted),” OIG-06-39, June 2006.

From the Executive Summary:

We audited the Department of Homeland Security (DHS) and select organizational components’ security programs to evaluate the effectiveness of controls implemented on Radio Frequency Identification (RFID) systems. Systems employing RFID technology include a tag and reader on the front end and an application and database on the back end.

[…]

Overall, information security controls have been implemented to provide an effective level of security on the Automated Identification Management System (AIDMS). US-VISIT has implemented effective physical security controls over the RFID tags, readers, computer equipment, and database supporting the RFID system at the POEs visited. No personal information is stored on the tags used for US-VISIT. Travelers’ personal information is maintained in and can be obtained only with access to the system’s database. Additional security controls would need to be implemented if US-VISIT decides to store travelers’ personal information on RFID-enabled forms or migrates to universally readable Generation 2 (Gen2) products.

Although these controls provide overall system security, US-VISIT has not properly configured its AIDMS database to ensure that data captured and stored is properly protected. Furthermore, while AIDMS is operating with an Authority to Operate, US-VISIT had not tested its contingency plan to ensure that critical operations could be restored in the event of a disruption. In addition, US-VISIT has not developed its own RFID policy or ensured that the standard operating procedures are properly distributed and followed at all POEs.

I wrote about US-VISIT in 2004 and again in 2006. In that second essay, I gave a price of $15B. I have since come to not believe that data, and I don’t have any better information on the price. But I still think my analysis holds. I would much rather take the money spent on US-VISIT and spend it on intelligence and investigation, the kind of security that resulted in the U.K. arrests earlier this week and is likely to actually make us safer.

Posted on August 11, 2006 at 7:27 AM

