Blog: June 2006 Archives

Friday Squid Blogging: Former Squid Researcher to Lead Episcopal Church

Bishop Katharine Jefferts Schori of Nevada was elected as presiding bishop of the Episcopal Church:

A former research oceanographer who studied squid, octopuses and creatures living in marine mud, she was a second-career priest who was ordained in 1994.

The jokes have begun:

One wag noted that the study of invertebrates makes Bishop Schori supremely qualified to rule the ECUSA. She’s studied oysters and squids…this is a mental picture that I really did not need. Is this a case of ‘squid pro quo’?

Do you suspect that ECUSA elected an oceanographer as its primate in recognition that it is floundering?

Posted on June 30, 2006 at 3:41 PM · 13 Comments

Microsoft Windows Kill Switch

Does Microsoft have the ability to disable Windows remotely? Maybe:

Two weeks ago, I wrote about my serious objections to Microsoft’s latest salvo in the war against unauthorized copies of Windows. Two Windows Genuine Advantage components are being pushed onto users’ machines with insufficient notification and inadequate quality control, and the result is a big mess. (For details, see Microsoft presses the Stupid button.)

Guess what? WGA might be on the verge of getting even messier. In fact, one report claims WGA is about to become a Windows “kill switch,” and when I asked Microsoft for an on-the-record response, they refused to deny it.

And this, supposedly from someone at Microsoft Support:

He told me that “in the fall, having the latest WGA will become mandatory and if it’s not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn’t installed, Windows will stop working, so you might as well install WGA now.”

The stupidity of this idea is amazing. Not just the inevitability of false positives, but the potential for a hacker to co-opt the controls. I hope this rumor ends up not being true.

Although if they actually do it, the backlash could do more for non-Windows OSs than anything those OSs could do for themselves.

Posted on June 30, 2006 at 11:51 AM · 116 Comments

Password-Protected Bullets

New invention, just patented:

Meyerle is patenting a design for a modified cartridge that would be fired by a burst of high-frequency radio energy. But the energy would only ignite the charge if a solid-state switch within the cartridge had been activated. This would only happen if a password entered into the gun using a tiny keypad matched one stored in the cartridge.

When they are sold, cartridges could be programmed with a password that matches the purchaser’s gun. An owner could set the gun to request the password when it is reloaded, or to perform a biometric check before firing. The gun could also automatically lock itself after a pre-set period of time has passed since the password was entered.
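The authorization scheme described above — a per-cartridge password, an optional recheck on reload, and an automatic re-lock after a timeout — can be sketched in a few lines. This is purely illustrative; the class names, API, and timeout value are my assumptions, not anything from the patent:

```python
import time

# Hypothetical sketch of the cartridge-authorization logic described in
# the patent summary. All names and the timeout value are illustrative.

class SmartCartridge:
    def __init__(self, password: str):
        self._password = password  # programmed to match the gun at point of sale

    def matches(self, entered: str) -> bool:
        return entered == self._password

class SmartGun:
    def __init__(self, unlock_window: float = 300.0):
        self.unlock_window = unlock_window  # seconds before auto-lock
        self._unlocked_at = None

    def enter_password(self, cartridge: SmartCartridge, entered: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if cartridge.matches(entered):
            self._unlocked_at = now  # start the auto-lock countdown
            return True
        return False

    def can_fire(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return (self._unlocked_at is not None
                and now - self._unlocked_at <= self.unlock_window)

cart = SmartCartridge("1234")
gun = SmartGun(unlock_window=300.0)
gun.enter_password(cart, "1234", now=0.0)
print(gun.can_fire(now=10.0))   # True: within the unlock window
print(gun.can_fire(now=600.0))  # False: auto-locked after the timeout
```

Note that the security of the real design rests on the solid-state switch inside the cartridge, not on software like this; the sketch only shows the control flow the article describes.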

Posted on June 30, 2006 at 6:41 AM · 59 Comments

Economics and Information Security

I’m sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.

I’m in this awkward situation because 1) this article is due tomorrow, and 2) I’m attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.

The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon it independently: he in his brilliant 2001 article, “Why Information Security Is Hard—An Economic Perspective” (.pdf), and I in various essays and presentations from the same period.

WEIS began a year later at the University of California at Berkeley and has grown ever since. It’s the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.

And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.

When you start looking, economic considerations are everywhere in computer security. Hospitals’ medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients’ privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.

In all of these examples, the economic considerations of security are more important than the technical considerations.

More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and Army services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.

Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that’s because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.

Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive—or not restrictive enough—to maximize society’s creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft’s Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.

WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf)—if you have an uncommon phone, the police probably don’t have the tools to perform forensic analysis—and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.

Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.

This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each others’ languages. But now it seems that there’s a lot more synergy, and more collaborations between the two camps.

I’ve long said that the fundamental problems in computer security are no longer about technology; they’re about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we’re going to improve security in the information age.

This essay originally appeared on Wired.com.

Posted on June 29, 2006 at 4:31 PM · 56 Comments

Wiretappers' Conference

I can’t believe I forgot to blog this great article about the communications intercept trade show in DC earlier this month:

“You really need to educate yourself,” he insisted. “Do you think this stuff doesn’t happen in the West? Let me tell you something. I sell this equipment all over the world, especially in the Middle East. I deal with buyers from Qatar, and I get more concern about proper legal procedure from them than I get in the USA.”

Read the whole thing.

Posted on June 29, 2006 at 1:43 PM · 6 Comments

Schneier Asks to Be Hacked

Maybe I shouldn’t have said this:

“I have a completely open Wi-Fi network,” Schneier told ZDNet UK. “Firstly, I don’t care if my neighbors are using my network. Secondly, I’ve protected my computers. Thirdly, it’s polite. When people come over they can use it.”

For the record, I have an ultra-secure wireless network that automatically reports all hacking attempts to unsavory men with bitey dogs.

Posted on June 28, 2006 at 1:23 PM · 72 Comments

Applying CALEA to VoIP

“Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP,” a paper by Steve Bellovin, Matt Blaze, Ernie Brickell, Clint Brooks, Vint Cerf, Whit Diffie, Susan Landau, Jon Peterson, and John Treichler.

Executive Summary

For many people, Voice over Internet Protocol (VoIP) looks like a nimble way of using a computer to make phone calls. Download the software, pick an identifier and then wherever there is an Internet connection, you can make a phone call. From this perspective, it makes perfect sense that anything that can be done with a telephone, including the graceful accommodation of wiretapping, should be able to be done readily with VoIP as well.

The FCC has issued an order for all “interconnected” and all broadband access VoIP services to comply with Communications Assistance for Law Enforcement Act (CALEA)—without specific regulations on what compliance would mean. The FBI has suggested that CALEA should apply to all forms of VoIP, regardless of the technology involved in the VoIP implementation.

Intercepting a VoIP call made from a fixed location with a fixed IP address directly to a big Internet provider’s access router is equivalent to wiretapping a normal phone call, and classical PSTN-style CALEA concepts can be applied directly. In fact, these intercept capabilities can be exactly the same in the VoIP case if the ISP secures its infrastructure and wiretap control process as well as the PSTN’s central offices are assumed to do.

However, the network architectures of the Internet and the Public Switched Telephone Network (PSTN) are substantially different, and these differences lead to security risks in applying CALEA to VoIP. VoIP, like most Internet communications, is designed for a mobile environment. The feasibility of applying CALEA to more decentralized VoIP services is quite problematic: it is not clear that such a wiretapping regime could be managed, or that it could be made secure against subversion. The real danger is that a CALEA-type regime is likely to introduce serious vulnerabilities through its “architected security breach.”

Potential problems include the difficulty of determining where the traffic is coming from (the VoIP provider enables the connection but may not provide the services for the actual conversation), the difficulty of ensuring safe transport of the signals to the law-enforcement facility, the risk of introducing new vulnerabilities into Internet communications, and the difficulty of ensuring proper minimization. VoIP implementations vary substantially across the Internet, making it impossible to implement CALEA uniformly. Mobility and the ease of creating new identities on the Internet exacerbate the problem.

Building a comprehensive VoIP intercept capability into the Internet appears to require the cooperation of a very large portion of the routing infrastructure, and the fact that packets are carrying voice is largely irrelevant. Indeed, most of the provisions of the wiretap law do not distinguish among different types of electronic communications. Currently the FBI is focused on applying CALEA’s design mandates to VoIP, but there is nothing in wiretapping law that would argue against the extension of intercept design mandates to all types of Internet communications. Indeed, the changes necessary to meet CALEA requirements for VoIP would likely have to be implemented in a way that covered all forms of Internet communication.

In order to extend authorized interception much beyond the easy scenario, it is necessary either to eliminate the flexibility that Internet communications allow, or else introduce serious security risks to domestic VoIP implementations. The former would have significant negative effects on U.S. ability to innovate, while the latter is simply dangerous. The current FBI and FCC direction on CALEA applied to VoIP carries great risks.

Posted on June 28, 2006 at 12:01 PM · 10 Comments

Congress Learns How Little Privacy We Have

Reuters story:

Almost every piece of personal information that Americans try to keep secret—including bank account statements, e-mail messages and telephone records—is semi-public and available for sale.

That was the lesson Congress learned over the last week during a series of hearings aimed at exposing peddlers of personal data, from whom banks, car dealers, jealous lovers and even some law enforcement officers have covertly purchased information to use as they wish.

And:

The committee subpoenaed representatives from 11 companies that use the Internet and phone calls to obtain, market, and sell personal data, but they refused to talk.

All invoked their constitutional right to not incriminate themselves when asked whether they sold “personal, non-public information” that had been obtained by lying or impersonating someone.

Posted on June 28, 2006 at 7:39 AM · 25 Comments

Ignoring the "Great Firewall of China"

Richard Clayton is presenting a paper (blog post here) that discusses how to defeat China’s national firewall:

…the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection—and obey. Hence the censorship occurs.

However, because the original packets are passed through the firewall unscathed, if both of the endpoints were to completely ignore the firewall’s reset packets, then the connection would proceed unhindered! We’ve done some real experiments on this—and it works just fine!! Think of it as the Harry Potter approach to the Great Firewall—just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary working. If you want to be a little more clever you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with them arriving from the far end, or if the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks using reset packets. Having operating system vendors provide this new functionality as standard would also be of practical use because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which they would necessarily tolerate).
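The TTL sanity check Clayton describes can be sketched simply: if a RST arrives with a remaining hop count that implies it originated much nearer than the real peer, it was probably injected by an on-path box. A toy illustration (the threshold and the assumption of common OS initial TTLs of 64/128/255 are mine, not from the paper):

```python
# Toy sketch of the TTL check described above: flag TCP RSTs whose
# apparent origin distance differs sharply from the real peer's.
# Thresholds are illustrative; real deployments do this in the kernel
# or firewall, with per-connection TTL tracking.

COMMON_INITIAL_TTLS = (64, 128, 255)  # typical OS defaults

def hop_distance(observed_ttl: int) -> int:
    """Estimate hops travelled, assuming the sender used a common initial TTL."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def reset_looks_injected(rst_ttl: int, peer_ttl: int, slack: int = 3) -> bool:
    """True if the RST seems to come from a middlebox, not the real peer."""
    return abs(hop_distance(rst_ttl) - hop_distance(peer_ttl)) > slack

# The peer's normal packets arrive with TTL 49 (64 minus 15 hops); a
# censorship box 5 hops away injects RSTs that arrive with TTL 59.
print(reset_looks_injected(59, 49))  # True: drop the reset, keep the connection
print(reset_looks_injected(48, 49))  # False: plausibly from the real peer
```

In practice the "drop" action is a firewall rule (e.g., discarding inbound RSTs for established connections), with the TTL comparison as the more selective refinement the authors propose.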

Posted on June 27, 2006 at 1:13 PM · 94 Comments

Employee Theft at Australian Mint

You’d think a national mint would have better security against insiders.

But Justice Connolly also criticised security at the mint, saying he was amazed a theft on this scale could happen.

The court heard Grzeskowiac, 48, of the southern Canberra suburb of Monash, simply scooped coins from the production line into his pockets before transferring them to his boots or lunchbox in a toilet cubicle.

Over a 10-month period he walked out with an average of $600 a day.

Justice Connolly expressed astonishment that the mint’s security procedures were so lax.

“I find it hard to believe that 150 coins could be concealed in each boot and a person could still walk through the security system,” he said.

Justice Connolly also said he was amazed the mint could give no indication of just how many coins had actually gone missing.

“I would like to think those working at the other mint factory printing $100 notes might be subject to a better system of security,” he said.

Posted on June 27, 2006 at 7:45 AM · 26 Comments

Yet Another Redacting Failure

This sort of thing happens so often it’s no longer news:

Conte’s e-mails were intended to be blacked out in a 51-page electronic filing Wednesday in which the government argued against the Chronicle’s motion to quash the subpoena. Eight of those pages were not supposed to be public.

But the redacted parts in the computer file could be seen by copying them and pasting the material in a word processing program.

Another news article here.

Posted on June 26, 2006 at 12:29 PM · 14 Comments

MySpace Increases Security

According to CNN:

Besides the contact restrictions, all users—not just those 14 and 15—will have the option to make only partial profiles available to those not already on their friends list.

All users also will get an option to prevent contact from people outside their age group. Currently, they may only choose to require that a person know their e-mail or last name first; that will remain an option to those 16 and over, even as it becomes mandatory for those younger.

MySpace also will beef up its ad-targeting technology, so that it can avoid displaying gambling and other adult-themed ads on minors’ profile pages and target special public-service announcements to them.

Honestly, this all sounds a lot more like cover-your-ass security than real security: MySpace securing itself from lawsuits.

“Safety experts” seem to agree that it won’t improve security much.

Posted on June 26, 2006 at 8:20 AM · 31 Comments

AT&T Rewrites its Privacy Policy

AT&T has a new privacy policy, and if you are its customer you have no choice but to accept it.

The new policy says that AT&T—not customers—owns customers’ confidential info and can use it “to protect its legitimate business interests, safeguard others, or respond to legal process.”

The policy also indicates that AT&T will track the viewing habits of customers of its new video service—something that cable and satellite providers are prohibited from doing.

Moreover, AT&T (formerly known as SBC) is requiring customers to agree to its updated privacy policy as a condition for service—a new move that legal experts say will reduce customers’ recourse for any future data sharing with government authorities or others.

EDITED TO ADD (6/27): User Friendly on the issue.

Posted on June 23, 2006 at 6:03 AM · 57 Comments

Greek Wiretapping Scandal

Back in February, I wrote about a major wiretapping scandal in Greece. The Wall Street Journal has a really interesting article (link only good for a week, unfortunately) about it:

Behind the bugging operation were two pieces of sophisticated software, according to Ericsson. One was Ericsson’s own, some basic elements of which came as a preinstalled feature of the network equipment. When enabled, the feature can be used for lawful interception by government authorities, which has become increasingly common since the Sept. 11 terror attacks. But to use the interception feature, operators like Vodafone would need to pay Ericsson millions of dollars to purchase the additional hardware, software and passwords that are required to activate it. Both companies say Vodafone hadn’t done that in Greece at the time.

The second element was the rogue software that the eavesdroppers implanted in parts of Vodafone’s network to achieve two things: activate the Ericsson-made interception feature and at the same time hide all traces that the feature was in use. Ericsson, which analyzed the software in conjunction with Greece’s independent telecom watchdog, says it didn’t design, develop or install the rogue software.

The software allowed the cellphone calls of the targeted individuals to be monitored via 14 prepaid cellphones, according to the government officials and telecom experts probing the matter. They say when calls to or from one of the more than 100 targeted phones were made, the rogue software enabled one of the interceptor phones to be connected also.

The interceptor phones likely enabled conversations to be secretly recorded elsewhere, the government said during a February 2006 news conference. At least some of the prepaid cellphones were activated between June and August 2004. Such cellphones, particularly when paid for in cash, typically are harder to trace than those acquired with a monthly subscription plan.

Vodafone claims it didn’t know that even the basic elements of the legal interception software were included in the equipment it bought. Ericsson never informed the service provider’s top managers in Greece that the features were included nor was there a “special briefing” to the relevant technical division, according to a Vodafone statement in March.

But Ericsson’s top executive in Greece, Bill Zikou, claimed during parliamentary-committee testimony that his company had informed Vodafone about the feature via its sales force and instruction manuals.

Vodafone and Ericsson discovered something was amiss in late January 2005 when some Greek cellphone users started complaining about problems sending text messages. Vodafone asked Ericsson to look into the issue. Ericsson’s technicians spent several weeks trying to figure out the problem, with help from the equipment maker’s technical experts at its headquarters in Sweden. In early March of that year, Ericsson’s technicians told Vodafone’s technology director in Greece of their unusual discovery about the cause of the problems: software that appeared to be capable of illegally monitoring calls. It’s unclear exactly how the rogue software caused the text-messaging problem.

Ericsson confirmed the software was able to monitor calls, and Vodafone soon discovered that the targeted phones included those used by some of the country’s most important officials. On March 8, Mr. Koronias ordered that the illegal bugging program be shut down, in a move he has said was made to protect the privacy of its customers. He called the prime minister’s office the next evening.

The head of Greece’s intelligence service, Ioannis Korantis, said in testimony before the parliamentary committee last month that Vodafone’s disabling of the software before authorities could investigate hampered their efforts. “From the moment that the software was shut down, the string broke that could have led us to who was behind this,” he said. Separately, he distanced his own agency from the bugging effort, saying it didn’t have the technical know-how to effectively monitor cellphone calls.

Posted on June 22, 2006 at 1:25 PM · 24 Comments

Privacy-Enhanced Data Mining

There are a variety of encryption technologies that allow you to analyze data without knowing details of the data:

Largely by employing the head-spinning principles of cryptography, the researchers say they can ensure that law enforcement, intelligence agencies and private companies can sift through huge databases without seeing names and identifying details in the records.

For example, manifests of airplane passengers could be compared with terrorist watch lists—without airline staff or government agents seeing the actual names on the other side’s list. Only if a match were made would a computer alert each side to uncloak the record and probe further.

“If it’s possible to anonymize data and produce … the same results as clear text, why not?” John Bliss, a privacy lawyer in IBM’s “entity analytics” unit, told a recent workshop on the subject at Harvard University.

This is nothing new. I’ve seen papers on this sort of stuff since the late 1980s. The problem is that no one in law enforcement has any incentive to use them. Privacy is rarely a technological problem; it’s far more often a social or economic problem.
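One classic construction behind this kind of matching is commutative blinding: each side raises hashed identifiers to a secret exponent, so the lists can be compared in double-blinded form without either side seeing the other's names in the clear. Here is a toy sketch, assuming a DDH-style scheme; the prime, exponents, and names are all illustrative, and a real system would use a proper prime-order group and a vetted protocol:

```python
import hashlib

# Toy private-set-intersection sketch: an airline manifest is matched
# against a watch list, revealing only how many entries match, not the
# names themselves. Illustrative parameters only; not secure as written.

P = 2**127 - 1  # a Mersenne prime; real systems use a vetted prime-order group

def h(item: str) -> int:
    """Hash an identifier into the group (toy hash-to-group)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items, secret):
    """Raise each hashed item to this party's secret exponent."""
    return {pow(h(x), secret, P) for x in items}

def double_blind(blinded, secret):
    """Apply the other party's exponent; exponentiation mod P commutes."""
    return {pow(v, secret, P) for v in blinded}

manifest = ["alice", "bob", "carol"]   # airline's side
watchlist = ["bob", "mallory"]         # agency's side

a, b = 0x1234567, 0x7654321  # each side's private exponent (toy values)

A = blind(manifest, a)   # airline sends these; names are hidden
B = blind(watchlist, b)  # agency sends these; names are hidden

# Each side blinds the other's set with its own secret and compares:
# h(x)^(a*b) is the same regardless of which exponent was applied first.
matches = double_blind(A, b) & double_blind(B, a)
print(len(matches))  # 1 -> exactly one passenger matched, without revealing who
```

Only after a match would the parties run a follow-up step to uncloak the specific record, as the quoted example describes. The sketch also shows why these schemes are delicate: over a small namespace, a dictionary attack on the blinded values can still leak information.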

Posted on June 20, 2006 at 6:26 AM · 29 Comments

Patrick Smith on Airline Security

Patrick Smith writes the “Ask the Pilot” column for Salon. He’s written two very good posts on airline security, one about how Israel’s system won’t work in the U.S., and the other about profiling:

…here’s a more useful quiz:

  • In 1985, Air India Flight 182 was blown up over the Atlantic by:

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Bill O’Reilly
    c. The Mormon Tabernacle Choir
    d. Indian Sikh extremists, in retaliation for the Indian Army’s attack on the Golden Temple shrine in Amritsar

  • In 1986, who attempted to smuggle three pounds of explosives onto an El Al jetliner bound from London to Tel Aviv?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Michael Smerconish
    c. Bob Mould
    d. A pregnant Irishwoman named Anne Murphy

  • In 1962, in the first-ever successful sabotage of a commercial jet, a Continental Airlines 707 was blown up with dynamite over Missouri by:

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Ann Coulter
    c. Henry Rollins
    d. Thomas Doty, a 34-year-old American passenger, as part of an insurance scam

  • In 1994, who nearly succeeded in skyjacking a DC-10 and crashing it into the Federal Express Corp. headquarters?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Michelle Malkin
    c. Charlie Rose
    d. Auburn Calloway, an off-duty FedEx employee and resident of Memphis, Tenn.

  • In 1974, who stormed a Delta Air Lines DC-9 at Baltimore-Washington Airport, intending to crash it into the White House, and shot both pilots?

    a. Muslim male extremists mostly between the ages of 17 and 40
    b. Joe Scarborough
    c. Spalding Gray
    d. Samuel Byck, an unemployed tire salesman from Philadelphia

The answer, in all cases, is D.

Racial profiling doesn’t work against terrorism, because terrorists don’t fit any racial profile.

Posted on June 19, 2006 at 7:22 AM · 55 Comments

Border Security and the DHS

Surreal story about a person coming into the U.S. from Iraq who is held up at the border because he used to sell copyrighted images on T-shirts:

Homeland Security, the $40-billion-a-year agency set up to combat terrorism after 9/11, has been given universal jurisdiction and can hold anyone on Earth for crimes unrelated to national security—even me for a court date I missed while I was in Iraq helping America deter terror—without asking what I had been doing in Pakistan among Islamic extremists the agency is designated to stop. Instead, some of its actions are erasing the lines of jurisdiction between local police and the federal state, scarily bringing the words “police” and “state” closer together. As long as we allow Homeland Security to act like a Keystone Stasi, terrorism will continue to win in destroying our freedom.

Kevin Drum mentions it, too.

Posted on June 16, 2006 at 9:31 AM · 26 Comments

Movie-Plot Threat Contest Winner

I can tell you one thing, you guys are really imaginative. The response to my Movie-Plot Threat Contest was more than I could imagine: 892 comments. I printed them all out—195 pages, double sided—and spiral bound them, so I could read them more easily. The cover read: “The Big Book of Terrorist Plots.” I tried not to wave it around too much in airports.

I almost didn’t want to pick a winner, because the real point is the enormous list of them all. And because it’s hard to choose. But after careful deliberation (see selection criteria here), the winning entry is by Tom Grant. Although planes filled with explosives are already a cliché, destroying the Grand Coulee Dam is inspired. Here it is:

Mission: Terrorize Americans. Neutralize American economy, make America feel completely vulnerable, and all Americans unsafe.

Scene 1: A rented van drives from Spokane, WA, to a remote setting in Idaho and loads up with shoulder-mounted rocket launchers and a couple of people dressed in fatigues.

Scene 2: Terrorists dressed in “delivery man” garb take over the UPS cargo depot at the Spokane, WA, airport. A van full of explosives is unloaded at the depot.

Scene 3: Terrorists dressed in “delivery man” garb take over the UPS cargo depot at the Kamloops, BC, airport. A van full of explosives is unloaded at the depot.

Scene 4: A van with mercenaries drives through the Idaho forests en route to an unknown destination. Receives cell communiqué that locations Alpha and Bravo are secured.

Scene 5: UPS cargo plane lands in Kamloops and is met at the depot by terrorists who overtake the plane and its crew. Explosives are loaded aboard the aircraft. The same scene plays out in Spokane moments later, and that plane is loaded with explosives. Two pilots board each of the cargo planes and ask for takeoff instructions as night falls across the West.

Scene 6: Two cargo jets go airborne from two separate locations. A van with four terrorists arrives at its destination, parked on an overlook ridge just after nightfall. They use infrared glasses to scope the target. The camera pans down and away from the van, exposing the target. Grand Coulee Dam. The cell phone rings and notification comes to the leader that “Nighthawks alpha and bravo have launched.”

Scene 7: Two radar operators in separate locations note with alarm that UPS cargo jets they have been tracking have dropped off the radar and may have crashed. Aboard each craft the pilots have turned off navigational radios and are flying on “manual” at low altitude. One heading South, one heading North.

Scene 8: Planes are closing in on the “target” and the rocket launcher crew goes to work. With precision they strike lookout and defense positions on the dam, then target the office structures below. As they finish, a cargo jet approaches from the North at high velocity, slamming into the back side of the dam just above the waterline and exploding, shuddering the earth. A large portion of the center-top of the dam is missing. Within seconds a cargo plane coming from the South slams into the front face of the dam, closer to the base, and explodes in a blinding flash, shuddering the earth. In moments, the dam begins to fail, and a final volley from four rocket launchers on the hill above helps break open the face of the dam. The 40-mile-long Lake Roosevelt begins to pour down the Columbia River Valley, uncontrolled. No warning is given to the dams downriver, other than the generation at G.C. is now offline.

Scene 9: Through the night, the surging wall of water roars down the Columbia waterway, overtopping dam after dam and gaining momentum (and huge amounts of water) along the way. The cities of Wenatchee and Kennewick are inundated and largely swept away. A van of renegades retreats to Northern Idaho to hide.

Scene 10: As day breaks in the West, there is no power from Seattle to Los Angeles. The Western power grid has failed. Commerce has ground to a halt west of the Rocky Mountains. Water is sweeping down the Columbia River gorge, threatening to overtop Bonneville dam and wipe out the large metro area of Portland, OR.

Scene 11: Bin Laden releases a video on Al Jazeera that claims victory over the Americans.

Scene 12: Pandemonium, as water sweeps into a panicked Portland, Oregon, washing all away in its path, and surging water well up the Willamette valley.

Scene 13: Washington situation room…little input is coming in from the West. Some military bases have emergency power and sat phones, and are reporting that the devastation of the dam infrastructure is complete. Seven major and five minor dams have been destroyed. Re-powering the West coast will take months, as connections from the Eastern grid will have to be made through the New Mexico Mountains.

Scene 14: Worst U.S. market crash in history. America’s GNP drops from the top of the charts to 20th worldwide. Exports and imports cease on the West coast. Martial law fails to control mass exodus from Seattle, San Francisco, and L.A. as millions flee to the east. Gas shortages and vigilante mentality take their toll on the panicked populace. The West is “wild” once more. The East is overrun with millions seeking homes and employment.

Congratulations, Tom. I’m still trying to figure out what you win.

There’s a more coherent essay about this on Wired.com, but I didn’t reprint it here because it contained too much that I’ve already posted on this blog.

Posted on June 15, 2006 at 2:37 PM • 59 Comments

NSA Combing Through MySpace

No surprise.

New Scientist has discovered that Pentagon’s National Security Agency, which specialises in eavesdropping and code-breaking, is funding research into the mass harvesting of the information that people post about themselves on social networks. And it could harness advances in internet technology – specifically the forthcoming “semantic web” championed by the web standards organisation W3C – to combine data from social networking websites with details such as banking, retail and property records, allowing the NSA to build extensive, all-embracing personal profiles of individuals.

Posted on June 15, 2006 at 6:13 AM • 43 Comments

$1M VoIP Scam

Lots of details.

The basic service that Pena provided is not uncommon. Telecommunications brokers often buy long-distance minutes from carriers—especially VoIP carriers—and then re-sell those minutes directly to customers. They make money by marking up the services they buy from carriers.

Pena sold minutes to customers, but rather than buy the minutes, he instead decided to hack into the Internet phone company networks, and route calls over those networks surreptitiously, say prosecutors. So he had to pay virtually no costs for providing phone service.

Posted on June 13, 2006 at 2:15 PM • 15 Comments

U.S./Mexican Security Barrier

Great article comparing the barrier Israel is erecting to protect itself from the West Bank with the hypothetical barrier the U.S. would build to protect itself from Mexico:

The Israeli West Bank barrier, when finished, will run for more than 400 miles and will consist of trenches, security roads, electronic fences, and concrete walls. Its main goal is to stop terrorists from detonating themselves in restaurants and cafes and buses in the cities and towns of central Israel. So, planners set the bar very high: It is intended to prevent every single attempt to cross it. The rules of engagement were written accordingly. If someone trying to cross the fence in the middle of the night is presumed to be a terrorist, there’s no need to hesitate before shooting. To kill.

As such, the Israeli fence is very efficient. The number of fatalities from terror attacks within Israel dropped from more than 130 in 2003 to fewer than 25 in 2005. The number of bombings fell from dozens to fewer than 10. The cost for Israel is in money and personnel; the cost for Palestinians is in unemployment, health, frustration, and blood. The demographic benefit—keeping out the Palestinians—is just another positive side effect for the Israelis.

No wonder the fence is considered a good deal by those living on its western side. But applying this model to the U.S.-Mexico border will not be easy. U.S. citizens will find it hard to justify such tough measures when their only goal is to stop people coming in for work—rather than preventing them from trying to commit murder. And the cost will be more important. It’s much easier to open your wallet when someone is threatening to blow up your local cafe.

Posted on June 13, 2006 at 6:50 AM • 83 Comments

The Security of RFID Cards

Interesting paper on the security of contactless smartcards:

Interestingly, the outcome of this investigation shows that contactless smartcards are not fundamentally less secure than contact cards. However, some attacks are inherently facilitated. Therefore both the user and the issuer should be aware of these threats and take them into account when building or using the systems based on contactless smartcards.

Posted on June 11, 2006 at 7:04 AM • 21 Comments

New Directions in Chemical Warfare

From New Scientist:

The Pentagon considered developing a host of non-lethal chemical weapons that would disrupt discipline and morale among enemy troops, newly declassified documents reveal.

Most bizarre among the plans was one for the development of an “aphrodisiac” chemical weapon that would make enemy soldiers sexually irresistible to each other. Provoking widespread homosexual behaviour among troops would cause a “distasteful but completely non-lethal” blow to morale, the proposal says.

Other ideas included chemical weapons that attract swarms of enraged wasps or angry rats to troop positions, making them uninhabitable. Another was to develop a chemical that caused “severe and lasting halitosis”, making it easy to identify guerrillas trying to blend in with civilians. There was also the idea of making troops’ skin unbearably sensitive to sunlight.

Technology always gets better; it never gets worse. There will be a time, probably in our lifetimes, when weapons like these will be real.

Posted on June 9, 2006 at 1:33 PM • 69 Comments

Privacy as Contextual Integrity

Interesting law review article by Helen Nissenbaum:

Abstract: The practices of public surveillance, which include the monitoring of individuals in public through a variety of media (e.g., video, data, online), are among the least understood and controversial challenges to privacy in an age of information technologies. The fragmentary nature of privacy policy in the United States reflects not only the oppositional pulls of diverse vested interests, but also the ambivalence of unsettled intuitions on mundane phenomena such as shopper cards, closed-circuit television, and biometrics. This Article, which extends earlier work on the problem of privacy in public, explains why some of the prominent theoretical approaches to privacy, which were developed over time to meet traditional privacy challenges, yield unsatisfactory conclusions in the case of public surveillance. It posits a new construct, ‘contextual integrity’ as an alternative benchmark for privacy, to capture the nature of challenges posed by information technologies. Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it. Building on the idea of ‘spheres of justice’ developed by political philosopher Michael Walzer, this Article argues that public surveillance violates a right to privacy because it violates contextual integrity; as such, it constitutes injustice and even tyranny.

Posted on June 9, 2006 at 7:11 AM • 45 Comments

Hacking Computers Over USB

I’ve previously written about the risks of small portable computing devices; how more and more data can be stored on them, and then lost or stolen. But there’s another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.

Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod or USB’s internal storage, and hide them as “deleted” files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that’s likely to be with us for many years to come.

The article has the details, but basically you can configure a file on your USB device to automatically run when it’s plugged into a computer. That file can, of course, do anything you want it to.
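
As a hypothetical illustration (not taken from the article), the mechanism is driven by an autorun.inf file in the root of the device; `payload.exe` is an assumed name for the attacker's program:

```ini
[autorun]
; Runs automatically on insertion if AutoRun is enabled for the device.
open=payload.exe
; Disguise the entry in the AutoPlay dialog as something innocuous.
icon=payload.exe
action=Open folder to view files
```

Plain USB flash drives don't always get the full AutoRun treatment that CDs do, which is why some attack tools emulate a CD-ROM drive to guarantee the file executes.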

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not on line). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up—and then steal his passwords and critical files.

And here’s an article about someone who used this trick in a penetration test:

We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.

There is a defense. From the first article:

AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what’s on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the “Shift” key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for “disable autorun”) or you can download and use Microsoft’s TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting “Take no action” for each kind of disk that’s listed. Unfortunately, disabling AutoPlay for CDs won’t always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
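
The registry edit the article alludes to can be sketched as a .reg file like the following; the key and value are the standard AutoRun policy setting, but as with any registry change, treat this as a starting point rather than a drop-in fix:

```ini
Windows Registry Editor Version 5.00

; 0xFF sets the bit for every drive type, disabling AutoRun on all of them
; for the current user. Use HKEY_LOCAL_MACHINE to apply it machine-wide.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```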

In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.

EDITED TO ADD (6/12): In the penetration test, they didn’t use AutoRun.

Posted on June 8, 2006 at 1:34 PM • 55 Comments

The Doghouse: KRYPTO 2.0

The website is hysterical:

Why are 256 bits the technically highest coding depth at all on computers possible are ?

A computer knows only 256 different indications.
1 indication = 1 byte has 8 bits in binary the number system exactly.
1 bit knows only the switching status: on or out or 0 or 1 by the combination of these 8 bits results 256 bits.
The computation in addition: 2 switching status highly 8 bits = 256 bits these 256 bits
is addressed in decimally the number system from 0 to 255 = 256 bits.
Computers work however in in hexadecimals the number system.
There these 256 bits designated above are addressed from 00 to FF = 256 bits.
A byte cannot be thus under bits 0 or over bits 255.
Therefore 256 bits are the technically highest coding depth at all on computers
possible are.

Proof of the Krypto security !
Which would be, if one would try one of Krypto coded file unauthorized to decode.
A coded file with the length of 18033 indications has therefore according to computation, 256 bits highly 18033 indications = 6,184355814363201353319227173630ë+43427
file possibilities. Each file possibility has exactly 18033 indications byte.
Multiplied by the number of file possibilities then need results in the memory.
Those are then: 1,1152248840041161000440562362208e+43432 byte.
Those are then: 1,038634110245961789082788150963è+43423 Giga byte data quantity.
That is a number with 43424 places.
I can surely maintain as much memory place give it in the whole world not never.
And the head problem now is, which is now the correctly decoded file.
Who it does not know can only say there. That does not know so exactly !
They can code naturally naturally also still successively several times, even up to
the infinity.

My head hurts just trying to read that.

Posted on June 8, 2006 at 7:50 AM

Assassins Don't Do Movie Plots, Either

From “Assassination in the United States: An Operational Study of Recent Assassins, Attackers, and Near-Lethal Approachers,” (a 1999 article published in the Journal of Forensic Sciences):

Few attackers or near-lethal approachers possessed the cunning or the bravado of assassins in popular movies or novels. The reality of American assassination is much more mundane, more banal than assassinations depicted on the screen. Neither monsters nor martyrs, recent American assassins, attackers, and near-lethal approachers engaged in pre-incident patterns of thinking and behaviour.

The quote is from the last page. The whole thing is interesting reading.

Posted on June 7, 2006 at 1:15 PM • 14 Comments

Comments from a Fake ID Salesman

In case you thought a hard-to-forge national ID card would solve the fake ID problem, here’s what the criminals have to say:

Luis Hernandez just laughs as he sells fake driver’s licenses and Social Security cards to illegal immigrants near a park known for shady deals. The joke—to him and others in his line of work—is the government’s promise to put people like him out of business with a tamperproof national ID card.

“One way or another, we’ll always find a way,” said Hernandez, 35, a sidewalk operator who is part of a complex counterfeiting network around MacArthur Park, where authentic-looking IDs are available for as little as $150.

Posted on June 6, 2006 at 6:33 AM • 40 Comments

Lying to Government Agents

“How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents”

Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is “within the jurisdiction” of the ever expanding federal bureaucracy. Though the falsehood must be “material” this requirement is met if the statement has the “natural tendency to influence or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed.” United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is “within the jurisdiction” of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.

Posted on June 5, 2006 at 1:24 PM • 49 Comments

Interview with a Debit Card Scammer

Podcast:

We discuss credit card data centers getting hacked; why banks getting hacked doesn’t make mainstream media; reissuing bank cards; how much he makes cashing out bank cards; how banks cover money stolen from credit cards; why companies are not cracking down on credit card crimes; how to prevent credit card theft; ATM scams; being “legit” in the criminal world; how he gets cash out gigs; getting PINs and encoding blank credit cards; how much money he can pull in a day; e-gold; his chances of getting caught; the best day to hit the ATMs; encrypting ICQ messages.

Posted on June 5, 2006 at 6:23 AM • 46 Comments

Aligning Interest with Capability

Have you ever been to a retail store and seen this sign on the register: “Your purchase free if you don’t get a receipt”? You almost certainly didn’t see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store. That sign is a security device, and a clever one at that. And it illustrates a very important rule about security: it works best when you align interests with capability.

If you’re a store owner, one of your security worries is employee theft. Your employees handle cash all day, and dishonest ones will pocket some of it for themselves. The history of the cash register is mostly a history of preventing this kind of theft. Early cash registers were just boxes with a bell attached. The bell rang when an employee opened the box, alerting the store owner—who was presumably elsewhere in the store—that an employee was handling money.

The register tape was an important development in security against employee theft. Every transaction is recorded in write-only media, in such a way that it’s impossible to insert or delete transactions. It’s an audit trail. Using that audit trail, the store owner can count the cash in the drawer and compare the amount with what the register tape says. Any discrepancies can be docked from the employee’s paycheck.
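
The reconciliation is simple enough to sketch in a few lines of Python (my illustration, not part of the original essay): the tape is append-only, and at closing time the owner compares the drawer against the tape's total.

```python
class RegisterTape:
    """Append-only audit trail: entries can be recorded, never edited or removed."""

    def __init__(self):
        self._entries = []

    def record(self, amount):
        # Transactions only accumulate; there is no delete or modify operation.
        self._entries.append(amount)

    def expected_total(self):
        return sum(self._entries)


def reconcile(tape, cash_in_drawer):
    """Return the shortfall: what the tape says minus what the drawer holds."""
    return tape.expected_total() - cash_in_drawer


tape = RegisterTape()
tape.record(5.00)
tape.record(12.50)

# Drawer holds only $15.00, so $2.50 is unaccounted for.
print(reconcile(tape, 15.00))
```

Note what the mechanism does and doesn't catch: it flags cash missing from recorded transactions, but a sale that never reaches the tape is invisible to it, which is exactly the gap the receipt sign closes.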

If you’re a dishonest employee, you have to keep transactions off the register. If someone hands you money for an item and walks out, you can pocket that money without anyone being the wiser. And, in fact, that’s how employees steal cash in retail stores.

What can the store owner do? He can stand there and watch the employee, of course. But that’s not very efficient; the whole point of having employees is so that the store owner can do other things. The customer is standing there anyway, but the customer doesn’t care one way or another about a receipt.

So here’s what the employer does: he hires the customer. By putting up a sign saying “Your purchase free if you don’t get a receipt,” the employer is getting the customer to guard the employee. The customer makes sure the employee gives him a receipt, and employee theft is reduced accordingly.

There is a general rule in security to align interest with capability. The customer has the capability of watching the employee; the sign gives him the interest.

In Beyond Fear I wrote about ATM fraud; you can see the same mechanism at work:

“When ATM cardholders in the US complained about phantom withdrawals from their accounts, the courts generally held that the banks had to prove fraud. Hence, the banks’ agenda was to improve security and keep fraud low, because they paid the costs of any fraud. In the UK, the reverse was true: The courts generally sided with the banks and assumed that any attempts to repudiate withdrawals were cardholder fraud, and the cardholder had to prove otherwise. This caused the banks to have the opposite agenda; they didn’t care about improving security, because they were content to blame the problems on the customers and send them to jail for complaining. The result was that in the US, the banks improved ATM security to forestall additional losses—most of the fraud actually was not the cardholder’s fault—while in the UK, the banks did nothing.”

The banks had the capability to improve security. In the US, they also had the interest. But in the UK, only the customer had the interest. It wasn’t until the UK courts reversed themselves and aligned interest with capability that ATM security improved.

Computer security is no different. For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don’t have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They’ll align interest with capability, and they’ll improve software security.

One last story… In Italy, tax fraud used to be a national hobby. (It may still be; I don’t know.) The government was tired of retail stores not reporting sales and paying taxes, so it passed a law regulating the customers. Any customer who had just purchased an item and was stopped within a certain distance of the retail store had to produce a receipt or be fined. Just as in the “Your purchase free if you don’t get a receipt” story, the law turned the customers into tax inspectors. They demanded receipts from merchants, which in turn forced the merchants to create a paper audit trail for the purchase and pay the required tax.

This was a great idea, but it didn’t work very well. Customers, especially tourists, didn’t like to be stopped by police. People started demanding that the police prove they just purchased the item. Threatening people with fines if they didn’t guard merchants wasn’t as effective an enticement as offering people a reward if they didn’t get a receipt.

Interest must be aligned with capability, but you need to be careful how you generate interest.

This essay originally appeared on Wired.com.

Posted on June 1, 2006 at 6:27 AM • 54 Comments
