Blog: May 2006 Archives

UK Report on July 7th Terrorist Bombings

About the Intelligence and Security Committee:

Parliamentary oversight of SIS, GCHQ and the Security Service is provided by the Intelligence and Security Committee (ISC), established by the Intelligence Services Act 1994. The Committee examines the expenditure, administration and policy of the three Agencies. It operates within the ‘ring of secrecy’ and has wide access to the range of Agency activities and to highly classified information. Its cross-party membership of nine from both Houses is appointed by the Prime Minister after consultation with the Leader of the Opposition. The Committee is required to report annually to the Prime Minister on its work. These reports, after any deletions of sensitive material, are placed before Parliament by the Prime Minister. The Committee also provides ad hoc reports to the Prime Minister from time to time. The Chairman of the Intelligence and Security Committee is the Right Honourable Paul Murphy. The Committee is supported by a Clerk and secretariat in the Cabinet Office and can employ an investigator to pursue specific matters in greater detail.

They have released the “Intelligence and Security Committee Report into the London Terrorist Attacks on 7 July 2005,” and the UK government has issued a response.

Posted on May 31, 2006 at 11:19 AM • 9 Comments

Data Mining Software from IBM

In the long term, corporate data mining efforts are more of a privacy risk than government data mining efforts. And here’s an off-the-shelf product from IBM:

IBM Entity Analytic Solutions (EAS) is unique identity disambiguation software that provides public sector organizations or commercial enterprises with the ability to recognize and mitigate the incidence of fraud, threat and risk. This IBM EAS offering provides insight on demand, and in context, on “who is who,” “who knows who,” and “anonymously.”

This industry-leading, patented technology enables enterprise-wide identity insight, full attribution and self-correction in real time, and scales to process hundreds of millions of entities—all while accumulating context about those identities. It is the only software in the market that provides in-context information regarding non-obvious and obvious relationships that may exist between identities and can do it anonymously to enhance privacy of information.

For most businesses and government agencies, it is important to figure out when a person is using more than one identity package (that is, name, address, phone number, social insurance number and other such personal attributes) intentionally or unintentionally. Identity resolution software can help determine when two or more different-looking identity packages are describing the same person, even if the data is inconsistent. For example, by comparing names, addresses, phone numbers, social insurance numbers and other personal information across different records, this software might reveal that three customers calling themselves Tom R., Thomas Rogers, and T. Rogers are really just the same person.

It may also be useful for organizations to know with whom such a person associates. Relationship resolution software can process resolved identity data to find out whether people have worked for some of the same companies, for example. This would be useful to an organization that tracks down terrorists, but it can also help businesses such as banks, for example, to see whether the Hope Smith who just applied for a loan is related to Rock Smith, the account holder with a sterling credit rating.
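The general technique is simple enough to sketch. Here is a toy illustration of the kind of matching this sort of identity-resolution software performs; it is not IBM's EAS, and the records, fields, and matching rule below are all invented for the example.

```python
# Toy illustration of identity resolution: not IBM EAS, just the general idea.
# All records, fields, and the matching rule are invented for this example.
from itertools import combinations

records = [
    {"id": 1, "name": "Tom R.",        "phone": "555-0101", "ssn": None},
    {"id": 2, "name": "Thomas Rogers", "phone": "555-0101", "ssn": "123-45-6789"},
    {"id": 3, "name": "T. Rogers",     "phone": None,       "ssn": "123-45-6789"},
]

def same_person(a, b):
    """Guess that two records describe one person if a strong identifier matches
    and the surnames are compatible (one is a prefix or initial of the other)."""
    strong = any(a[k] and a[k] == b[k] for k in ("phone", "ssn"))
    last_a = a["name"].split()[-1].rstrip(".")
    last_b = b["name"].split()[-1].rstrip(".")
    names_ok = last_a.startswith(last_b) or last_b.startswith(last_a)
    return strong and names_ok

# Union-find style grouping: every pair judged to match is merged into one identity.
parent = {r["id"]: r["id"] for r in records}

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for a, b in combinations(records, 2):
    if same_person(a, b):
        parent[find(a["id"])] = find(b["id"])

groups = {}
for r in records:
    groups.setdefault(find(r["id"]), []).append(r["name"])
print(groups)  # all three aliases end up in a single group
```

Real products add fuzzy name matching, address normalization, and confidence scoring, but the core operation is the same: link records that share enough identifying attributes, then accumulate everything known about the linked identity.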

Posted on May 31, 2006 at 6:52 AM • 31 Comments

Solzhenitsyn Quote on Data and Privacy

As every man goes through life he fills in a number of forms for the record, each containing a number of questions… There are thus hundreds of little threads radiating from every man, millions of threads in all. If these threads were suddenly to become visible, the whole sky would look like a spider’s web, and if they materialized as rubber bands, buses, trams and even people would all lose the ability to move, and the wind would be unable to carry torn-up newspapers or autumn leaves along the streets of the city. They are not visible, they are not material, but every man is constantly aware of their existence…. Each man, permanently aware of his own invisible threads, naturally develops a respect for the people who manipulate the threads.

—Alexander Solzhenitsyn, Cancer Ward, 1968.

Posted on May 30, 2006 at 10:55 AM • 15 Comments

Aircraft Locator a "Terrorist's Dream"

The movie plots keep coming and coming. Here’s my nomination for dumb movie plot of this week:

Skies ‘now terrorist’s dream’

Australia’s proposed new aviation tracking system would make it easier for terrorists to locate aircraft, aviation campaigner Dick Smith said today.

Mr Smith said a plan by Airservices Australia to replace radar tracking of planes with the Automatic Dependent Surveillance-Broadcast (ADS-B) system would allow terrorists to track every aircraft in the sky.

“Government policy using conventional radar makes it almost impossible for a terrorist or a criminal to locate the position and identity of an aircraft,” Mr Smith said.

“With ADS-B it’s the opposite because all you need to track every aircraft is a small, non-directional aerial, worth $5.”

Under the present system, a terrorist can locate the position of an aircraft by looking up. And if a terrorist is smart enough to perform this intelligence-gathering exercise near an airport, he can locate the position of aircraft that are low to the ground, and easier to shoot at with missiles. Why are we worrying about telling terrorists where all the high-altitude hard-to-hit planes are?

Now I can invent a movie plot that has the terrorists needing to shoot down a particular plane because this or that famous personage is on it, but that’s a bit much.

Posted on May 29, 2006 at 12:00 PM • 52 Comments

A Robotic Bill of Rights

Asimov’s Three Laws of Robotics are a security device, protecting humans from robots:

1) A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

In an interesting blog post, Greg London wonders if we also need explicit limitations on those laws: a Robotic Bill of Rights. Here are his suggestions:

First amendment: A robot will act as an agent representing its owner’s best interests.

Second amendment: A robot will not hide the execution of any order from its owner.

Third amendment: A robot will not perform any order that would be against its owner’s standing orders.

Fourth amendment: The robot’s standing orders can only be overridden by the robot’s owner.

Fifth amendment: A robot’s execution of any of its orders can be halted by the robot’s owner.

Sixth amendment: Any standing orders in a robot can be overridden by the robot’s owner.

Seventh amendment: A robot will not perform any order issued by anyone other than its owner without explicitly informing its owner of the order, the effects the order would have, and who issued the order, and then getting the owner’s permission to execute the order.

I haven’t thought enough about this to know if these seven amendments are both necessary and sufficient, but it’s a fascinating topic of discussion.

Posted on May 26, 2006 at 12:17 PM • 69 Comments

Dangers of Reporting a Computer Vulnerability

This essay makes the case that there is no way to safely report a computer vulnerability.

The first reason is that whenever you do something “unnecessary,” such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes it very uninteresting to report any problems.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.

Interesting essay, and interesting comments. And here’s an article on the essay.

Remember, full disclosure is the best tool we have to improve security. It’s an old argument, and I wrote about it way back in 2001. If people can’t report security vulnerabilities, then vendors won’t fix them.

EDITED TO ADD (5/26): Robert Lemos on “Ethics and the Eric McCarty Case.”

Posted on May 26, 2006 at 7:35 AM • 41 Comments

Cheating on Tests

“How to Cheat Good.”

Edit > Paste Special > Unformatted Text

This is my Number 1 piece of advice, even if it is numbered eight. When you copy things from the web into Word, ignoring #3 above, don’t just “Edit > Paste” it into your document. When I am reading a document in black, Times New Roman, 12pt, and it suddenly changes to blue, Helvetica, 10pt (yes, really), I’m going to guess that something odd may be going on. This seems to happen in about 1% of student work turned in, and periodically makes me feel like becoming a hermit.

Posted on May 25, 2006 at 12:26 PM • 41 Comments

Movie Clip Mistaken for Al Qaeda Video

Oops:

Reuters quoted a Pentagon official, Dan Devlin, as saying, “What we have seen is that any video game that comes out… (al Qaeda will) modify it and change the game for their needs.”

The influential committee, chaired by Rep. Peter Hoekstra (R-MI), watched footage of animated combat in which characters depicted as Islamic insurgents killed U.S. troops in battle. The video began with the voice of a male narrator saying, “I was just a boy when the infidels came to my village in Blackhawk helicopters…”

Several GP readers immediately noticed that the voice-over was actually lifted from Team America: World Police, an outrageous 2004 satirical film produced by the creators of the popular South Park comedy series. At about the same time, gamers involved in the online Battlefield 2 community were pointing out the video footage shown to Congress was not a mod of BF2 at all, but standard game footage from EA’s Special Forces BF2 add-on module, a retail product widely available in the United States and elsewhere.

Posted on May 24, 2006 at 2:14 PM • 20 Comments

Counterfeit Electronics as a Terrorist Tool

Winning my award for dumb movie-plot threat of the week, here’s someone who thinks that counterfeit electronics are a terrorist tool:

Counterfeit Electronics as Weapons of Mass Disruption?

Some customers may consider knockoff clothing and watches to be good values, but counterfeit electronics can be devastating. What would happen, then, if some criminal element bent on wreaking havoc and inducing public panic were to intentionally introduce such a bogus product into the electronics supply chain—malfunctioning printed-circuit boards in a critical air-traffic-control system, say, or faulty parts into automobile braking systems? Even the suggestion that such an act had occurred might set off a wave of recalls and might ground suspect systems.

Gadzooks.

EDITED TO ADD (6/2): Here’s another article:

“Many attacks of this kind would have two components. One would alter the process control system to produce a defective product. The other would alter the quality control system so that the defect wouldn’t easily be detected,” Borg says. “Imagine, say, a life-saving drug being produced and distributed with the wrong level of active ingredients. This could gradually result in large numbers of deaths or disabilities. Yet it might take months before someone figured out what was going on.” The result, he says, would be panic, people afraid to visit hospitals and health services facing huge lawsuits.

Deadly scenarios could occur in industry, too. Online outlaws might change key specifications at a car factory, Borg says, causing a car to “burst into flames after it had been driven for a certain number of weeks”. Apart from people being injured or killed, the car maker would collapse. “People would stop buying cars.” A few such attacks, run simultaneously, would send economies crashing. Populations would be in turmoil. At the click of a mouse, the terrorists would have won.

Posted on May 24, 2006 at 11:57 AM • 39 Comments

The Problems with Data Mining

Great op-ed in The New York Times on why the NSA’s data mining efforts won’t work, by Jonathan Farley, math professor at Harvard.

The simplest reason is that we’re all connected. Not in the Haight-Ashbury/Timothy Leary/late-period Beatles kind of way, but in the sense of the Kevin Bacon game. The sociologist Stanley Milgram made this clear in the 1960’s when he took pairs of people unknown to each other, separated by a continent, and asked one of the pair to send a package to the other—but only by passing the package to a person he knew, who could then send the package only to someone he knew, and so on. On average, it took only six mailings—the famous six degrees of separation—for the package to reach its intended destination.

Looked at this way, President Bush is only a few steps away from Osama bin Laden (in the 1970’s he ran a company partly financed by the American representative for one of the Qaeda leader’s brothers). And terrorist hermits like the Unabomber are connected to only a very few people. So much for finding the guilty by association.
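The small-world effect is easy to demonstrate for yourself. The sketch below is my own illustration, not anything from the op-ed: it builds an arbitrary sparse random graph (model and parameters are invented for the demo) and counts how many nodes sit within a few hops of one starting node. In graphs like this, a handful of hops reaches a large share of the population, which is why "within a few links of a suspect" describes nearly everyone.

```python
# Demonstration that short paths are the norm in a sparse random graph.
# Graph model and parameters are arbitrary choices made for this example.
import random
from collections import deque

random.seed(1)
n, avg_degree = 10_000, 10
adj = [set() for _ in range(n)]
for _ in range(n * avg_degree // 2):      # sprinkle random edges
    a, b = random.randrange(n), random.randrange(n)
    if a != b:
        adj[a].add(b)
        adj[b].add(a)

def hops_from(source):
    """Breadth-first search: hop count from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

dist = hops_from(0)
within4 = sum(1 for d in dist.values() if d <= 4)
print(f"reachable from node 0: {len(dist)} of {n}")
print(f"within 4 hops of node 0: {within4}")
print(f"farthest node is {max(dist.values())} hops away")
```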

A second problem with the spy agency’s apparent methodology lies in the way terrorist groups operate and what scientists call the “strength of weak ties.” As the military scientist Robert Spulak has described it to me, you might not see your college roommate for 10 years, but if he were to call you up and ask to stay in your apartment, you’d let him. This is the principle under which sleeper cells operate: there is no communication for years. Thus for the most dangerous threats, the links between nodes that the agency is looking for simply might not exist.

(This, by him, is also worth reading.)

Posted on May 24, 2006 at 7:44 AM • 57 Comments

El Al Doesn't Trust the TSA

They want to do security themselves at Newark Airport, as they already do at four other U.S. airports.

No other airline has such an arrangement with U.S. officials, authorities acknowledged. At the four other airports, El Al has installed its own security software at bomb-detection machines, which authorities said is more sensitive than that used by American carriers.

Posted on May 23, 2006 at 3:30 PM • 45 Comments

Spammers Win One

Blue Security was an Israeli company that fought spam with spam:

Eran Reshef had an idea in the battle against spam e-mail that seemed to be working: he fought spam with spam. Today, he’ll give up the fight.

Reshef’s Silicon Valley company, Blue Security Inc., simply asked the spammers to stop sending junk e-mail to his clients. But because those sort of requests tend to be ignored, Blue Security took them to a new level: it bombarded the spammers with requests from all 522,000 of its customers at the same time.

That led to a flood of Internet traffic so heavy that it disrupted the spammers’ ability to send e-mails to other victims—a crippling effect that caused a handful of known spammers to comply with the requests.

Then, earlier this month, a Russia-based spammer counterattacked, Reshef said. Using tens of thousands of hijacked computers, the spammer flooded Blue Security with so much Internet traffic that it blocked legitimate visitors from going to Bluesecurity.com, as well as to other Web sites. The spammer also sent another message: Cease operations or Blue Security customers will soon find themselves targeted with virus-filled attacks.

Last week Blue Security gave up:

Wednesday, Blue Security said it had to give up because it couldn’t sustain the fight against spammers. “Several leading spammers viewed [us] as a strategic threat to their spam business,” Eran Reshef, Blue Security chief executive, wrote in the message posted to the company’s site.

“After recovering from the attack, we determined that once we reactivated the Blue Community, spammers would resume their attacks. We cannot take the responsibility for an ever-escalating cyber war through our continued operations.

“As much as it saddens us, we believe this is the responsible thing to do,” said Reshef, who did not respond to an e-mail requesting additional comment. Later Wednesday, a spokesman said that the company would not be making any additional statements beyond the message on its site.

Another news article. And Marcus Ranum on Blue Security’s idea.

Posted on May 23, 2006 at 12:58 PM • 40 Comments

Smart Profiling from the DHS

About time:

Here’s how it works: Select TSA employees will be trained to identify suspicious individuals who raise red flags by exhibiting unusual or anxious behavior, which can be as simple as changes in mannerisms, excessive sweating on a cool day, or changes in the pitch of a person’s voice. Racial or ethnic factors are not a criterion for singling out people, TSA officials say. Those who are identified as suspicious will be examined more thoroughly; for some, the agency will bring in local police to conduct face-to-face interviews and perhaps run the person’s name against national criminal databases and determine whether any threat exists. If such inquiries turn up other issues, such as ties to countries with terrorist connections, police officers can pursue the questioning or alert Federal counterterrorism agents. And of course the full retinue of baggage x-rays, magnetometers and other checks for weapons will continue.

Posted on May 23, 2006 at 6:20 AM • 54 Comments

Diebold Doesn't Get It

This quote sums up nicely why Diebold should not be trusted to secure election machines:

David Bear, a spokesman for Diebold Election Systems, said the potential risk existed because the company’s technicians had intentionally built the machines in such a way that election officials would be able to update their systems in years ahead.

“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” he said. “I don’t believe these evil elections people exist.”

If you can’t get the threat model right, you can’t hope to secure the system.

Posted on May 22, 2006 at 3:22 PM • 54 Comments

Coast Guard Solicits Hollywood to Help with Movie Plot Threats

Really:

Trying to avoid a failure of imagination in its uncharted new role, the agency has even called in screenwriters from Hollywood to help sketch terrorism situations.

“The biggest change is that the Coast Guard has gone from being an organization that ran when the bell went off to being a cop on the beat at all times,” said Capt. Peter V. Neffenger, who recently gave up command of the port here for a position in Washington and who consulted with the screenwriters.

Anyone who’s watched Hollywood’s output in recent years knows that screenwriters aren’t the most creative bunch of people on the planet. And anyway, they can have all of these ideas for free.

Posted on May 22, 2006 at 7:49 AM • 26 Comments

Friday Squid Blogging: 1866 Parisienne Squid Fad

Started by Victor Hugo:

Hugo turned away from social/political issues in his next novel, Les Travailleurs de la Mer (Toilers of the Sea), published in 1866. Nonetheless, the book was well received, perhaps due to the previous success of Les Misérables. Dedicated to the channel island of Guernsey where he spent 15 years of exile, Hugo’s depiction of Man’s battle with the sea and the horrible creatures lurking beneath its depths spawned an unusual fad in Paris: Squids. From squid dishes and exhibitions, to squid hats and parties, Parisiennes became fascinated by these unusual sea creatures, which at the time were still considered by many to be mythical.

Posted on May 19, 2006 at 4:09 PM • 6 Comments

The Value of Privacy

Last week, revelation of yet another NSA surveillance effort against the American people has rekindled the privacy debate. Those in favor of these programs have trotted out the same rhetorical question we hear every time privacy advocates oppose ID checks, video cameras, massive databases, data mining, and other wholesale surveillance measures: “If you aren’t doing anything wrong, what do you have to hide?”

Some clever answers: “If I’m not doing anything wrong, then you have no cause to watch me.” “Because the government gets to define what’s wrong, and they keep changing the definition.” “Because you might do something wrong with my information.” My problem with quips like these—as right as they are—is that they accept the premise that privacy is about hiding a wrong. It’s not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect.

Two proverbs say it best: Quis custodiet custodes ipsos? (“Who watches the watchers?”) and “Absolute power corrupts absolutely.”

Cardinal Richelieu understood the value of surveillance when he famously said, “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” Watch someone long enough, and you’ll find something to arrest—or just blackmail—with. Privacy is important because without it, surveillance information will be abused: to peep, to sell to marketers and to spy on political enemies—whoever they happen to be at the time.

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.

A future in which privacy would face constant assault was so alien to the framers of the Constitution that it never occurred to them to call out privacy as an explicit right. Privacy was inherent to the nobility of their being and their cause. Of course being watched in your own home was unreasonable. Watching at all was an act so unseemly as to be inconceivable among gentlemen in their day. You watched convicted criminals, not free citizens. You ruled your own home. It’s intrinsic to the concept of liberty.

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

How many of us have paused during conversation in the past four-and-a-half years, suddenly aware that we might be eavesdropped on? Probably it was a phone conversation, although maybe it was an e-mail or instant-message exchange or a conversation in a public place. Maybe the topic was terrorism, or politics, or Islam. We stop suddenly, momentarily afraid that our words might be taken out of context, then we laugh at our paranoia and go on. But our demeanor has changed, and our words are subtly altered.

This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

A version of this essay originally appeared on Wired.com.

EDITED TO ADD (5/24): Daniel Solove comments.

Posted on May 19, 2006 at 12:00 PM • 111 Comments

U.S. Government Sensitive but Unclassified Information

New report from the GAO: “GAO-06-385 – The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information,” March 2006:

Federal agencies report using 56 different sensitive but unclassified designations (16 of which belong to one agency) to protect sensitive information—from law or drug enforcement information to controlled nuclear information—and agencies that account for a large percentage of the homeland security budget reported using most of these designations. There are no governmentwide policies or procedures that describe the basis on which agencies should use most of these sensitive but unclassified designations, explain what the different designations mean across agencies, or ensure that they will be used consistently from one agency to another. In this absence, each agency determines what designations to apply to the sensitive but unclassified information it develops or shares. For example, one agency uses the Protected Critical Infrastructure Information designation, which has statutorily prescribed criteria for applying, sharing and protecting the information, whereas 13 agencies designate information For Official Use Only, which does not have similarly prescribed criteria. Sometimes agencies used different labels and handling requirements for similar information and, conversely, similar labels and requirements for very different kinds of information. More than half of the agencies reported encountering challenges in sharing such information. For example, DHS said that sensitive but unclassified information disseminated to its state and local partners had, on occasion, been posted to public Internet sites or otherwise compromised, potentially revealing possible vulnerabilities to business competitors.

Here’s the list:

Table 2: Sensitive but Unclassified Designations in Use at Selected Federal Agencies

Designation / Agencies using designation

1 Applied Technology *Department of Energy (DOE)
2 Attorney-Client Privilege Department of Commerce (Commerce), *DOE
3 Business Confidential *DOE
4 Budgetary Information Environmental Protection Agency (EPA)
5 Census Confidential Commerce
6 Confidential Information Protection and Statistical Efficiency Act Information (CIPSEA) Social Security Administration (SSA)
7 Computer Security Act Sensitive Information (CSASI) Department of Health and Human Services (HHS)
8 Confidential Department of Labor
9 Confidential Business Information (CBI) Commerce, EPA
10 Contractor Access Restricted Information (CARI) HHS
11 Copyrighted Information *DOE
12 Critical Energy Infrastructure Information (CEII) Federal Energy Regulatory Commission (FERC)
13 Critical Infrastructure Information Office of Personnel Management (OPM)
14 DEA Sensitive Department of Justice (DOJ)
15 DOD Unclassified Controlled Nuclear Information Department of Defense (DOD)
16 Draft EPA
17 Export Controlled Information *DOE
18 For Official Use Only (FOUO) Commerce, DOD, Department of Education, EPA, General Services Administration, HHS, DHS, Department of Housing and Urban Development (HUD), DOJ, Labor, OPM, SSA, and the Department of Transportation (DOT)
19 For Official Use Only - Law Enforcement Sensitive DOD
20 Freedom of Information Act (FOIA) EPA
21 Government Confidential Commercial Information *DOE
22 High-Temperature Superconductivity Pilot Center Information *DOE
23 In Confidence *DOE
24 Intellectual Property *DOE
25 Law Enforcement Sensitive Commerce, EPA, DHS, DOJ, HHS, Labor, OPM
26 Law Enforcement Sensitive/Sensitive DOJ
27 Limited Distribution Information DOD
28 Limited Official Use (LOU) DHS, DOJ, Department of Treasury
29 Medical records EPA
30 Non-Public Information FERC
31 Not Available National Technical Information Service Commerce
32 Official Use Only (OUO) DOE, SSA, Treasury
33 Operations Security Protected Information (OSPI) HHS
34 Patent Sensitive Information *DOE
35 Predecisional Draft *DOE
36 Privacy Act Information *DOE, EPA
37 Privacy Act Protected Information (PAPI) HHS
38 Proprietary Information *DOE, DOJ
39 Protected Battery Information *DOE
40 Protected Critical Infrastructure Information (PCII) DHS
41 Safeguards Information Nuclear Regulatory Commission (NRC)
42 Select Agent Sensitive Information (SASI) HHS
43 Sensitive But Unclassified (SBU) Commerce, HHS, NASA, National Science Foundation (NSF), Department of State, U.S. Agency for International Development (USAID)
44 Sensitive Drinking Water Related Information (SDWRI) EPA
45 Sensitive Information DOD, U.S. Postal Service (USPS)
46 Sensitive Instruction SSA
47 Sensitive Internal Use *DOE
48 Sensitive Unclassified Non-Safeguards Information NRC
49 Sensitive Nuclear Technology *DOE
50 Sensitive Security Information (SSI) DHS, DOT, U.S. Department of Agriculture (USDA)
51 Sensitive Water Vulnerability Assessment Information EPA
52 Small Business Innovative Research Information *DOE
53 Technical Information DOD
54 Trade Sensitive Information Commerce
55 Unclassified Controlled Nuclear Information (UCNI) DOE
56 Unclassified National Security-Related *DOE

I’ve already written about SSI (Sensitive Security Information).

Posted on May 19, 2006 at 7:52 AM • 20 Comments

Economics and Information Security

I would like to bring your attention to two conferences. The Workshop on Economics and Information Security is now in its fifth year. The next one will be held on June 26-28 in Cambridge (England, not Massachusetts). The paper selections have been announced, and it looks like a great conference.

The Workshop on the Economics of Securing the Information Infrastructure will be held on October 23-24 in Washington, DC. This is a new workshop, and papers are still being solicited.

WEIS is currently my favorite security conference. I think that economics has a lot to teach computer security, and it is very interesting to get economists, lawyers, and computer security experts in the same room talking about issues.

I am on the program committee for both WEIS and WESII.

Posted on May 16, 2006 at 3:10 PM • 8 Comments

Movie-Plot Threat Contest: Second Status Report

On April 1, I announced my (possibly First) Movie-Plot Threat Contest (update here).

The submission deadline was the end of April, and I had hoped to have picked winners by now. But there are just too many entrants to wade through.

I’ll announce winners by the next Crypto-Gram; that’s June 15.

Promise.

(If any of you want to suggest a winner, please list your choices here.)

Posted on May 15, 2006 at 1:06 PM • 33 Comments

Reconceptualizing National Intelligence

From the Federation of American Scientists:

A new study published by the CIA Center for the Study of Intelligence calls for a fundamental reconceptualization of the process of intelligence analysis in order to overcome the “pathologies” that have rendered it increasingly dysfunctional.

“Curing Analytic Pathologies” (pdf) by Jeffrey R. Cooper has been available up to now in limited circulation in hard copy only. Like several other recent studies critical of U.S. intelligence, it was withheld from the CIA web site. It has now been published on the Federation of American Scientists web site.

It’s an interesting report. Unfortunately, the PDF on the website is scanned, so it’s hard to copy and paste sections into this blog.

Posted on May 15, 2006 at 7:21 AM • 16 Comments

"The TSA's Constitution-Free Zone"

Interesting first-person account of someone on the U.S. Terrorist Watch List:

To sum up, if you run afoul of the nation’s “national security” apparatus, you’re completely on your own. There are no firm rules, no case law, no real appeals processes, no normal array of Constitutional rights, no lawyers to help, and generally none of the other things that we as American citizens expect to be able to fall back on when we’ve been (justly or unjustly) identified by the government as wrong-doers.

Posted on May 12, 2006 at 1:38 PM • 44 Comments

Thief Disguises Himself as Security Guard

Another in our series on the security problems of trusting people in uniform:

A thief disguised as a security guard Tuesday duped the unsuspecting staff of a top Italian art gallery into giving him more than 200,000 euros ($253,100), local media reported.

The thief showed up Tuesday morning at the Pitti Palace, a grandiose renaissance construction in central Florence and one of Italy’s best known museums, wearing the same uniform used by employees of the security firm which every day collects the institution’s takings.

After the cashier staff gave him three bags full of money, he signed a receipt and calmly walked out.

Posted on May 12, 2006 at 6:10 AM • 26 Comments

Major Vulnerability Found in Diebold Election Machines

This is a big deal:

Elections officials in several states are scrambling to understand and limit the risk from a “dangerous” security hole found in Diebold Election Systems Inc.’s ATM-like touch-screen voting machines.

The hole is considered more worrisome than most security problems discovered on modern voting machines, such as weak encryption, easily pickable locks and use of the same, weak password nationwide.

Armed with a little basic knowledge of Diebold voting systems and a standard component available at any computer store, someone with a minute or two of access to a Diebold touch screen could load virtually any software into the machine and disable it, redistribute votes or alter its performance in myriad ways.

“This one is worse than any of the others I’ve seen. It’s more fundamental,” said Douglas Jones, a University of Iowa computer scientist and veteran voting-system examiner for the state of Iowa.

“In the other ones, we’ve been arguing about the security of the locks on the front door,” Jones said. “Now we find that there’s no back door. This is the kind of thing where if the states don’t get out in front of the hackers, there’s a real threat.”

This newspaper is withholding some details of the vulnerability at the request of several elections officials and scientists, partly because exploiting it is so simple and the tools for doing so are widely available.

[…]

Scientists said Diebold appeared to have opened the hole by making it as easy as possible to upgrade the software inside its machines. The result, said Iowa’s Jones, is a violation of federal voting system rules.

“All of us who have heard the technical details of this are really shocked. It defies reason that anyone who works with security would tolerate this design,” he said.

The immediate solution to this problem isn’t a patch. What that article refers to is election officials ensuring that they are running the “trusted” build of the software done at the federal labs and stored at the NSRL, just in case someone installed something bad in the meantime.

This article compares the security of electronic voting machines with the security of electronic slot machines. (My essay on the security of elections and voting machines.)

EDITED TO ADD (5/11): The redacted report is available.

Posted on May 11, 2006 at 1:08 PM • 93 Comments

Computer Problems at the NSA

Interesting:

Computers are integral to everything NSA does, yet it is not uncommon for the agency’s unstable computer system to freeze for hours, unlike the previous system, which had a backup mechanism that enabled analysts to continue their work, said Matthew Aid, a former NSA analyst and congressional intelligence staff member.

When the agency’s communications lines become overloaded, the Groundbreaker system has been known to deliver garbled intelligence reports, Aid said. Some analysts and managers have said their productivity is half of what it used to be because the new system requires them to perform many more steps to accomplish what a few keystrokes used to, he said. They also report being locked out of their computers without warning.

Similarly, agency linguists say the number of conversation segments they can translate in a day has dropped significantly under Groundbreaker, according to another former NSA employee.

Under Groundbreaker, employees get new computers every three years on a rotating schedule, so some analysts always have computers as much as three years older than their colleagues’, often with incompatible software, the former employee said.

As a result of compatibility problems, e-mail attachments can get lost in the system. An internal incident report, obtained by The Sun, states that when an employee inquired about what had happened to missing attachments, the Eagle Alliance administrator said only that “they must have fallen out.”

Posted on May 11, 2006 at 7:31 AM • 23 Comments

When "Off" Doesn't Mean Off

According to the specs of the Nintendo Wii (Nintendo’s new game machine), “Wii can communicate with the Internet even when the power is turned off.” Nintendo accentuates the positive: “This WiiConnect24 service delivers a new surprise or game update, even if users do not play with Wii,” while ignoring the possibility that Nintendo can deactivate a game if they choose to do so, or that someone else can deliver a different—not so wanted—surprise.

We all know that, but what’s interesting here is that Nintendo is changing the meaning of the word “off.” We are all conditioned to believe that “off” means off, and therefore safe. But in Nintendo’s case, “off” really means something like “on standby.” If users expect the Nintendo Wii to be truly off, they need to pull the power plug—assuming there isn’t a battery foiling that tactic. Maybe they need to pull both the power plug and the Ethernet cable. Unless they have a wireless network at home.

Maybe there is no way to turn the Nintendo Wii off.

There’s a serious security problem here, made worse by a bad user interface. “Off” should mean off.

Posted on May 10, 2006 at 6:45 AM • 103 Comments

Security Risks of Airline Passenger Data

Reporter finds an old British Airways boarding pass, and proceeds to use it to find everything else about the person:

We logged on to the BA website, bought a ticket in Broer’s name and then, using the frequent flyer number on his boarding pass stub, without typing in a password, were given full access to all his personal details – including his passport number, the date it expired, his nationality (he is Dutch, living in the UK) and his date of birth. The system even allowed us to change the information.

Using this information and surfing publicly available databases, we were able – within 15 minutes – to find out where Broer lived, who lived there with him, where he worked, which universities he had attended and even how much his house was worth when he bought it two years ago. (This was particularly easy given his unusual name, but it would have been possible even if his name had been John Smith. We now had his date of birth and passport number, so we would have known exactly which John Smith.)

Notice the economic pressures:

“The problem here is that a commercial organisation is being given the task of collecting data on behalf of a foreign government, for which it gets no financial reward, and which offers no business benefit in return,” says Laurie. “Naturally, in such a case, they will seek to minimise their costs, which they do by handing the problem off to the passengers themselves. This has the neat side-effect of also handing off liability for data errors.”

Posted on May 9, 2006 at 1:17 PM • 27 Comments

The Ultimate Terrorist Threat: Flying Robot Drones

This one really pegs the movie-plot threat hype-meter:

The technology for remote-controlled light aircraft is now highly advanced, widely available—and, experts say, virtually unstoppable.

Models with a wingspan of five metres (16 feet), capable of carrying up to 50 kilograms (110 pounds), remain undetectable by radar.

And thanks to satellite positioning systems, they can now be programmed to hit targets some distance away with just a few metres (yards) short of pinpoint accuracy.

Security services the world over have been considering the problem for several years, but no one has yet come up with a solution.

[…]

Armed militant groups have already tried to use unmanned aircraft, according to a number of studies by institutions including the Center for Nonproliferation Studies in Monterey, California, and the Center for Arms Control, Energy and Environmental Studies in Moscow.

In August 2002, for example, the Colombian military reported finding nine small remote-controlled planes at a base it had taken from the Revolutionary Armed Forces of Colombia (FARC).

On April 11, 2005 the Lebanese Shiite militia group, Hezbollah, flew a pilotless drone over Israeli territory, on what it called a “surveillance” mission. The Israeli military confirmed this and responded by flying warplanes over southern Lebanon.

Remote-control planes are not hard to get hold of, according to Jean-Christian Delessert, who runs a specialist model airplane shop near Geneva.

“Putting together a large-scale model is not difficult—all you need is a few materials and a decent electronics technician,” says Delessert.

In his view, “if terrorists get hold of that, it will be impossible to do anything about it. We did some tests with a friend who works at a military radar base: they never detected us… if the radar picks anything up, it thinks it is a flock of birds and automatically wipes it.”

Posted on May 9, 2006 at 7:36 AM • 65 Comments

Shell Suspends Chip & Pin in the UK

According to the BBC:

Petrol giant Shell has suspended chip-and-pin payments in 600 UK petrol stations after more than £1m was siphoned out of customers’ accounts.

This is just sad:

“These Pin pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed,” said Apacs spokeswoman Sandra Quinn.

She said Apacs was confident the problem was specific to Shell and not a systemic issue.

A Shell spokeswoman said: “Shell’s chip-and-pin solution is fully accredited and complies with all relevant industry standards.”

That spokesperson simply can’t conceive of the fact that those “relevant industry standards” were written by those trying to sell the technology, and might possibly not be enough to ensure security.

And this is just after APACS (that’s the Association of Payment Clearing Services, by the way) reported that chip-and-pin technology reduced fraud by 13%.

Good commentary here. See also this article. Here’s a chip-and-pin FAQ from February.

EDITED TO ADD (5/8): Arrests have been made. And details emerge:

The scam works by criminals implanting devices into chip and pin machines which can copy a bank card’s magnetic strip and record a person’s pin number.

The device cannot copy the chip, which means any fake card can only be used in machines where chip and pin is not implemented – often abroad.

This is a common attack, one that I talk about in Beyond Fear: falling back to a less secure system. The attackers made use of the fact that there is a less secure system that is running parallel to the chip-and-pin system. Clever.

Posted on May 8, 2006 at 12:41 PM • 42 Comments

The DHS Secretly Shares European Passenger Data in Violation of Agreement

From the ACLU:

In 2003, the United States and the European Union reached an agreement under which the EU would share Passenger Name Record (PNR) data with the U.S., despite the lack of privacy laws in the United States adequate to ensure Europeans’ privacy. In return, DHS agreed that the passenger data would not be used for any purpose other than preventing acts of terrorism or other serious crimes. It is now clear that DHS did not abide by that agreement.

Posted on May 8, 2006 at 6:34 AM • 32 Comments

People Trusting Uniforms

An improv group in New York dressed similarly to Best Buy employees and went into a store, secretly videotaping the results.

My favorite part:

Security guards and managers started talking to each other frantically on their walkie-talkies and headsets. “Thomas Crown Affair! Thomas Crown Affair!,” one employee shouted. They were worried that we were using our fake uniforms to stage some type of elaborate heist. “I want every available employee out on the floor RIGHT NOW!”

Since the people did not actually try to impersonate Best Buy employees, could they be charged with any crime?

Posted on May 4, 2006 at 1:39 PM • 71 Comments

Who Owns Your Computer?

When technology serves its owners, it is liberating. When it is designed to serve others, over the owner’s objection, it is oppressive. There’s a battle raging on your computer right now—one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It’s the battle to determine who owns your computer.

You own your computer, of course. You bought it. You paid for it. But how much control do you really have over what happens on your machine? Technically you might have bought the hardware and software, but you have less control over what it’s doing behind the scenes.

Using the hacker sense of the term, your computer is “owned” by other people.

It used to be that only malicious hackers were trying to own your computers. Whether through worms, viruses, Trojans or other means, they would try to install some kind of remote-control program onto your system. Then they’d use your computers to sniff passwords, make fraudulent bank transactions, send spam, initiate phishing attacks and so on. Estimates are that somewhere between hundreds of thousands and millions of computers are members of remotely controlled “bot” networks. Owned.

Now, things are not so simple. There are all sorts of interests vying for control of your computer. There are media companies that want to control what you can do with the music and videos they sell you. There are companies that use software as a conduit to collect marketing information, deliver advertising or do whatever it is their real owners require. And there are software companies that are trying to make money by pleasing not only their customers, but other companies they ally themselves with. All these companies want to own your computer.

Some examples:

  • Entertainment software: In October 2005, it emerged that Sony had distributed a rootkit with several music CDs—the same kind of software that crackers use to own people’s computers. This rootkit secretly installed itself when the music CD was played on a computer. Its purpose was to prevent people from doing things with the music that Sony didn’t approve of: It was a DRM system. If the exact same piece of software had been installed secretly by a hacker, this would have been an illegal act. But Sony believed that it had legitimate reasons for wanting to own its customers’ machines.
  • Antivirus: You might have expected your antivirus software to detect Sony’s rootkit. After all, that’s why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
  • Internet services: Hotmail allows you to blacklist certain e-mail addresses, so that mail from them automatically goes into your spam trap. Have you ever tried blocking all that incessant marketing e-mail from Microsoft? You can’t.
  • Application software: Internet Explorer users might have expected the program to incorporate easy-to-use cookie handling and pop-up blockers. After all, other browsers do, and users have found them useful in defending against Internet annoyances. But Microsoft isn’t just selling software to you; it sells Internet advertising as well. It isn’t in the company’s best interest to offer users features that would adversely affect its business partners.
  • Spyware: Spyware is nothing but someone else trying to own your computer. These programs eavesdrop on you and report back to their real owners—sometimes without your knowledge or consent—about your behavior.
  • Internet security: It recently came out that the firewall in Microsoft Vista will ship with half its protections turned off. Microsoft claims that large enterprise users demanded this default configuration, but that makes no sense. It’s far more likely that Microsoft just doesn’t want adware—and DRM spyware—blocked by default.
  • Update: Automatic update features are another way software companies try to own your computer. While they can be useful for improving security, they also require you to trust your software vendor not to disable your computer for nonpayment, breach of contract or other presumed infractions.

Adware, software-as-a-service and Google Desktop search are all examples of some other company trying to own your computer. And Trusted Computing will only make the problem worse.

There is an inherent insecurity to technologies that try to own people’s computers: They allow individuals other than the computers’ legitimate owners to enforce policy on those machines. These systems invite attackers to assume the role of the third party and turn a user’s device against him.

Remember the Sony story: The most insecure feature in that DRM system was a cloaking mechanism that gave the rootkit control over whether you could see it executing or spot its files on your hard disk. By taking ownership away from you, it reduced your security.

If left to grow, these external control systems will fundamentally change your relationship with your computer. They will make your computer much less useful by letting corporations limit what you can do with it. They will make your computer much less reliable because you will no longer have control of what is running on your machine, what it does, and how the various software components interact. At the extreme, they will transform your computer into a glorified boob tube.

You can fight back against this trend by only using software that respects your boundaries. Boycott companies that don’t honestly serve their customers, that don’t disclose their alliances, that treat users like marketing assets. Use open-source software—software created and owned by users, with no hidden agendas, no secret alliances and no back-room marketing deals.

Just because computers were a liberating force in the past doesn’t mean they will be in the future. There is enormous political and economic power behind the idea that you shouldn’t truly own your computer or your software, despite having paid for it.

This essay originally appeared on Wired.com.

EDITED TO ADD (5/5): Commentary. It seems that some of my examples were not very good. I’ll come up with other ones for the Crypto-Gram version.

Posted on May 4, 2006 at 7:13 AM • 99 Comments

Man Sues Compaq for False Advertising

Convicted felon Michael Crooker is suing Compaq (now HP) for false advertising. He bought a computer promised to be secure, but the FBI got his data anyway:

He bought it in September 2002, expressly because it had a feature called DriveLock, which freezes up the hard drive if you don’t have the proper password.

The computer’s manual claims that “if one were to lose his Master Password and his User Password, then the hard drive is useless and the data cannot be resurrected even by Compaq’s headquarters staff,” Crooker wrote in the suit.

Crooker has a copy of an ATF search warrant for files on the computer, which includes a handwritten notation: “Computer lock not able to be broken/disabled. Computer forwarded to FBI lab.” Crooker says he refused to give investigators the password, and was told the computer would be broken into “through a backdoor provided by Compaq,” which is now part of HP.

It’s unclear what was done with the laptop, but Crooker says a subsequent search warrant for his e-mail account, issued in January 2005, showed investigators had somehow gained access to his 40 gigabyte hard drive. The FBI had broken through DriveLock and accessed his e-mails (both deleted and not) as well as lists of websites he’d visited and other information. The only files they couldn’t read were ones he’d encrypted using Wexcrypt, a software program freely available on the Internet.

I think this is great. It’s about time that computer companies were held liable for their advertising claims.

But his lawsuit against HP may be a long shot. Crooker appears to face strong counterarguments to his claim that HP is guilty of breach of contract, especially if the FBI made the company provide a backdoor.

“If they had a warrant, then I don’t see how his case has any merit at all,” said Steven Certilman, a Stamford attorney who heads the Technology Law section of the Connecticut Bar Association. “Whatever means they used, if it’s covered by the warrant, it’s legitimate.”

If HP claimed DriveLock was unbreakable when the company knew it was not, that might be a kind of false advertising.

But while documents on HP’s web site do claim that without the correct passwords, a DriveLock’ed hard drive is “permanently unusable,” such warnings may not constitute actual legal guarantees.

According to Certilman and other computer security experts, hardware and software makers are careful not to make themselves liable for the performance of their products.

“I haven’t heard of manufacturers, at least for the consumer market, making a promise of computer security. Usually you buy naked hardware and you’re on your own,” Certilman said. In general, computer warranties are “limited only to replacement and repair of the component, and not to incidental consequential damages such as the exposure of the underlying data to snooping third parties,” he said. “So I would be quite surprised if there were a gaping hole in their warranty that would allow that kind of claim.”

That point meets with agreement from the noted computer security skeptic Bruce Schneier, the chief technology officer at Counterpane Internet Security in Mountain View, Calif.

“I mean, the computer industry promises nothing,” he said last week. “Did you ever read a shrink-wrapped license agreement? You should read one. It basically says, if this product deliberately kills your children, and we knew it would, and we decided not to tell you because it might harm sales, we’re not liable. I mean, it says stuff like that. They’re absurd documents. You have no rights.”

My final quote in the article:

“Unfortunately, this probably isn’t a great case,” Schneier said. “Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”

Posted on May 3, 2006 at 9:26 AM • 49 Comments

A Real Movie-Plot Threat

Arson Squad Blows Up News Rack, Tom Cruise Movie to Blame

A newspaper promotion for Tom Cruise’s “Mission: Impossible III” movie was off to an explosive start when a California arson squad blew up a news rack, thinking it contained a bomb.

The confusion: the Los Angeles Times rack was fitted with a digital musical device designed to play the Mission: Impossible theme song when the door was opened. But in some cases, the red plastic boxes with protruding wires were jarred loose and dropped onto the stack of newspapers inside, alarming customers.

You just can’t make this stuff up.

Posted on May 2, 2006 at 1:41 PM • 23 Comments

Microsoft's BitLocker

BitLocker Drive Encryption is a new security feature in Windows Vista, designed to work with the Trusted Platform Module (TPM). Basically, it encrypts the C drive with a computer-generated key. In its basic mode, an attacker can still access the data on the drive by guessing the user’s password, but would not be able to get at the drive by booting the disk up using another operating system, or removing the drive and attaching it to another computer.

There are several modes for BitLocker. In the simplest mode, the TPM stores the key and the whole thing happens completely invisibly. The user does nothing differently, and notices nothing different.

The BitLocker key can also be stored on a USB drive. Here, the user has to insert the USB drive into the computer during boot. Then there’s a mode that uses a key stored in the TPM and a key stored on a USB drive. And finally, there’s a mode that uses a key stored in the TPM and a four-digit PIN that the user types into the computer. This happens early in the boot process, when there’s still ASCII text on the screen.

Note that if you configure BitLocker with a USB key or a PIN, password guessing doesn’t work. BitLocker doesn’t even let you get to a password screen to try.

For most people, basic mode is the best. People will keep their USB key in their computer bag with their laptop, so it won’t add much security. But if you can force users to attach it to their keychains—remember that you only need the key to boot the computer, not to operate the computer—and convince them to go through the trouble of sticking it in their computer every time they boot, then you’ll get a higher level of security.

There is a recovery key: optional but strongly encouraged. It is automatically generated by BitLocker, and it can be sent to some administrator or printed out and stored in some secure location. There are ways for an administrator to set group policy settings mandating this key.

There aren’t any back doors for the police, though.

You can get BitLocker to work in systems without a TPM, but it’s kludgy. You can only configure it for a USB key. And it will only work on some hardware: because BitLocker starts running before any device drivers are loaded, the BIOS must recognize USB drives in order for BitLocker to work.

Encryption particulars: The default data encryption algorithm is AES-128-CBC with an additional diffuser. The diffuser is designed to protect against ciphertext-manipulation attacks, and is independently keyed from AES-CBC so that it cannot damage the security you get from AES-CBC. Administrators can select the disk encryption algorithm through group policy. Choices are 128-bit AES-CBC plus the diffuser, 256-bit AES-CBC plus the diffuser, 128-bit AES-CBC, and 256-bit AES-CBC. (My advice: stick with the default.) The key management system uses 256-bit keys wherever possible. The only place where a 128-bit key limit is hard-coded is the recovery key, which is 48 digits (including checksums). It’s shorter because it has to be typed in manually; typing in 96 digits will piss off a lot of people—even if it is only for data recovery.
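For readers who want to see what per-sector CBC disk encryption looks like in practice, here is a minimal sketch. It is not BitLocker's actual construction: the diffuser, key derivation, and TPM integration are all omitted, and the ESSIV-style per-sector IV below is an assumption made only to keep the example self-contained.

```python
# Sketch of per-sector AES-CBC disk encryption, in the spirit of BitLocker's
# AES-CBC modes. NOT BitLocker's real construction: the diffuser, key
# derivation, and TPM handling are omitted, and the IV scheme is an assumption.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # bytes; a multiple of the AES block size, so no padding needed

def sector_iv(iv_key: bytes, sector_number: int) -> bytes:
    """Derive a per-sector IV by encrypting the sector number (ESSIV-like; assumed)."""
    ecb = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
    return ecb.update(sector_number.to_bytes(16, "little")) + ecb.finalize()

def encrypt_sector(data_key: bytes, iv_key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    assert len(plaintext) == SECTOR_SIZE
    enc = Cipher(algorithms.AES(data_key), modes.CBC(sector_iv(iv_key, sector_number))).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(data_key: bytes, iv_key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(data_key), modes.CBC(sector_iv(iv_key, sector_number))).decryptor()
    return dec.update(ciphertext) + dec.finalize()

if __name__ == "__main__":
    data_key, iv_key = os.urandom(16), os.urandom(16)  # 128-bit keys, as in the default mode
    sector = os.urandom(SECTOR_SIZE)
    ct = encrypt_sector(data_key, iv_key, 42, sector)
    assert decrypt_sector(data_key, iv_key, 42, ct) == sector
```

Plain CBC like this is exactly why BitLocker adds a diffuser: without it, an attacker who can flip ciphertext bits can cause somewhat predictable changes in the decrypted plaintext.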

So, does this destroy dual-boot systems? Not really. If you have Vista running and then set up a dual-boot system, BitLocker will consider this sort of change to be an attack and refuse to run. But then you can use the recovery key to boot into Windows, then tell BitLocker to take the current configuration—with the dual-boot code—as correct. After that, your dual-boot system will work just fine, or so I’ve been told. You still won’t be able to share any files on your C drive between operating systems, but you will be able to share files on any other drive.

The problem is that it’s impossible to distinguish between a legitimate dual boot system and an attacker trying to use another OS—whether Linux or another instance of Vista—to get at the volume.

BitLocker is not a panacea. But it does mitigate a specific but significant risk: the risk of attackers getting at data on drives directly. It allows people to throw away or sell old drives without worry. It allows people to stop worrying about their drives getting lost or stolen. It stops a particular attack against data.

Right now BitLocker is only in the Ultimate and Enterprise editions of Vista. It’s a feature that is turned off by default. It is also Microsoft’s first TPM application. Presumably it will be enhanced in the future: allowing the encryption of other drives would be a good next step, for example.

EDITED TO ADD (5/3): BitLocker is not a DRM system. However, it is straightforward to turn it into a DRM system. Simply give programs the ability to require that files be stored only on BitLocker-enabled drives, and then only be transferrable to other BitLocker-enabled drives. How easy this would be to implement, and how hard it would be to subvert, depends on the details of the system.

Posted on May 2, 2006 at 6:54 AM • 98 Comments

Priority Cell Phones for First Responders

Verizon has announced that it has activated the Access Overload Control (ACCOLC) system, allowing some cell phones to have priority access to the network, even when the network is overloaded.

If you are a first responder with a Verizon phone, please visit the government’s WPS Requestor to provide the necessary information to have your handset activated.

Sounds like you’re going to have to enter some sort of code into your handset. I wonder how long before someone hacks that system.

Posted on May 1, 2006 at 1:29 PM • 52 Comments

Counterfeiting an Entire Company

We’ve talked about counterfeit money, counterfeit concert tickets, counterfeit police credentials, and counterfeit police departments. Here’s a story about a counterfeit company:

Evidence seized in raids on 18 factories and warehouses in China and Taiwan over the past year showed that the counterfeiters had set up what amounted to a parallel NEC brand with links to a network of more than 50 electronics factories in China, Hong Kong and Taiwan.

In the name of NEC, the pirates copied NEC products, and went as far as developing their own range of consumer electronic products – everything from home entertainment centers to MP3 players. They also coordinated manufacturing and distribution, collecting all the proceeds.

Posted on May 1, 2006 at 8:02 AM • 16 Comments
