Blog: September 2006 Archives

Friday Squid Blogging: Steganographic Squid

Seems that some squid can hide messages in their skin:

In the animal world, squid are masters of disguise. Pigmented skin cells enable them to camouflage themselves—almost instantaneously—from predators. Squid also produce polarized skin patterns by regulating the iridescence of their skin, possibly creating a “hidden communication channel” visible only to animals that are sensitive to polarized light.

[…]

Mäthger and Hanlon’s findings present the first anatomical evidence for a “hidden communication channel” that can remain masked by typical camouflage patterns. Their results suggest that it might be possible for squid to send concealed polarized signals to one another while staying camouflaged to fish or mammalian predators, most of which do not have polarization vision.

My favorite security stories are from the natural world. Evolution results in some of the most interesting security countermeasures.

Posted on September 29, 2006 at 2:59 PM

Faulty Data and the Arar Case

Maher Arar is a Syrian-born Canadian citizen. On September 26, 2002, he tried to fly from Switzerland to Toronto. Changing planes in New York, he was detained by the U.S. authorities, and eventually shipped to Syria where he was tortured. He’s 100% innocent. (Background here.)

The Canadian government has completed its “Commission of Inquiry into the Actions of Canadian Officials in Relation to Maher Arar,” the results of which are public. From their press release:

On Maher Arar, the Commissioner comes to one important conclusion: “I am able to say categorically that there is no evidence to indicate that Mr. Arar has committed any offence or that his activities constitute a threat to the security of Canada.”

Certainly something that everyone who supports the U.S.’s right to detain and torture people without having to demonstrate their guilt should think about. But what’s more interesting to readers of this blog is the role that inaccurate data played in the deportation and ultimately torture of an innocent man.

Privacy International summarizes the report. These are among their bullet points:

  • The RCMP provided the U.S. with an entire database of information relating to a terrorism investigation (three CDs of information), in a way that did not comply with RCMP policies that require screening for relevance, reliability, and personal information. In fact, this action was without precedent.
  • The RCMP provided the U.S. with inaccurate information about Arar that portrayed him in an unfairly negative fashion and overstated his importance to an RCMP investigation. They included some “erroneous notes.”
  • While he was detained in the U.S., the RCMP provided information regarding him to the U.S. Federal Bureau of Investigation (FBI), “some of which portrayed him in an inaccurate and unfair way.” The RCMP provided inaccurate information to the U.S. authorities that tended to link Arar to other terrorist suspects; told the U.S. authorities, incorrectly, that Arar had previously refused to be interviewed; and said that soon after refusing the interview he suddenly left Canada for Tunisia. “The statement about the refusal to be interviewed had the potential to arouse suspicion, especially among law enforcement officers, that Mr. Arar had something to hide.” The RCMP’s information to the U.S. authorities also placed Arar in the vicinity of Washington DC on September 11, 2001, when he was in fact in California.

Judicial oversight is a security mechanism. It prevents the police from incarcerating the wrong person. The point of habeas corpus is that the police need to present their evidence in front of a neutral third party, and not indefinitely detain or torture people just because they believe they’re guilty. We are all less secure if we water down these security measures.

Posted on September 29, 2006 at 7:06 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news for this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Torpark

Torpark is a free anonymous web browser. It sounds good:

A group of computer hackers and human rights workers have launched a specially-crafted version of Firefox that claims to give users complete anonymity when they surf the Web.

Dubbed “Torpark” and based on a portable version of Firefox 1.5.0.7, the browser will run from a USB drive, so it leaves no installation tracks on the PC. It protects the user’s privacy by encrypting all in- and outbound data, and also anonymizes the connection by passing all data through the TOR network, which masks the true IP address of the machine.

From the website:

Torpark is a program which allows you to surf the internet anonymously. Download Torpark and put it on a USB Flash keychain. Plug it into any internet terminal whether at home, school, work, or in public. Torpark will launch a Tor circuit connection, which creates an encrypted tunnel from your computer indirectly to a Tor exit computer, allowing you to surf the internet anonymously.

More details here.
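
Under the hood, any application can get the same effect by pointing its traffic at a local Tor client’s SOCKS proxy; Torpark simply bundles a Tor client together with a preconfigured Firefox. Here’s a minimal, hedged sketch of that mechanism in Python—it assumes a Tor client is listening on its default SOCKS port (9050) and that the optional requests and PySocks packages are installed:

```python
import requests  # assumes: pip install "requests[socks]"

# Route an HTTP request through the local Tor client's SOCKS5 proxy.
# The "socks5h" scheme makes DNS resolution happen inside Tor as well,
# so hostname lookups don't leak your real IP address.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org/", proxies=TOR_PROXY, timeout=30)
print("via Tor" if "Congratulations" in resp.text else "not via Tor")
```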

Posted on September 28, 2006 at 6:51 AM

Indexes to NSA Publications Declassified and Online

In May 2003, Michael Ravnitzky submitted a Freedom of Information Act (FOIA) request to the National Security Agency for a copy of the index to their historical reports at the Center for Cryptologic History and the index to certain journals: the NSA Technical Journal and the Cryptologic Quarterly. These journals had been mentioned in the literature but are not available to the public. Because he thought NSA might be reluctant to release the bibliographic indexes, he also asked for the table of contents to each issue.

The request took more than three years for them to process and declassify—sadly, not atypical—and during the process they asked if he would accept the indexes in lieu of the tables of contents pages: specifically, the cumulative indices that included all the previous material in the earlier indices. He agreed, and got them last month. The results are here.

This is just a sampling of some of the article titles from the NSA Technical Journal:

“The Arithmetic of a Generation Principle for an Electronic Key Generator” · “CATNIP: Computer Analysis – Target Networks Intercept Probability” · “Chatter Patterns: A Last Resort” · “COMINT Satellites – A Space Problem” · “Computers and Advanced Weapons Systems” · “Coupon Collecting and Cryptology” · “Cranks, Nuts, and Screwballs” · “A Cryptologic Fairy Tale” · “Don’t Be Too Smart” · “Earliest Applications of the Computer at NSA” · “Emergency Destruction of Documents” · “Extraterrestrial Intelligence” · “The Fallacy of the One-Time-Pad Excuse” · “GEE WHIZZER” · “The Gweeks Had a Gwoup for It” · “How to Visualize a Matrix” · “Key to the Extraterrestrial Messages” · “A Mechanical Treatment of Fibonacci Sequences” · “Q.E.D.- 2 Hours, 41 Minutes” · “SIGINT Implications of Military Oceanography” · “Some Problems and Techniques in Bookbreaking” · “Upgrading Selected US Codes and Ciphers with a Cover and Deception Capability” · “Weather: Its Role in Communications Intelligence” · “Worldwide Language Problems at NSA”

In the materials the NSA provided, they also included indices to two other publications: Cryptologic Spectrum and Cryptologic Almanac.

The indices to the Cryptologic Quarterly and the NSA Technical Journal are organized by title, author, and keyword. The index to Cryptologic Spectrum is organized by author, title, and issue.

Consider these bibliographic tools as stepping stones. If you want an article, send a FOIA request for it. Send a FOIA request for a dozen. There’s a lot of stuff here that would help elucidate the early history of the agency and some interesting cryptographic topics.

Thanks, Mike, for doing this work.

Posted on September 26, 2006 at 12:58 PM

The Hidden Benefits of Network Attack

An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:

This Note argues that computer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack—one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.

Posted on September 26, 2006 at 6:42 AM

Germans Spying on British Trash

You can’t make this stuff up:

Electronic spy ‘bugs’ have been secretly planted in hundreds of thousands of household wheelie bins.

The gadgets – mostly installed by companies based in Germany – transmit information about the contents of the bins to a central database which then keeps records on the waste disposal habits of each individual address.

Already some 500,000 bins in council districts across England have been fitted with the bugs – with nearly all areas expected to follow suit within the next couple of years.

Until now, the majority of bins have been altered without the knowledge of their owners. In many cases, councils which ordered the installation of the devices did not even debate the proposals publicly.

The official reason for the bugs is to ‘improve efficiency’ and settle disputes between neighbours over wheelie-bin ownership. But experts say the technology is actually intended to enable councils to impose fines on householders who exceed limits on the amount of non-recyclable waste they put out. New powers for councils to do this are expected to be introduced by the Government shortly.

Posted on September 25, 2006 at 1:35 PM

U.S. Visa Application Questions

People applying for a visa to enter the United States have to answer these questions (among others):

Have you ever been arrested or convicted for any offense or crime, even though subject of a pardon, amnesty or other similar legal action? Have you ever unlawfully distributed or sold a controlled substance (drug), or been a prostitute or procurer for prostitutes?

[…]

Did you seek to enter the United States to engage in export control violations, subversive or terrorist activities, or any other unlawful purpose? Are you a member or representative of a terrorist organization as currently designated by the U.S. Secretary of State? Have you ever participated in persecutions directed by the Nazi government or Germany; or have you ever participated in genocide?

Certainly, anyone who is a terrorist or drug dealer wouldn’t worry about lying on his visa application. So, what’s the point of these questions? I used to think it was so that if someone is convicted of one of these activities he can also be convicted of visa-application fraud…but I’m not sure that explanation makes any sense.

Anyone have any better ideas? What is the security benefit of asking these questions?

Posted on September 25, 2006 at 7:26 AM

Expensive Cameras in Checked Luggage

This is a blog post about the problems of being forced to check expensive camera equipment on airplanes:

Well, having lived in Kashmir for 12+ years I am well accustomed to this type of security. We haven’t been able to have hand carries since 1990. We also cannot have batteries in any of our equipment checked or otherwise. At least we have been able to carry our laptops on and recently been able to actually use them (with the batteries). But, if things keep moving in this direction, and I’m sure it will, we need to start thinking now about checking our cameras and computers and how to do it safely.
This is a very unpleasant idea. Two years ago I ordered a Canon 20D and had it “hand carried” over to meet me in England by a friend. My friend put it in their checked bag. The bag never showed up. She did not have insurance, and all I got was $100 from British Airways for the camera and $500 from American Express (buyer’s protection); that was it. So now it looks as if we are going to have to check our cameras and our computers involuntarily. OK, here are a few thoughts.

Pretty basic stuff, and we all know about the risks of putting expensive stuff in your checked luggage.

The interesting part is one of the blog comments, about halfway down. Another photographer wonders if the TSA rules for firearms could be extended to camera equipment:

Why not just have the TSA adopt the same check in rules for photographic and video equipment as they do for firearms?

All firearms must be in checked baggage, no carry on.

All firearms must be transported in a locked, hard sided case using a non-TSA approved lock. This is to prevent anyone from opening the case after it’s been screened.

After bringing the equipment to the airline counter and declaring and showing the contents to the airline representative, you take it over to the TSA screening area where it is checked by a screener, relocked in front of you, your key or keys returned to you (if it’s not a combination lock) and put directly on the conveyor belt for loading onto the plane.

No markings, stickers or labels identifying what’s inside are put on the outside of the case or, if packed inside something else, the bag.

Might this solve the problem? I’ve never lost a firearm when flying.

Then someone has the brilliant suggestion of putting a firearm in your camera-equipment case:

A “weapon” is defined as a rifle, shotgun, pistol, airgun, and STARTER PISTOL. Yes, starter pistols – those little guns that fire blanks at track and swim meets – are considered weapons…and do NOT have to be registered in any state in the United States.

I have a starter pistol for all my cases. All I have to do upon check-in is tell the airline ticket agent that I have a weapon to declare…I’m given a little card to sign, the card is put in the case, the case is given to a TSA official who takes my key and locks the case, and gives my key back to me.

That’s the procedure. The case is extra-tracked…TSA does not want to lose a weapons case. This reduces the chance of the case being lost to virtually zero.

It’s a great way to travel with camera gear…I’ve been doing this since Dec 2001 and have had no problems whatsoever.

I have to admit that I am impressed with this solution.

Posted on September 22, 2006 at 12:17 PM

Programming ATMs to Believe $20 Bills Are $5 Bills

Clever attack:

Last month, a man reprogrammed an automated teller machine at a gas station on Lynnhaven Parkway to spit out four times as much money as it should.

He then made off with an undisclosed amount of cash.

No one noticed until nine days later, when a customer told the clerk at a Crown gas station that the machine was disbursing more money than it should. Police are now investigating the incident as fraud.

Police spokeswoman Rene Ball said the first withdrawal occurred at 6:17 p.m. Aug. 19. Surveillance footage documented a man about 5-foot-8 with a thin build walking into the gas station on the 2400 block of Lynnhaven Parkway and swiping an ATM card.

The man then punched a series of numbers on the machine’s keypad, breaking the security code. The ATM was programmed to disburse $20 bills. The man reprogrammed the machine so it recorded each $20 bill as a $5 debit to his account.

The suspect returned to the gas station a short time later and took more money, but authorities did not say how much. Because the account was pre-paid and the card could be purchased at several places, police are not sure who is behind the theft.
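
To see where “four times as much money as it should” comes from, here’s the arithmetic as a trivial sketch (illustrative numbers only; the actual withdrawal amounts were never disclosed):

```python
# The machine holds $20 bills but has been reconfigured to treat them as $5 bills.
actual_bill_value = 20
configured_bill_value = 5

requested = 100                                        # the amount debited to the card
bills_dispensed = requested // configured_bill_value   # machine counts out "fives"
cash_received = bills_dispensed * actual_bill_value    # ...which are really twenties

print(f"${cash_received} dispensed for a ${requested} debit")  # $400 for $100: four times as much
```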

What’s weird is that it seems that this is easy. The ATM is a Tranax Mini Bank 1500. And you can buy the manuals from the Tranax website. And they’re useful for this sort of thing:

I am holding in my hands a legitimately obtained copy of the manual. There are a lot of security sensitive things inside of this manual. As promised, I am not going to reveal them, but there are:

  • Instructions on how to enter the diagnostic mode
  • Default passwords
  • Default Combinations For the Safe

Do not ask me for them. If you maintain one of these devices, make sure that you are not using the default password. If you are, change it immediately.

This is from an eWeek article:

“If you get your hand on this manual, you can basically reconfigure the ATM if the default password was not changed. My guess is that most of these mini-bank terminals are sitting around with default passwords untouched,” Goldsmith said.

Officials at Tranax did not respond to eWEEK requests for comment. According to a note on the company’s Web site, Tranax has shipped 70,000 ATMs, self-service terminals and transactional kiosks around the country. The majority of those shipments are of the flagship Mini-Bank 1500 machine that was rigged in the Virginia Beach heist.

So, as long as you can use an account that’s not traceable back to you, and you disguise yourself for the ATM cameras, this is a pretty easy crime.

eWeek claims you can get a copy of the manual simply by Googling for it. (Here’s one on eBay.)

And Tranax is promising a fix that will force operators to change the default passwords. But honestly, what’s the likelihood that someone who can’t be bothered to change the default password will take the time to install a software patch?

EDITED TO ADD (9/22): Here’s the manual.

Posted on September 22, 2006 at 7:04 AM

Screaming Cell Phones

Cell phone security:

Does it pay to scream if your cell phone is stolen? Synchronica, a mobile device management company, thinks so. If you use the company’s Mobile Manager service and your handset is stolen, the company, once contacted, will remotely lockdown your phone, erase all its data and trigger it to emit a blood-curdling scream to scare the bejesus out of the thief.

The general category of this sort of security countermeasure is “benefit denial.” It’s like those dye tags on expensive clothing; if you shoplift the clothing and try to remove the tag, dye spills all over the clothes and makes them unwearable. The effectiveness of this kind of thing relies on the thief knowing that the security measure is there, or is reasonably likely to be there. It’s an effective shoplifting deterrent; my guess is that it will be less effective against cell phone thieves.

Remotely erasing data on stolen cell phones is a good idea regardless, though. And since cell phones are far more often lost than stolen, how about the phone calmly announcing that it is lost and it would like to be returned to its owner?

Posted on September 21, 2006 at 12:12 PM

Facebook and Data Control

Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called “News Feeds” that shows an aggregation of everything members do on the site: added and deleted friends, a change in relationship status, a new favorite song, a new interest, etc. Instead of a member’s friends having to go to his page to view any changes, these changes are all presented to them automatically.

The outrage was enormous. One group, Students Against Facebook News Feeds, amassed over 700,000 members. Members planned to protest at the company’s headquarters. Facebook’s founder was completely stunned, and the company scrambled to add some privacy options.

Welcome to the complicated and confusing world of privacy in the information age. Facebook didn’t think there would be any problem; all it did was take available data and aggregate it in a novel way for what it perceived was its customers’ benefit. Facebook members instinctively understood that making this information easier to display was an enormous difference, and that privacy is more about control than about secrecy.

But on the other hand, Facebook members are just fooling themselves if they think they can control information they give to third parties.

Privacy used to be about secrecy. Someone defending himself in court against the charge of revealing someone else’s personal information could use as a defense the fact that it was not secret. But clearly, privacy is more complicated than that. Just because you tell your insurance company something doesn’t mean you don’t feel violated when that information is sold to a data broker. Just because you tell your friend a secret doesn’t mean you’re happy when he tells others. Same with your employer, your bank, or any company you do business with.

But as the Facebook example illustrates, privacy is much more complex. It’s about who you choose to disclose information to, how, and for what purpose. And the key word there is “choose.” People are willing to share all sorts of information, as long as they are in control.

When Facebook unilaterally changed the rules about how personal information was revealed, it reminded people that they weren’t in control. Its eight million members put their personal information on the site based on a set of rules about how that information would be used. It’s no wonder those members—high school and college kids who traditionally don’t care much about their own privacy—felt violated when Facebook changed the rules.

Unfortunately, Facebook can change the rules whenever it wants. Its Privacy Policy is 2,800 words long, and ends with a notice that it can change at any time. How many members ever read that policy, let alone read it regularly and check for changes? Not that a Privacy Policy is the same as a contract. Legally, Facebook owns all data members upload to the site. It can sell the data to advertisers, marketers, and data brokers. (Note: there is no evidence that Facebook does any of this.) It can allow the police to search its databases upon request. It can add new features that change who can access what personal data, and how.

But public perception is important. The lesson here for Facebook and other companies—for Google and MySpace and AOL and everyone else who hosts our e-mails and webpages and chat sessions—is that people believe they own their data. Even though the user agreement might technically give companies the right to sell the data, change the access rules to that data, or otherwise own that data, we—the users—believe otherwise. And when we who are affected by those actions start expressing our views—watch out.

What Facebook should have done was add the feature as an option, and allow members to opt in if they wanted to. Then, members who wanted to share their information via News Feeds could do so, and everyone else wouldn’t have felt that they had no say in the matter. This is definitely a gray area, and it’s hard to know beforehand which changes need to be implemented slowly and which won’t matter. Facebook, and other companies, need to talk to their members openly about new features. Remember: members want control.

The lesson for Facebook members might be even more jarring: if they think they have control over their data, they’re only deluding themselves. They can rebel against Facebook for changing the rules, but the rules have changed, regardless of what the company does.

Whenever you put data on a computer, you lose some control over it. And when you put it on the internet, you lose a lot of control over it. News Feeds brought Facebook members face to face with the full implications of putting their personal information on Facebook. It had just been an accident of the user interface that it was difficult to aggregate the data from multiple friends into a single place. And even if Facebook eliminates News Feeds entirely, a third party could easily write a program that does the same thing. Facebook could try to block the program, but would lose that technical battle in the end.

We’re all still wrestling with the privacy implications of the Internet, but the balance has tipped in favor of more openness. Digital data is just too easy to move, copy, aggregate, and display. Companies like Facebook need to respect the social rules of their sites, to think carefully about their default settings—they have an enormous impact on the privacy mores of the online world—and to give users as much control over their personal information as they can.

But we all need to remember that much of that control is illusory.

This essay originally appeared on Wired.com.

Posted on September 21, 2006 at 5:57 AM

Did Hezbollah Crack Israeli Secure Radio?

According to Newsday:

Hezbollah guerrillas were able to hack into Israeli radio communications during last month’s battles in south Lebanon, an intelligence breakthrough that helped them thwart Israeli tank assaults, according to Hezbollah and Lebanese officials.

Using technology most likely supplied by Iran, special Hezbollah teams monitored the constantly changing radio frequencies of Israeli troops on the ground. That gave guerrillas a picture of Israeli movements, casualty reports and supply routes. It also allowed Hezbollah anti-tank units to more effectively target advancing Israeli armor, according to the officials.

Read the article. Basically, the problem is operational error:

With frequency-hopping and encryption, most radio communications become very difficult to hack. But troops in the battlefield sometimes make mistakes in following secure radio procedures and can give an enemy a way to break into the frequency-hopping patterns. That might have happened during some battles between Israel and Hezbollah, according to the Lebanese official. Hezbollah teams likely also had sophisticated reconnaissance devices that could intercept radio signals even while they were frequency-hopping.
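
For readers unfamiliar with frequency hopping, here’s a toy sketch of the idea—an illustration only, not the actual military algorithm. Both radios derive the channel for each time slot from a shared secret; an interceptor without the secret has to search every channel in every slot, but any slip that reveals the secret or makes the pattern predictable lets him follow the whole conversation:

```python
import hmac, hashlib

def channel_for_slot(shared_key: bytes, slot: int, num_channels: int = 2320) -> int:
    """Toy hop-pattern generator: the channel used in each time slot is a
    keyed pseudorandom function of the slot number."""
    mac = hmac.new(shared_key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big") % num_channels

key = b"secret shared by both radios"
print([channel_for_slot(key, t) for t in range(5)])  # both ends compute the same hop sequence
```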

I agree with this comment from The Register:

Claims that Hezbollah fighters were able to use this capability to get some intelligence on troop movement and supply routes are plausible, at least to the layman, but ought to be treated with an appropriate degree of caution, as they are corroborated only by anonymous sources.

But I have even more skepticism. If indeed Hezbollah was able to do this, the last thing they want is for it to appear in the press. But if Hezbollah can’t do this, then a few good disinformation stories are a good thing.

Posted on September 20, 2006 at 2:35 PM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

On-Card Displays

This is impressive: a display that works on a flexible credit card.

One of the major security problems with smart cards is that they don’t have their own I/O. That is, you have to trust whatever card reader/writer you stick the card in to faithfully send what you type into the card, and display whatever the card spits back out. Way back in 1999, Adam Shostack and I wrote a paper about this general class of security problem.

Think WYSIWTCS: What You See Is What The Card Says. That’s what an on-card display does.

No, it doesn’t protect against tampering with the card. That’s part of a completely different set of threats.

Posted on September 19, 2006 at 2:18 PM

Organized Cybercrime

Cybercrime is getting organized:

Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.

Christopher Painter, deputy chief of the computer crimes and intellectual property section at the Department of Justice, said there had been a distinct shift in recent years in the type of cybercriminals that online detectives now encounter.

“There has been a change in the people who attack computer networks, away from the ‘bragging hacker’ toward those driven by monetary motives,” Painter told Reuters in an interview this week.

Although media reports often focus on stories about teenage hackers tracked down in their bedroom, the greater danger lies in the more anonymous virtual interlopers.

“There are still instances of these ‘lone-gunman’ hackers but more and more we are seeing organized criminal groups, groups that are often organized online targeting victims via the internet,” said Painter, in London for a cybercrime conference.

I’ve been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don’t think this article is fear and hype; it’s a real problem.

Posted on September 19, 2006 at 7:16 AM

More on the HP Board Spying Scandal

Two weeks ago I wrote about a spying scandal involving the HP board. There’s more:

A secret investigation of news leaks at Hewlett-Packard was more elaborate than previously reported, and almost from the start involved the illicit gathering of private phone records and direct surveillance of board members and journalists, according to people briefed on the company’s review of the operation.

Given this, I predict a real investigation into the incident:

Those briefed on the company’s review of the operation say detectives tried to plant software on at least one journalist’s computer that would enable messages to be traced, and also followed directors and possibly a journalist in an attempt to identify a leaker on the board.

I’m amazed there isn’t more outcry. Pretexting, planting Trojans…this is the sort of thing that would get a “hacker” immediately arrested. But if the chairman of the HP board does it, suddenly it’s a gray area.

EDITED TO ADD (9/20): More info.

Posted on September 18, 2006 at 2:48 PM

Pupillometer

Does this EyeCheck device sound like anything other than snake oil?

The device looks like binoculars, and in seconds it scans an individual’s pupils to detect a problem.

“They’ll be able to tell if they’re on drugs, and what kind, whether marijuana, cocaine, or alcohol. Or even in the case of a tractor trailer driver, is he too tired to drive his rig?” said Ohio County Sheriff Tom Burgoyne.

The device can also detect abnormalities from chemical and biological effects, as well as natural disasters.

Here’s the company. The device is called a pupillometer, and “uses patented technologies to deliver reliable pupil measurements in less than five minutes for the detection of drugs and fatigue.” And despite what the article implied, the device doesn’t do this at a distance.

I’m not impressed with the research, but this is not my area of expertise. Anyone?

Posted on September 18, 2006 at 1:39 PM

Renew Your Passport Now!

If you have a passport, now is the time to renew it—even if it’s not set to expire anytime soon. If you don’t have a passport and think you might need one, now is the time to get it. In many countries, including the United States, passports will soon be equipped with RFID chips. And you don’t want one of these chips in your passport.

RFID stands for “radio-frequency identification.” Passports with RFID chips store an electronic copy of the passport information: your name, a digitized picture, etc. And in the future, the chip might store fingerprints or digital visas from various countries.

By itself, this is no problem. But RFID chips don’t have to be plugged in to a reader to operate. Like the chips used for automatic toll collection on roads or automatic fare collection on subways, these chips operate via proximity. The risk to you is the possibility of surreptitious access: Your passport information might be read without your knowledge or consent by a government trying to track your movements, a criminal trying to steal your identity or someone just curious about your citizenship.

At first the State Department belittled those risks, but in response to criticism from experts it has implemented some security features. Passports will come with a shielded cover, making it much harder to read the chip when the passport is closed. And there are now access-control and encryption mechanisms, making it much harder for an unauthorized reader to collect, understand and alter the data.
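
The access control in question is ICAO’s Basic Access Control, which—roughly, and simplified here—derives the chip’s encryption and MAC keys from data printed in the passport itself: the document number, date of birth, and expiration date. A sketch of the key derivation (real implementations also compute check digits and adjust DES parity bits; the sample data below is the published ICAO test value, not a real passport):

```python
import hashlib

def bac_keys(mrz_info: bytes):
    """Simplified Basic Access Control key derivation: both 3DES keys come from
    a SHA-1 hash of machine-readable-zone data, so anyone who can read (or guess)
    that printed data can derive the keys and talk to the chip."""
    k_seed = hashlib.sha1(mrz_info).digest()[:16]
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac

# Document number, date of birth, and expiry date, each followed by its check digit.
k_enc, k_mac = bac_keys(b"L898902C<369080619406236")
```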

Although those measures help, they don’t go far enough. The shielding does no good when the passport is open. Travel abroad and you’ll notice how often you have to show your passport: at hotels, banks, Internet cafes. Anyone intent on harvesting passport data could set up a reader at one of those places. And although the State Department insists that the chip can be read only by a reader that is inches away, the chips have been read from many feet away.

The other security mechanisms are also vulnerable, and several security researchers have already discovered flaws. One found that he could identify individual chips via unique characteristics of the radio transmissions. Another successfully cloned a chip. The State Department called this a “meaningless stunt,” pointing out that the researcher could not read or change the data. But the researcher spent only two weeks trying; the security of your passport has to be strong enough to last 10 years.

This is perhaps the greatest risk. The security mechanisms on your passport chip have to last the lifetime of your passport. It is as ridiculous to think that passport security will remain secure for that long as it would be to think that you won’t see another security update for Microsoft Windows in that time. Improvements in antenna technology will certainly increase the distance at which they can be read and might even allow unauthorized readers to penetrate the shielding.

Whatever happens, if you have a passport with an RFID chip, you’re stuck. Although popping your passport in the microwave will disable the chip, the shielding will cause all kinds of sparking. And although the United States has said that a nonworking chip will not invalidate a passport, it is unclear if one with a deliberately damaged chip will be honored.

The Colorado passport office is already issuing RFID passports, and the State Department expects all U.S. passport offices to be doing so by the end of the year. Many other countries are in the process of changing over. So get a passport before it’s too late. With your new passport you can wait another 10 years for an RFID passport, when the technology will be more mature, when we will have a better understanding of the security risks and when there will be other technologies we can use to cut the risks. You don’t want to be a guinea pig on this one.

This op ed appeared on Saturday in the Washington Post.

I’ve written about RFID passports many times before (that last link is an op-ed from The International Herald-Tribune), although last year I—mistakenly—withdrew my objections based on the security measures the State Department was taking. I’ve since realized that they won’t be enough.

EDITED TO ADD (9/29): This op ed has appeared in about a dozen newspapers. The San Jose Mercury News published a rebuttal. Kind of lame, I think.

EDITED TO ADD (12/30): Here’s how to disable a RFID passport.

Posted on September 18, 2006 at 6:06 AM

New Diebold Vulnerability

Ed Felten and his team at Princeton have analyzed a Diebold machine:

This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities—a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine’s hardware and software and the adoption of more rigorous election procedures.

(Executive summary. Full paper. FAQ. Video demonstration.)

Salon said:

Diebold has repeatedly disputed such findings as speculation. But the Princeton study appears to demonstrate conclusively that a single malicious person could insert a virus into a machine and flip votes. The study also reveals a number of other vulnerabilities, including that voter access cards used on Diebold systems could be created inexpensively on a personal laptop computer, allowing people to vote as many times as they wish.

More news stories.

Posted on September 14, 2006 at 3:32 PM

What is a Hacker?

A hacker is someone who thinks outside the box. It’s someone who discards conventional wisdom, and does something else instead. It’s someone who looks at the edge and wonders what’s beyond. It’s someone who sees a set of rules and wonders what happens if you don’t follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.

I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I’m sticking to that definition.

This is what else I wrote in Secrets and Lies (pages 43-44):

Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn’t. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife’s teeth. A good hacker would have counted his wife’s teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)

When I was in college, I knew a group similar to hackers: the key freaks. They wanted access, and their goal was to have a key to every lock on campus. They would study lockpicking and learn new techniques, trade maps of the steam tunnels and where they led, and exchange copies of keys with each other. A locked door was a challenge, a personal affront to their ability. These people weren’t out to do damage—stealing stuff wasn’t their objective—although they certainly could have. Their hobby was the power to go anywhere they wanted to.

Remember the phone phreaks of yesteryear, the ones who could whistle into payphones and make free phone calls. Sure, they stole phone service. But it wasn’t like they needed to make eight-hour calls to Manila or McMurdo. And their real work was secret knowledge: The phone network was a vast maze of information. They wanted to know the system better than the designers, and they wanted the ability to modify it to their will. Understanding how the phone system worked—that was the true prize. Other early hackers were ham-radio hobbyists and model-train enthusiasts.

Richard Feynman was a hacker; read any of his books.

Computer hackers follow these evolutionary lines. Or, they are the same genus operating on a new system. Computers, and networks in particular, are the new landscape to be explored. Networks provide the ultimate maze of steam tunnels, where a new hacking technique becomes a key that can open computer after computer. And inside is knowledge, understanding. Access. How things work. Why things work. It’s all out there, waiting to be discovered.

Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.

And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system’s designers hadn’t thought of: that’s security hacking.

Hackers cheat. And breaking security regularly involves cheating. It’s figuring out a smart card’s RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It’s self-signing a piece of code, because the signature-verification system didn’t think someone might try that. It’s using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.

That’s security hacking: breaking a system by thinking differently.

It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that’s just the way we security people talk. Hacking isn’t criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.

I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn’t follow the normal rules of cryptanalysis. I don’t remember any of the details, but I remember my response after hearing the description of the attack.

“That’s cheating,” I said.

Because it was.

I also remember Brian turning to look at me. He didn’t say anything, but his look conveyed everything. “There’s no such thing as cheating in this business.”

Because there isn’t.

Hacking is cheating, and it’s how we get better at security. It’s only after someone invents a new attack that the rest of us can figure out how to defend against it.

For years I have refused to play the semantic “hacker” vs. “cracker” game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. “Hacker” is a mindset and a skill set; what you do with it is a different issue.

And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can’t walk into a store without figuring out how to shoplift. I look for someone who can’t test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.

We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system—an ATM, an online banking system, a gambling machine—and criminals will try to make an illegal profit off it. They’ll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they’ll figure it out first—and then we can defend ourselves.

It’s our only hope for security in this fast-moving technological world of ours.

This essay appeared in the Summer 2006 issue of 2600.

Posted on September 14, 2006 at 7:13 AM

Burglars Foil Alarm Systems

Clever trick:

Their scheme: Cut a closed store’s phone lines. Hang back while cops respond to the alarm. After officers fail to spot anything wrong and drive away, break into the store and spend as much time as they need to make off with a weekend’s worth of cash.

And one I wrote about in Beyond Fear (page 56):

Attackers commonly force active failures specifically to cause a larger system to fail. Burglars cut an alarm wire at a warehouse and then retreat a safe distance. The police arrive and find nothing, decide that it’s an active failure, and tell the warehouse owner to deal with it in the morning. Then, after the police leave, the burglars reappear and steal everything.

Posted on September 13, 2006 at 11:10 AM

Laptop Seizures in Sudan

According to CNN:

Sudanese security forces have begun seizing laptop computers entering the country to check on the information stored on them as part of new security measures.

One state security source said the laptops are searched and returned in one day and that the procedure was introduced because pornographic films and photographs were entering Sudan.

U.N. officials, aid agency workers, businessmen and journalists who regularly visit Sudan worry the security of sensitive and confidential information such as medical, legal and financial records on their computers could be at risk.

Authorities have cracked down on organizations like Medecins Sans Frontieres and the International Rescue Committee, which have published reports on huge numbers of rapes in the violent Darfur region.

(More commentary here.)

While the stated reason is pornography, anyone bringing a computer into the country should be concerned about personal information, writing that might be deemed political by the Sudanese authorities, confidential business information, and so on.

And this should be a concern regardless of the border you cross. Your privacy rights when trying to enter a country are minimal, and this kind of thing could happen anywhere. (I have heard anecdotal stories about Israel doing this, but don’t have confirmation.)

If you’re bringing a laptop across an international border, you should clean off all unnecessary files and encrypt the rest.
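
As a minimal illustration of the second half of that advice—a sketch only; full-disk encryption or a dedicated tool is the better real-world answer—here is per-file encryption with a key derived from a passphrase, using Python’s cryptography package:

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_file(path: str, passphrase: bytes) -> None:
    """Write path + '.enc' containing salt || ciphertext, then delete the original.
    (Simple deletion is not forensic-grade erasure; it just keeps the plaintext
    out of casual view at the border.)"""
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase))
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path + ".enc", "wb") as f:
        f.write(salt + Fernet(key).encrypt(plaintext))
    os.remove(path)
```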

EDITED TO ADD (9/15): This is legal in the U.S.

EDITED TO ADD (9/30): More about the legality of this in the U.S.

Posted on September 13, 2006 at 6:44 AM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red-herring arguments that led to Check Point not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge, necessary only because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Ultimate Secure Home

Wow:

For Sale By Owner – The Ultimate Secure Home:

Strategically located in the awesome San Juan mountains of Southwest Colorado, this patented steel-reinforced concrete earth home was built to withstand almost any natural or man-made disaster you can name. It is more secure, safe, and functional than any conventional house could ever be, yet still has a level of comfort that one might not expect to find in an underground home.

The list of features starts out reasonable, but the description of how it was built and why just keeps getting more surreal.

And, of course:

The exact location of the house will only be revealed to serious, pre-screened, and financially pre-qualified prospective buyers at an appropriate time. The owner believes that keeping the exact location secret to the general public is an important part of the home’s security.

What’s your vote? Real or hoax?

Posted on September 12, 2006 at 7:29 AM

Notes from the Hash Function Workshop

Last month, NIST hosted the Second Hash Workshop, primarily as a vehicle for discussing a replacement strategy for SHA-1. (I liveblogged NIST’s first Cryptographic Hash Workshop here, here, here, here, and here.)

As I’ve written about before, there are some impressive cryptanalytic results against SHA-1. These attacks are still not practical, and the hash function is still operationally secure, but it makes sense for NIST to start looking at replacement strategies—before these attacks get worse.

The conference covered a wide variety of topics (see the agenda for details) on hash function design, hash function attacks, hash function features, and so on.

Perhaps the most interesting part was a panel discussion called “SHA-256 Today and Maybe Something Else in a Few Years: Effects on Research and Design.” Moderated by Paul Hoffman (VPN Consortium) and Arjen Lenstra (Ecole Polytechnique Federale de Lausanne), the panel consisted of Niels Ferguson (Microsoft), Antoine Joux (Universite de Versailles-Saint-Quentin-en-Yvelines), Bart Preneel (Katholieke Universiteit Leuven), Ron Rivest (MIT), and Adi Shamir (Weizmann Institute of Science).

Paul Hoffman has posted a composite set of notes from the panel discussion. If you’re interested in the current state of hash function research, it’s well worth reading.

My opinion is that we need a new hash function, and that a NIST-sponsored contest is a great way to stimulate research in the area. I think we need one function and one function only, because users won’t know how to choose between different functions. (It would be smart to design the function with a couple of parameters that can be easily changed to increase security—increase the number of rounds, for example—but it shouldn’t be a variable that users have to decide whether or not to change.) And I think it needs to be secure in the broadest definitions we can come up with: hash functions are the workhorse of cryptographic protocols, and they’re used in all sorts of places for all sorts of reasons in all sorts of applications. We can’t limit the use of hash functions, so we can’t put one out there that’s only secure if used in a certain way.
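To make that last point concrete, here is a minimal Python sketch, entirely my own and not anything NIST or the panel proposed: the application sees exactly one hash interface, and the algorithm name and a strengthening knob live in a single place. Iterating the digest is only a stand-in for an internal rounds parameter, but it shows how security can be turned up centrally without asking every user to make a cryptographic decision.

    import hashlib

    # A minimal sketch, not a standard: one central hash interface for the
    # whole application. The algorithm and the iteration count below are
    # illustrative assumptions, set by whoever maintains this module.
    _ALGORITHM = "sha256"   # swapped here, once, if a replacement is standardized
    _ITERATIONS = 1         # raising this strengthens every caller at once;
                            # it merely stands in for an internal rounds parameter

    def application_hash(data: bytes) -> bytes:
        digest = data
        for _ in range(_ITERATIONS):
            digest = hashlib.new(_ALGORITHM, digest).digest()
        return digest

    if __name__ == "__main__":
        print(application_hash(b"example message").hex())

Callers never see, and never have to choose, the function underneath; that choice stays with the people who can make it competently.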

Posted on September 11, 2006 at 3:30 PM27 Comments

Media Sanitization and Encryption

Last week NIST released Special Publication 800-88, Guidelines for Media Sanitization.

There is a new paragraph in this document (page 7) that was not in the draft version:

Encryption is not a generally accepted means of sanitization. The increasing power of computers decreases the time needed to crack cipher text and therefore the inability to recover the encrypted data can not be assured.

I have to admit that this doesn’t make any sense to me. If the encryption is done properly, and if the key is properly chosen, then erasing the key—and all copies—is equivalent to erasing the files. And if you’re using full-disk encryption, then erasing the key is equivalent to sanitizing the drive. For that not to be true means that the encryption program isn’t secure.
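Here is a minimal sketch of that argument in Python, using the pyca cryptography package’s Fernet recipe purely as a stand-in for any properly implemented cipher with a properly chosen key; the point is only that once every copy of the key is gone, the bytes left on the media are worthless.

    from cryptography.fernet import Fernet

    # A minimal sketch, assuming the third-party "cryptography" package.
    # Fernet stands in for any well-implemented cipher with a random key.
    key = Fernet.generate_key()                      # the only secret
    ciphertext = Fernet(key).encrypt(b"sensitive records")

    # The ciphertext is what actually sits on the disk or tape.
    # Sanitizing the media then reduces to destroying every copy of the key:
    del key

    # With the key gone, the stored bytes are computationally indistinguishable
    # from random data; there is nothing left on the media worth recovering.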

I think NIST is just confused.

Posted on September 11, 2006 at 11:43 AM65 Comments

More Than 10 Ways to Avoid the Next 9/11

From yesterday’s New York Times, “Ten Ways to Avoid the Next 9/11”:

If we are fortunate, we will open our newspapers this morning knowing that there have been no major terrorist attacks on American soil in nearly five years. Did we just get lucky?

The Op-Ed page asked 10 people with experience in security and counterterrorism to answer the following question: What is one major reason the United States has not suffered a major attack since 2001, and what is the one thing you would recommend the nation do in order to avoid attacks in the future?

Actually, they asked more than 10, myself included. But some of us were cut because they didn’t have enough space. This was my essay:

Despite what you see in the movies and on television, it’s actually very difficult to execute a major terrorist act. It’s hard to organize, plan, and execute an attack, and it’s all too easy to slip up and get caught. Combine that with our intelligence work tracking terrorist cells and interdicting terrorist funding, and you have a climate where major attacks are rare. In many ways, the success of 9/11 was an anomaly; there were many points where it could have failed. The main reason we haven’t seen another 9/11 is that it isn’t as easy as it looks.

Much of our counterterrorist efforts are nothing more than security theater: ineffectual measures that look good. Forget the “war on terror”; the difficulty isn’t killing or arresting the terrorists, it’s finding them. Terrorism is a law enforcement problem, and needs to be treated as such. For example, none of our post-9/11 airline security measures would have stopped the London shampoo bombers. The lesson of London is that our best defense is intelligence and investigation. Rather than spending money on airline security, or sports stadium security—measures that require us to guess the plot correctly in order to be effective—we’re better off spending money on measures that are effective regardless of the plot.

Intelligence and investigation have kept us safe from terrorism in the past, and will continue to do so in the future. If the CIA and FBI had done a better job of coordinating and sharing data in 2001, 9/11 would have been another failed attempt. Coordination has gotten better, and those agencies are better funded—but it’s still not enough. Whenever you read about the billions being spent on national ID cards or massive data mining programs or new airport security measures, think about the number of intelligence agents that the same money could buy. That’s where we’re going to see the greatest return on our security investment.

Posted on September 11, 2006 at 6:36 AM59 Comments

Friday Squid Blogging: Squid Soap

It’s SquidSoap:

SquidSoap works by applying a small ink mark on a person’s hand when they press the pump to dispense the soap. The ink is designed to wash off after the hands are washed for about 15-20 seconds, which is the time recommended by most doctors.

Note the security angle:

Dirty hands are a leading cause of the spread of infection and food-borne illness. Whether it’s due to laziness or lack of education – our failure to wash our hands is costing the U.S. economy billions every year and causing thousands of unnecessary illnesses and deaths.

Never mind about terrorism. It’s dirty hands!

Posted on September 8, 2006 at 3:07 PM27 Comments

Digital Snooping for the Masses

Interesting article from The New York Times:

Flip open your husband’s cellphone and scroll down the log of calls received. Glance over your teenager’s shoulder at his screenful of instant messages. Type in a girlfriend’s password and rifle through her e-mail.

There was a time when unearthing someone’s private thoughts and deeds required sliding a hand beneath a mattress, fishing out a diary and hurriedly skimming its pages. The process was tactile, deliberate and fraught with anxiety: Will I be caught? Is this ethical? What will it do to my relationship with my child or partner?

But digital technology has made uncovering secrets such a painless, antiseptic process that the boundary delineating what is permissible in a relationship appears to be shifting.

In interviews and on blogs across the Web, people report that they snoop and spy on others (friends, family, colleagues) unencumbered by anxiety or guilt.

Posted on September 8, 2006 at 12:39 PM17 Comments

Land Title Fraud

There seems to be a small epidemic of land title fraud in Ontario, Canada.

What happens is someone impersonates the homeowner, and then sells the house out from under him. The former owner is still liable for the mortgage, but can’t get back into his former house. Cleaning up the problem takes a lot of time and energy.

The problem is one of economic incentives. If banks were held liable for fraudulent mortgages, then the problem would go away really quickly. But as long as they’re not, they have no incentive to ensure that this fraud doesn’t occur. (They have some incentive, because the fraud costs them money, but as long as the few fraud cases cost less than ensuring the validity of every mortgage, they’ll just ignore the problem and eat the losses when fraud occurs.)
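The arithmetic behind that shrug is simple. Here is a minimal sketch with invented numbers (none of them come from the Ontario cases) just to show the shape of the incentive.

    # A minimal sketch of the incentive calculation, with made-up numbers.
    mortgages_per_year = 1_000_000
    verification_cost = 50        # hypothetical cost to vet one title and identity
    fraud_rate = 0.00001          # hypothetical share of mortgages that are fraudulent
    loss_per_fraud = 300_000      # hypothetical write-off per fraudulent mortgage

    cost_to_check_everyone = mortgages_per_year * verification_cost
    expected_fraud_losses = mortgages_per_year * fraud_rate * loss_per_fraud

    # 50,000,000 versus 3,000,000: with these numbers, a bank that bears only
    # its own losses, and not the homeowner's, is better off eating the fraud.
    print(cost_to_check_everyone, expected_fraud_losses)

Shift more of the fraud’s cost onto the bank’s side of the ledger and the comparison starts to change, which is the point about liability.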

EDITED TO ADD (9/8): Another article.

Posted on September 8, 2006 at 6:43 AM47 Comments

Spying on the HP Board

Fascinating story.

Basically, the chairman of Hewlett-Packard, annoyed at leaks, hired investigators to track down the phone records (including home and cell) of the other HP board members. One board member resigned because of this. The leaker has refused to resign, although he has been outed.

Note that the article says that the investigators used “pretexting,” which is illegal.

The entire episode—beyond its impact on the boardroom of a $100 billion company, Dunn’s ability to continue as chairwoman and the possibility of civil lawsuits claiming privacy invasions and fraudulent misrepresentations—raises questions about corporate surveillance in a digital age. Audio and visual surveillance capabilities keep advancing, both in their ability to collect and analyze data. The Web helps distribute that data efficiently and effortlessly. But what happens when these advances outstrip the ability of companies (and, for that matter, governments) to reach consensus on ethical limits? How far will companies go to obtain information they seek for competitive gain or better management?

The HP case specifically also sheds another spotlight on the questionable tactics used by security consultants to obtain personal information. HP acknowledged in an internal e-mail sent from its outside counsel to Perkins that it got the paper trail it needed to link the director-leaker to CNET through a controversial practice called “pretexting”; NEWSWEEK obtained a copy of that e-mail. That practice, according to the Federal Trade Commission, involves using “false pretenses” to get another individual’s personal nonpublic information: telephone records, bank and credit-card account numbers, Social Security number and the like.

EDITED TO ADD (9/8): Good commentary.

EDITED TO ADD (9/12): HP Chairman Patricia Dunn was fired.

Posted on September 7, 2006 at 1:47 PM58 Comments

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software, PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM51 Comments

Bomb or Not?

Can you identify the bombs?

In related news, here’s a guy who makes it through security with a live vibrator in his pants.

There’s also a funny video on Dutch TV. A screener scans a passenger’s bag, putting aside several obvious bags of cocaine to warn him about a very tiny nail file.

Here’s where to buy stuff seized at Boston’s Logan Airport. I also read somewhere that some stuff ends up on eBay.

And finally, Quinn Norton said: “I think someone should try to blow up a plane with a piece of ID, just to watch the TSA’s mind implode.”

Posted on September 6, 2006 at 1:48 PM36 Comments

Securing Wireless Networks with Stickers

Does anyone think this California almost-law (it’s awaiting the governor’s signature) will do any good at all?

From 1 October 2007, manufacturers must place warning labels on all equipment capable of receiving Wi-Fi signals, according to the new state law. These can take the form of box stickers, special notification in setup software, notification during the router setup, or through automatic securing of the connection. One warning sticker must be positioned so that it must be removed by a consumer before the product can be used.

Posted on September 5, 2006 at 1:56 PM55 Comments

Recovering Data from Cell Phones

People sell, give away, and throw away their cell phones without even thinking about the data still on them:

A company, Trust Digital of McLean, Virginia, bought 10 different phones on eBay this summer to test phone-security tools it sells for businesses. The phones all were fairly sophisticated models capable of working with corporate e-mail systems.

Curious software experts at Trust Digital resurrected information on nearly all the used phones, including the racy exchanges between guarded lovers.

The other phones contained:

  • One company’s plans to win a multimillion-dollar federal transportation contract.
  • E-mails about another firm’s $50,000 payment for a software license.
  • Bank accounts and passwords.
  • Details of prescriptions and receipts for one worker’s utility payments.

The recovered information was equal to 27,000 pages—a stack of printouts 8 feet high.

“We found just a mountain of personal and corporate data,” said Nick Magliato, Trust Digital’s chief executive.

In many cases, this was data that the owners erased.

A popular practice among sellers, resetting the phone, often means sensitive information appears to have been erased. But it can be resurrected using specialized yet inexpensive software found on the Internet.
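A minimal sketch of the distinction, using an ordinary file as a stand-in for phone storage: a “reset” usually behaves like the plain delete below, dropping the index entry while leaving the underlying data in place, which is exactly what the recovery tools exploit.

    import os

    # A minimal sketch; an ordinary file stands in for phone flash here.

    def plain_delete(path: str) -> None:
        # Removes only the directory entry; the data blocks are untouched until
        # something happens to reuse them. A factory "reset" treats most of the
        # data on a phone roughly this way.
        os.remove(path)

    def overwriting_delete(path: str) -> None:
        # Overwrite the contents before unlinking. (On real flash with wear
        # leveling, even this may not reach every old copy of the data.)
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
        os.remove(path)

    if __name__ == "__main__":
        for name, wipe in (("kept-around.txt", plain_delete),
                           ("really-gone.txt", overwriting_delete)):
            with open(name, "wb") as f:
                f.write(b"call logs, passwords, e-mail")
            wipe(name)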

More and more, our data is not really under our control. We store it on devices and third-party websites, or on our own computer. We try to erase it, but we really can’t. We try to control its dissemination, but it’s harder and harder.

Posted on September 5, 2006 at 9:38 AM

Scorecard from the War on Terror

This is absolutely essential reading for anyone interested in how the U.S. is prosecuting terrorism. Put aside the rhetoric and the posturing; this is what is actually happening.

Among the key findings about the year-by-year enforcement trends in the period were the following:

  • In the twelve months immediately after 9/11, the prosecution of individuals the government classified as international terrorists surged sharply higher than in the previous year. But timely data show that five years later, in the latest available period, the total number of these prosecutions has returned to roughly what they were just before the attacks. Given the widely accepted belief that the threat of terrorism in all parts of the world is much larger today than it was six or seven years ago, the extent of the recent decline in prosecutions is unexpected. See Figure 1 and supporting table.
  • Federal prosecutors by law and custom are authorized to decline cases that are brought to them for prosecution by the investigative agencies. And over the years the prosecutors have used this power to weed out matters that for one reason or another they felt should be dropped. For international terrorism the declination rate has been high, especially in recent years. In fact, timely data show that in the first eight months of FY 2006 the assistant U.S. Attorneys rejected slightly more than nine out of ten of the referrals. Given the assumption that the investigation of international terrorism must be the single most important target area for the FBI and other agencies, the turn-down rate is hard to understand. See Figure 2 and supporting table.
  • The typical sentences recently imposed on individuals considered to be international terrorists are not impressive. For all those convicted as a result of cases initiated in the two years after 9/11, for example, the median sentence—half got more and half got less—was 28 days. For those referrals that came in more recently—through May 31, 2006—the median sentence was 20 days. For cases started in the two-year period before the 9/11 attack, the typical sentence was much longer, 41 months. See Figure 3.

Transactional Records Access Clearinghouse (TRAC) puts this data together by looking at Justice Department records. The data research organization is connected to Syracuse University, and has been doing this sort of thing—tracking what federal agencies actually do rather than what they say they do—for over fifteen years.

I am particularly entertained by the Justice Department’s rebuttal, which basically just calls the study names without offering any substantive criticism:

The Justice Department took issue with the study’s methodology and its conclusions.

The study “ignores the reality of how the war on terrorism is prosecuted in federal courts across the country and the value of early disruption of potential terrorist acts by proactive prosecution,” said Bryan Sierra, a Justice Department spokesman.

“The report presents misleading analysis of Department of Justice statistics to suggest the threat of terrorism may be inaccurate or exaggerated. The Department of Justice disagrees with this suggestion.”

How do I explain it? Most “terrorism” arrests are not for actual terrorism; they’re for other things. The cases are either thrown out for lack of evidence, or the penalties are more in line with the actual crimes. I don’t care what anyone from the Justice Department says: someone who is jailed for four weeks did not commit a terrorist act.

Posted on September 5, 2006 at 6:04 AM40 Comments

Friday Squid Blogging: Four Squid Cartoons

Wondermark.

Sherman’s Lagoon.

Non Sequitur.

Brevity.

If you know of any other squid cartoons, post the links as comments—or e-mail them to me—and I will add them here.

EDITED TO ADD (9/2): Guy & Rodd, Off the Mark, Her! [Girl v. Pig], a New Yorker cartoon, Doc Rat, Schlock Mercenary, and a German cartoon.

EDITED TO ADD (9/4): Demolition Squid, Sausage Squid from Beaver and Steve, Creative Disease, and a very funny one from The New Yorker caption contest. Also, nine cartoons from Dr. Fun; search for “squid.”

EDITED TO ADD (9/13): Penny Arcade.

Posted on September 1, 2006 at 4:06 PM26 Comments

Antiterrorism Expert Claims to Have Smuggled Bomb onto Airplane Twice

I don’t know how much of this to believe.

A man wearing a jacket and carrying a bag was able to sneak a bomb onto a flight from Manila to Davao City last month at the height of the nationwide security alert after Britain uncovered a plot to blow up transatlantic planes.

The man pulled off the same stunt on the return flight to Manila.

Had he detonated the bomb, he would have turned the commercial plane into a fireball and killed himself, the crew and hundreds of other passengers.

The man turned out to be a civilian antiterrorism expert tapped by a government official to test security measures at Philippine airports after British police foiled a plan to blow up US-bound planes in midair using liquid explosives.

In particular, if he actually built a working bomb in an airplane lavatory, he’s an idiot. Yes, C4 is stable, but playing with live electrical detonators near high-power radios is just stupid. On the other hand, bringing everything through security and onto the plane is perfectly plausible. Security is so focused on catching people with lipstick and shampoo that they’re ignoring actual threats.

EDITED TO ADD (9/3): More news.

EDITED TO ADD (9/8): The “expert” is Samson Macariola, and he has recanted.

Posted on September 1, 2006 at 12:41 PM34 Comments

New Anonymous Browser

According to Computerworld and InfoWorld, there’s a new Web browser specifically designed not to retain information.

Browzar automatically deletes Internet caches, histories, cookies and auto-complete forms. Auto-complete is the feature that anticipates the search term or Web address a user might enter by relying on information previously entered into the browser.

I know nothing else about this. If you want, download it here.

EDITED TO ADD (9/1): This browser seems to be both fake and full of adware.

Posted on September 1, 2006 at 8:23 AM54 Comments
