Blog: February 2009 Archives

Friday Squid Blogging: Researching Squid Bacteria

New research:

Intriguingly, that gene is the one that enables the bacteria to form a biofilm, the tightly woven matrix of “slime” which allows bacterial colonies to behave in many ways like a single organism. “The biofilm might be critical for adhering to the light organ, or telling the host that the correct symbiont has arrived,” says Mandel.

Biofilms also seem to be important in another kind of bacterial invasion of animals: disease. Some normally harmless lung bacteria can turn into a nasty infection in humans by forming a biofilm, for example, while many immune defences are aimed at preventing biofilms. And certain bacteria, like Vibrio fischeri, typically invade only certain species and tissues.

Posted on February 27, 2009 at 4:01 PM • 3 Comments

Privacy in the Age of Persistence

Note: This isn’t the first time I have written about this topic, and it surely won’t be the last. I think I did a particularly good job summarizing the issues this time, which is why I am reprinting it.

Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.

Data is the pollution of the information age. It’s a natural byproduct of every computer-mediated interaction. It stays around forever unless it’s disposed of. It can be valuable when reused, but reuse must be done carefully; otherwise, its aftereffects are toxic.

And just as 100 years ago we ignored pollution in our rush to build the Industrial Age, today we’re ignoring data in our rush to build the Information Age.

Increasingly, you leave a trail of digital footprints throughout your day. Once, you walked into a bookstore and bought a book with cash. Now you visit Amazon, and all of your browsing and purchases are recorded. You used to buy a train ticket with coins; now your electronic fare card is tied to your bank account. Your store affinity cards give you discounts; merchants use the data on them to reveal detailed purchasing patterns.

Data about you is collected when you make a phone call, send an e-mail message, use a credit card, or visit a website. A national ID card will only exacerbate this.

More computerized systems are watching you. Cameras are ubiquitous in some cities, and eventually face recognition technology will be able to identify individuals. Automatic license plate scanners track vehicles in parking lots and cities. Color printers, digital cameras, and some photocopy machines have embedded identification codes. Aerial surveillance is used by cities to find building permit violators and by marketers to learn about home and garden size.

As RFID chips become more common, they’ll be tracked, too. Already you can be followed by your cell phone, even if you never make a call. This is wholesale surveillance; not “follow that car,” but “follow every car.”

Computers are mediating conversation as well. Face-to-face conversations are ephemeral. Years ago, telephone companies might have known who you called and how long you talked, but not what you said. Today you chat in e-mail, by text message, and on social networking sites. You blog and you Twitter. These conversations—with family, friends, and colleagues—can be recorded and stored.

It used to be too expensive to save this data, but computer memory is now cheaper. Computer processing power is cheaper, too; more data is cross-indexed and correlated, and then used for secondary purposes. What was once ephemeral is now permanent.

Who collects and uses this data depends on local laws. In the US, corporations collect, then buy and sell, much of this information for marketing purposes. In Europe, governments collect more of it than corporations. On both continents, law enforcement wants access to as much of it as possible for both investigation and data mining.

Regardless of country, more organizations are collecting, storing, and sharing more of it.

More is coming. Keyboard logging programs and devices can already record everything you type; recording everything you say on your cell phone is only a few years away.

A “life recorder” you can clip to your lapel that’ll record everything you see and hear isn’t far behind. It’ll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn’t want to be tracked?

You’re living in a unique time in history: the technology is here, but it’s not yet seamless. Identification checks are common, but you still have to show your ID. Soon it’ll happen automatically, either by remotely querying a chip in your wallet or by recognizing your face on camera.

And all those cameras, now visible, will shrink to the point where you won’t even see them. Ephemeral conversation will all but disappear, and you’ll think it normal. Already your children live much more of their lives in public than you do. Your future has no privacy, not because of some police-state governmental tendencies or corporate malfeasance, but because computers naturally produce data.

Cardinal Richelieu famously said: “If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged.” When all your words and actions can be saved for later examination, different rules have to apply.

Society works precisely because conversation is ephemeral; because people forget, and because people don’t have to justify every word they utter.

Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official correspondence. A data pattern indicating “terrorist tendencies” is no substitute for a real investigation. Being constantly scrutinized undermines our social norms; furthermore, it’s creepy. Privacy isn’t just about having something to hide; it’s a basic right that has enormous value to democracy, liberty, and our humanity.

We’re not going to stop the march of technology, just as we cannot un-invent the automobile or the coal furnace. We spent the industrial age relying on fossil fuels that polluted our air and transformed our climate. Now we are working to address the consequences. (While still using said fossil fuels, of course.) This time around, maybe we can be a little more proactive.

Just as we look back at the beginning of the previous century and shake our heads at how people could ignore the pollution they caused, future generations will look back at us – living in the early decades of the information age – and judge our solutions to the proliferation of data.

We must, all of us together, start discussing this major societal change and what it means. And we must work out a way to create a future that our grandchildren will be proud of.

This essay originally appeared on the BBC.com website.

Posted on February 27, 2009 at 6:13 AM • 80 Comments

Defeating Caller ID Blocking

TrapCall is a new service that reveals the caller ID on anonymous or blocked calls:

TrapCall instructs new customers to reprogram their cellphones to send all rejected, missed and unanswered calls to TrapCall’s own toll-free number. If the user sees an incoming call with Caller ID blocked, he just presses the button on the phone that would normally send it to voicemail. The call invisibly loops through TelTech’s system, then back to the user’s phone, this time with the caller’s number displayed as the Caller ID.
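
The underlying trick, as best I can reconstruct it, is a billing artifact: Caller ID blocking hides your number from the person you call, but toll-free numbers receive the caller’s number anyway as ANI data, since the subscriber pays for the call. Here’s a toy model of that loop; the function and numbers are mine, not TelTech’s:

```python
# Toy reconstruction of the unmasking loop (not TelTech's actual system).
def place_call(caller, callee, block_cid=False, callee_is_tollfree=False):
    caller_id = None if block_cid else caller
    ani = caller if callee_is_tollfree else None  # billing data; *67 can't block it
    return {"to": callee, "caller_id": caller_id, "ani": ani}

# 1. A blocked call arrives; the user presses "ignore," and the phone's
#    forwarding rule sends it on to TrapCall's toll-free number.
hop = place_call("+15551234567", "trapcall-tollfree", block_cid=True,
                 callee_is_tollfree=True)
assert hop["caller_id"] is None        # Caller ID is suppressed...
assert hop["ani"] == "+15551234567"    # ...but ANI still identifies the caller

# 2. TrapCall calls the user back, presenting the ANI as the Caller ID.
unmasked = place_call(hop["ani"], "subscriber")
print(unmasked["caller_id"])           # +15551234567
```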

There’s more:

In addition to the free service, branded Fly Trap, a $10-per-month upgrade called Mouse Trap provides human-created transcripts of voicemail messages, and in some cases uses text messaging to send you the name of the caller—information not normally available to wireless customers. Mouse Trap will also send you text messages with the numbers of people who called while your phone was powered off, even if they don’t leave a message.

With the $25-a-month Bear Trap upgrade, you can also automatically record your incoming calls, and get text messages with the billing name and street address of some of your callers, which TelTech says is derived from commercial databases.

Posted on February 26, 2009 at 12:53 PM • 35 Comments

Electromagnetic Pulse Grenades

There are rumors of a prototype:

Even the highly advanced US forces hadn’t been generally thought to have developed a successful pulse-bomb yet, with most reports indicating that such a capability remains a few years off (as has been the case for decades). Furthermore, the pulse ordnance has usually been seen as large and heavy, in the same league as an aircraft bomb or cruise missile warhead—or in the case of an HPM raygun, of a weapons-pod or aircraft payload size.

Now, however, it appears that in fact the US military has already managed to get the coveted pulse-bomb tech down to grenade size. Colonel Buckhout apparently envisages the Army electronic warfare troopers of tomorrow lobbing a pulse grenade through the window of an enemy command post or similar, so knocking out all their comms.

Posted on February 26, 2009 at 6:48 AM • 57 Comments

The Doghouse: Singularics

This is priceless:

Our advances in Prime Number Theory have led to a new branch of mathematics called Neutronics. Neutronic functions make possible for the first time the ability to analyze regions of mathematics commonly thought to be undefined, such as the point where one is divided by zero. In short, we have developed a new way to analyze the undefined point at the singularity which appears throughout higher mathematics.

This new analytic technique has given us profound insight into the way that prime numbers are distributed throughout the integers. According to RSA’s website, there are over 1 billion licensed instances of RSA public-key encryption in use in the world today. Each of these instances of the prime number based RSA algorithm can now be deciphered using Neutronic analysis. Unlike RSA, Neutronic Encryption is not based on two large prime numbers but rather on the Neutronic forces that govern the distribution of the primes themselves. The encryption that results from Singularic’s Neutronic public-key algorithm is theoretically impossible to break.

You’d think that anyone who claims to be able to decrypt RSA at the key lengths in use today would, maybe, um, demonstrate that at least once. Otherwise, this can all be safely ignored as snake oil.
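
Such a demonstration would also be easy to stage, which makes its absence telling. Here’s a toy sketch of what it would look like, with a 12-bit modulus standing in for the 1024-bit-and-up keys actually in use; this is ordinary factoring, nothing to do with “Neutronics”:

```python
# What a credible "we can break RSA" demo looks like: given only the public
# key and a ciphertext, produce the plaintext. The modulus here is tiny, so
# brute-force factoring works; the claim in question is about real key sizes.
n, e = 3233, 17             # public key (n = 61 * 53, factors kept from us)
ciphertext = pow(42, e, n)  # someone encrypts the message 42

p = next(i for i in range(2, n) if n % i == 0)  # the "break": factor n
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ inverse)
assert pow(ciphertext, d, n) == 42  # plaintext recovered
```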

The founder and CTO also claims to have proved the Riemann Hypothesis, if you care to wade through the 63-page paper.

EDITED TO ADD (3/30): The CTO has responded to me.

Posted on February 25, 2009 at 2:00 PM • 108 Comments

Melissa Hathaway Interview

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation’s cybersecurity policies.

Who is she?

Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cyber Security Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI’s implementation.

Although, honestly, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:

In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation’s infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation’s broader needs, so we’ll need a new information-sharing environment. Private-sector risk models aren’t congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.

Increasingly, we see industry jointly addressing these vulnerabilities, such as with the Industry Consortium for Advancement of Security on the Internet to share common vulnerabilities and response mechanisms. In addition, there’s the Software Assurance Forum for Excellence in Code, an alliance of vendors who seek to improve software security. Industry is beginning to understand that [it has a] shared risk and shared responsibilities and sees the advantage of coordinating and collaborating up front during the development stage, so that we can start to address vulnerabilities from day one. We also need to look for niche partnerships to enhance product development and build trust into components. We need to understand when and how we introduce risk into the system and ask ourselves whether that risk is something we can live with.

The government is using its purchasing power to influence the market toward better security. We’re already seeing results with the Federal Desktop Core Configuration [FDCC] initiative, a mandated security configuration for federal computers set by the OMB. The Department of Commerce is working with several IT vendors on standardizing security settings for a wide variety of IT products and environments. Because a broad population of the government is using Windows XP and Vista, the FDCC initiative worked with Microsoft and others to determine security needs up front.

Posted on February 24, 2009 at 12:36 PM • 25 Comments

Another Conficker Variant

This is one well-designed piece of malware:

Conficker B++ is somewhat similar to Conficker B, with 294 of 297 subroutines the same and 39 additional subroutines. The latest variant, first spotted on 16 February, is even more sneaky than its previous incarnations, SRI explains.

Conficker B++ is no longer limited to reinfection by similarly structured Conficker DLLs, but can now push new self-contained Win32 applications. These executables can infiltrate the host using methods that are not detected by the latest anti-Conficker security applications.

[…]

The malware also creates an additional backdoor on compromised machines to create an altogether trickier infectious agent, SRI explains.

In Conficker A and B, there appeared only one method to submit Win32 binaries to the digital signature validation path, and ultimately to the CreateProcess API call. This path required the use of the Internet rendezvous point to download the binary through an HTTP transaction.

Under Conficker B++, two new paths to binary validation and execution have been introduced to Conficker drones, both of which bypass the use of Internet Rendezvous points: an extension to the netapi32.dll patch and the new named pipe backdoor. These changes suggest a desire by the Conficker’s authors to move away from a reliance on Internet rendezvous points to support binary update, and toward a more direct flash approach.

SRI reckons that Conficker-A has infected 4.7m machines, at one time or another, while Conficker-B has hit 6.7m IP addresses. These figures, as with previous estimates, come from an analysis of the number of machines that have ever tried to call into malware update sites. The actual number of infected hosts at any one time is lower than that. SRI estimates the botnet controlled by Conficker-A and Conficker-B is around 1m and 3m hosts, respectively, or a third of the raw estimate.

Posted on February 24, 2009 at 5:23 AM • 27 Comments

Is Megan's Law Worth It?

A study from New Jersey shows that Megan’s Law—a law designed to identify sex offenders to the communities they live in—is ineffective in reducing sex crimes or deterring recidivists.

The study, funded by the National Institute of Justice, examined the cases of 550 sex offenders who were divided into two groups—those released from prison before the passage of Megan’s Law and those released afterward.

The researchers found no statistically significant difference between the groups in whether the offenders committed new sex crimes.

Among those released before the passage of Megan’s Law, 10 percent were re-arrested on sex-crime charges. Among the other group, 7.6 percent were re-arrested for such crimes.

Similarly, the researchers found no significant difference in the number of victims of the two groups. Together, the offenders had 796 victims, ages 1 to 87. Most of the offenders had prior relationships with their new victims, and nearly half were family members. In just 16 percent of the cases, the offender was a stranger.

One complicating factor for the researchers is that sex crimes had started to decline even before the adoption of Megan’s Law, making it difficult to pinpoint cause and effect. In addition, sex offenses vary from county to county, rising and falling from year to year.

Even so, the researchers noted an “accelerated” decline in sex offenses in the years after the law’s passage.

“Although the initial decline cannot be attributed to Megan’s Law, the continued decline may, in fact, be related in some way to registration and notification activities,” the authors wrote. Elsewhere in the report, they noted that notification and increased surveillance of offenders “may have a general deterrent effect.”
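
The headline numbers are easy to sanity-check. Assuming, hypothetically, that the 550 offenders split evenly between the two groups (the study’s exact sizes and tests may differ), a standard two-proportion test shows why 10 percent versus 7.6 percent fails to reach significance at this sample size:

```python
# Back-of-the-envelope significance check; the even group split is assumed,
# not taken from the study.
from math import erf, sqrt

n1 = n2 = 275                 # hypothetical even split of 550 offenders
p1, p2 = 0.10, 0.076          # re-arrest rates, before/after Megan's Law
pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal test
print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ~ 1.0, p ~ 0.32: not significant
```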

Posted on February 23, 2009 at 12:28 PM • 67 Comments

NSA Wants Help Eavesdropping on Skype

At least, according to an anonymous “industry source”:

The spybiz exec, who preferred to remain anonymous, confirmed that Skype continues to be a major problem for government listening agencies, spooks and police. This was already thought to be the case, following requests from German authorities for special intercept/bugging powers to help them deal with Skype-loving malefactors. Britain’s GCHQ has also stated that it has severe problems intercepting VoIP and internet communication in general.

Skype in particular is a serious problem for spooks and cops. Being P2P, the network can’t be accessed by the company providing it and the authorities can’t gain access by that route. The company won’t disclose details of its encryption, either, and isn’t required to as it is Europe-based. This lack of openness prompts many security pros to rubbish Skype on “security through obscurity” grounds: but nonetheless it remains a popular choice with those who think they might find themselves under surveillance. Rumour suggests that America’s NSA may be able to break Skype encryption—assuming they have access to a given call or message—but nobody else.

The NSA may be able to do that: but it seems that if so, this uses up too much of the agency’s resources at present.

I’m sure this is a real problem. Here’s an article claiming that Italian criminals are using Skype more than the telephone because of eavesdropping concerns.

Posted on February 23, 2009 at 6:51 AM • 37 Comments

The "Broken Windows" Theory of Crimefighting

Evidence of its effectiveness:

Researchers, working with police, identified 34 crime hot spots. In half of them, authorities set to work—clearing trash from the sidewalks, fixing street lights, and sending loiterers scurrying. Abandoned buildings were secured, businesses forced to meet code, and more arrests made for misdemeanors. Mental health services and homeless aid referrals expanded.

In the remaining hot spots, normal policing and services continued.

Then researchers from Harvard and Suffolk University sat back and watched, meticulously recording criminal incidents in each of the hot spots.

The results, just now circulating in law enforcement circles, are striking: A 20 percent plunge in calls to police from the parts of town that received extra attention. It is seen as strong scientific evidence that the long-debated “broken windows” theory really works—that disorderly conditions breed bad behavior, and that fixing them can help prevent crime.

[…]

Many police departments across the country already use elements of the broken windows theory, or focus on crime hot spots. The Lowell experiment offers guidance on what seems to work best. Cleaning up the physical environment was very effective; misdemeanor arrests less so, and boosting social services had no apparent impact.

EDITED TO ADD (3/13): The paper.

Posted on February 20, 2009 at 12:03 PM • 61 Comments

Another Password Analysis

Here’s an analysis of 30,000 passwords from phpbb.com, similar to my analysis of 34,000 MySpace passwords:

The striking difference between the two incidents is that the phpbb passwords are simpler. MySpace requires that passwords “must be between 6 and 10 characters, and contain at least 1 number or punctuation character.” Most people satisfied this requirement by simply appending “1” to the ends of their passwords. The phpbb site has no such restrictions—the passwords are shorter and rarely contain anything more than a dictionary word.

Seems like we still can’t choose good passwords. Conficker.B exploits this, trying about 200 common passwords to help spread itself.
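
The mechanics of this kind of analysis are simple enough to reproduce. A minimal sketch, assuming a leaked password list and a dictionary file with one entry per line (both filenames are placeholders):

```python
# Measure password lengths, bare dictionary words, and the lazy word+"1"
# pattern in a leaked corpus. Input files are hypothetical placeholders.
from collections import Counter

with open("dictionary.txt") as f:
    words = {w.strip().lower() for w in f if w.strip()}

with open("leaked_passwords.txt") as f:
    passwords = [line.strip() for line in f if line.strip()]

lengths = Counter(len(p) for p in passwords)
bare_words = sum(p.lower() in words for p in passwords)
word_plus_1 = sum(p.endswith("1") and p[:-1].lower() in words for p in passwords)

print("length distribution:", sorted(lengths.items()))
print(f"bare dictionary words: {bare_words / len(passwords):.1%}")
print(f'dictionary word + "1": {word_plus_1 / len(passwords):.1%}')
```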

Posted on February 20, 2009 at 7:31 AM • 60 Comments

Balancing Security and Usability in Authentication

Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?

Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won’t be able to—or won’t bother to—use it.

Passwords are the most common authentication system, and a good place to start. They’re very easy to implement and use, which is why they’re so popular. But as computers have become faster, password guessing has become easier. Most people don’t choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.

To combat password guessing, many systems force users to choose harder-to-guess passwords—requiring minimum lengths, non-alphanumeric characters, etc.—and change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.

Users forgetting their passwords can be expensive—sysadmins or customer service reps have to field phone calls and reset passwords—so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother’s maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: answers can be easy to guess, and are often known by people close to you.

A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token—”something you have”—is combined with a password—”something you know.” But it’s less usable, because the tokens have to be purchased and distributed to all users, and far too often it’s “something you lost or forgot.” And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them—sometimes only as an option, because people don’t like them.

In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem—or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn’t survive one week.
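
The arithmetic behind that asymmetry is worth making explicit. A sketch with invented but plausible volumes:

```python
# Base-rate illustration: legitimate logins vastly outnumber attacks, so a
# small false-rejection rate annoys far more people than a much larger
# false-acceptance rate lets in. All volumes are hypothetical.
legit_logins_per_day = 1_000_000
fraud_attempts_per_day = 1_000

false_accepts = fraud_attempts_per_day * (1 / 100)   # bad guys let in
false_rejects = legit_logins_per_day * (1 / 1000)    # customers locked out

print(f"fraudulent logins admitted per day: {false_accepts:.0f}")    # 10
print(f"legitimate customers blocked per day: {false_rejects:.0f}")  # 1000
```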

Balancing security and usability is hard, and many organizations get it wrong. But it’s also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.

A version of this essay was originally published in The Guardian.

Posted on February 19, 2009 at 1:44 PM • 60 Comments

Terrorism Common Sense from MI6

Refreshing commentary from Nigel Inkster, former Assistant Chief and Director of Operations and Intelligence of MI6:

“Efforts to establish a global repository of counterterrorist information are unlikely ever to succeed. We need to be wary of rebuilding our world to deal with just one problem, one which might not be by any means the most serious we face.”

Asked what dangers were more serious than terrorism, Mr Inkster suggested that British government planners were more concerned regarding the possible results of global pandemics, or perhaps the worst-case outcomes of climate change.

“We need to keep terrorism in some kind of context,” he said. “For example, every year in the UK, more people die in road accidents than have been killed by terrorists in all of recorded history.”

The secret-service mandarin suggested that the Global War On Terror initiated by the Bush administration could never be won.

“We can’t kill or arrest our way out of this problem… we will never solve this issue and live in a terrorism-free world. It has to be managed.”

Inkster said that there was definitely a need for police and sometimes military action in fighting terrorism, but suggested that it was now widely acknowledged in the spook community that the Iraq invasion—and now the Israeli assault on Gaza—were definite factors in the radicalisation of British domestic terrorists.

“A move away from the rhetoric of GWOT will help,” he said, saying that the “more nuanced message” of the Obama administration was already showing results.

As for recommendations, Inkster said that it was important to promote good government and economic opportunity around the world.

“If I hear one more speaker suggest that the root of terrorism is poverty I’ll probably become a terrorist myself,” he joked. “But we have to acknowledge that it’s a factor.”

As for the West, he said: “We should keep our nerve and our faith in our own values. Our own behaviour—especially with respect to the rule of law—is very important.”

Posted on February 19, 2009 at 6:17 AM • 40 Comments

HIPAA Accountability in Stimulus Bill

On page 379 of the current stimulus bill, there’s a bit about establishing a website of companies that lost patient information:

(4) POSTING ON HHS PUBLIC WEBSITE—The Secretary shall make available to the public on the Internet website of the Department of Health and Human Services a list that identifies each covered entity involved in a breach described in subsection (a) in which the unsecured protected health information of more than 500 individuals is acquired or disclosed.

I’m not sure if this passage survived the final bill, but it will be interesting if it is now law.

EDITED TO ADD (3/13): It’s law.

Posted on February 18, 2009 at 12:28 PM • 25 Comments

Computer Virus Epidemiology

“WiFi networks and malware epidemiology,” by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani.

Abstract

In densely populated urban areas WiFi routers form a tightly interconnected proximity network that can be exploited as a substrate for the spreading of malware able to launch massive fraudulent attacks. In this article, we consider several scenarios for the deployment of malware that spreads over the wireless channel of major urban areas in the US. We develop an epidemiological model that takes into consideration prevalent security flaws on these routers. The spread of such a contagion is simulated on real-world data for georeferenced wireless routers. We uncover a major weakness of WiFi networks in that most of the simulated scenarios show tens of thousands of routers infected in as little as 2 weeks, with the majority of the infections occurring in the first 24–48 h. We indicate possible containment and prevention measures and provide computational estimates for the rate of encrypted routers that would stop the spreading of the epidemics by placing the system below the percolation threshold.
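
The flavor of the model is easy to sketch. Here’s a toy SI-style epidemic over a random geometric graph standing in for router proximity; the density, radio range, and encryption rate are invented parameters, not the paper’s:

```python
# Toy epidemic: routers within radio range are linked; only unencrypted
# routers are susceptible; infection spreads one hop per day from one seed.
import math
import random

random.seed(1)
N, RADIO_RANGE, FRAC_ENCRYPTED = 2000, 0.03, 0.3   # assumed parameters
pts = [(random.random(), random.random()) for _ in range(N)]
encrypted = [random.random() < FRAC_ENCRYPTED for _ in range(N)]

adj = [[] for _ in range(N)]                       # proximity graph
for i in range(N):
    for j in range(i + 1, N):
        if math.dist(pts[i], pts[j]) < RADIO_RANGE:
            adj[i].append(j)
            adj[j].append(i)

infected = {0}
for day in range(1, 15):
    for i in list(infected):
        infected.update(j for j in adj[i] if not encrypted[j])
    print(f"day {day:2d}: {len(infected)} routers infected")
# Raising FRAC_ENCRYPTED thins the susceptible graph; past the percolation
# threshold, the outbreak stays confined to a small cluster.
```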

Honestly, I’m not sure I understood most of the article. And I don’t think that their model is all that great. But I like to see these sorts of methods applied to malware and infection rates.

EDITED TO ADD (3/13): Earlier—but free—version of the paper.

Posted on February 18, 2009 at 5:53 AM • 9 Comments

Difficult-to-Pronounce Things Are Judged to Be More Risky

Do I have any readers left who think humans are rational about risks?

Abstract

Low processing fluency fosters the impression that a stimulus is unfamiliar, which in turn results in perceptions of higher risk, independent of whether the risk is desirable or undesirable. In Studies 1 and 2, ostensible food additives were rated as more harmful when their names were difficult to pronounce than when their names were easy to pronounce; mediation analyses indicated that this effect was mediated by the perceived novelty of the substance. In Study 3, amusement-park rides were rated as more likely to make one sick (an undesirable risk) and also as more exciting and adventurous (a desirable risk) when their names were difficult to pronounce than when their names were easy to pronounce.

Posted on February 17, 2009 at 1:56 PM • 47 Comments

Los Alamos Explains Their Security Problems

They’ve lost 80 computers: no idea whether they were stolen or just misplaced. Typical story—not even worth commenting on—but this great comment by Los Alamos explains a lot about what was wrong with their security policy:

The letter, addressed to Department of Energy security officials, contends that “cyber security issues were not engaged in a timely manner” because the computer losses were treated as a “property management issue.”

The real risk in computer losses is the data, not the hardware. I thought everyone knew that.

Posted on February 17, 2009 at 5:00 AM • 37 Comments

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security work, and where the weak points are. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud. (A minimal sketch of this two-person rule follows the list.)

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrence effect and increase the overall level of security in society. This is why audit is so vital.
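
To make technique 4 concrete, here’s a minimal sketch of a two-person rule (illustrative only, not any particular product):

```python
# An action runs only with approvals from two distinct people, so a single
# dishonest (or fooled) insider can't authorize it alone.
def authorize(action, approvals, required=2):
    approvers = set(approvals)          # duplicates collapse to one person
    if len(approvers) < required:
        raise PermissionError(f"{action}: needs {required} distinct approvers")
    print(f"{action}: approved by {sorted(approvers)}")

authorize("wire $1M", ["alice", "bob"])        # two keys: allowed
try:
    authorize("wire $1M", ["alice", "alice"])  # one person twice: refused
except PermissionError as err:
    print(err)
```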

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM • 53 Comments

Using Fear to Sell Pens, Part Two

This ad, for a Uni-ball pen that’s hard to erase, is kind of surreal. They’re using fear to sell pens—again—but it’s the wrong fear. They’re confusing check-washing fraud, where someone takes a check and changes the payee and maybe the amount, with identity theft. And how can someone steal money from me by erasing and changing information on a tax form? Are they going to cause my refund check to be sent to another address? This is getting awfully Byzantine.

Posted on February 16, 2009 at 7:28 AM • 29 Comments

Worldwide Browser Patch Rates

Interesting research:

Abstract:

Although there is an increasing trend for attacks against popular Web browsers, only little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP useragent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’ auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users at any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global scale measurement of the patch dynamics of a popular browser.
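
The measurement technique itself is straightforward to reproduce on any web server’s logs. A sketch of the idea; the log filename, regex, and “latest version” string are placeholders:

```python
# Count Firefox versions seen in HTTP User-Agent headers and compute the
# share running the (assumed) latest release.
import re
from collections import Counter

LATEST_FIREFOX = "2.0.0.11"              # placeholder "most secure" version
ua_pattern = re.compile(r"Firefox/([\d.]+)")

versions = Counter()
with open("access.log") as log:          # hypothetical anonymized log
    for line in log:
        match = ua_pattern.search(line)
        if match:
            versions[match.group(1)] += 1

total = sum(versions.values())
if total:
    print(f"share on latest Firefox: {versions[LATEST_FIREFOX] / total:.1%}")
```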

Posted on February 13, 2009 at 6:27 AM • 21 Comments

Cheating at Disneyworld

Interesting discussion of different ways to cheat and skip the lines at Disney theme parks. Most of the tricks involve their FastPass system for virtual queuing:

Moving toward the truly disingenuous, we’ve got the “FastPass Switcheroo.” To do this, simply get your FastPass like normal for Splash Mountain. You notice that the return time is two hours away, in the afternoon. Wait two hours, then return here and get another set of FP tickets, this time for later in the evening. But at this moment, your first set of FP tickets are active. Use them to get by the FP guard at the front, but when prompted to turn in your tickets at the front of the FP line, hand over the ones for this evening instead. 99.9% of the time, they do not look at these tickets whatsoever at this point in the line; they just add them to the pile in their hand and impatiently gesture you forward. All the examining of the tickets takes place at the start of the line, not the end. Voila, you’ve cheated the system. After this ride, you can get off and immediately ride again, since you’ve held on to the afternoon FPs and can use them in the normal fashion now.
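
The flaw generalizes: any credential that is examined at entry but merely collected, unexamined, at redemption can be defeated by swapping. A toy model of the gap, my reconstruction rather than Disney’s actual system:

```python
# The entrance guard validates the return window; the collector at the front
# only takes tickets. The missing re-check at redemption is the exploit.
from datetime import datetime

def entrance_check(ticket, now):
    return ticket["valid_from"] <= now <= ticket["valid_to"]  # examined here

def redeem(pile, ticket):
    pile.append(ticket)   # ...but never re-examined here: that's the flaw

afternoon = {"valid_from": datetime(2009, 2, 12, 14, 0),
             "valid_to": datetime(2009, 2, 12, 15, 0)}
evening = {"valid_from": datetime(2009, 2, 12, 19, 0),
           "valid_to": datetime(2009, 2, 12, 20, 0)}

now, pile = datetime(2009, 2, 12, 14, 30), []
assert entrance_check(afternoon, now)  # show the valid ticket at the gate
redeem(pile, evening)                  # hand over the evening ticket instead
# The afternoon ticket stays in hand for an immediate second ride. The fix:
# validate tickets where they're collected, not just where the line starts.
```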

Posted on February 12, 2009 at 1:24 PM • 45 Comments

Billboards That Watch You Back

Creepy:

Small cameras can now be embedded in the screen or hidden around it, tracking who looks at the screen and for how long. The makers of the tracking systems say the software can determine the viewer’s gender, approximate age range and, in some cases, ethnicity—and can change the ads accordingly.

That could mean razor ads for men, cosmetics ads for women and video-game ads for teens.

And even if the ads don’t shift based on which people are watching, the technology’s ability to determine the viewers’ demographics is golden for advertisers who want to know how effectively they’re reaching their target audience.

While the technology remains in limited use for now, advertising industry analysts say it is finally beginning to live up to its promise. The manufacturers say their systems can accurately determine gender 85 to 90 percent of the time, while accuracy for the other measures continues to be refined.

These are ads at eye level: on the streets, in malls, in train stations.

Posted on February 11, 2009 at 2:53 PM • 46 Comments

Cloning RFID Passports

It’s easy to clone RFID passports. (To make it clear, the attacker didn’t actually create fake passports; he just stole the data off the RFID chips.) Not that this hasn’t been done before.

I’ve long been opposed to RFID chips in passports, and have written op-eds about them in the International Herald Tribune and several other papers.

EDITED TO ADD (2/11): I got some details wrong. Chris Paget, the researcher, is cloning Western Hemisphere Travel Initiative (WHTI) compliant documents such as the passport card and Electronic Drivers License (EDL), and not the passport itself. Here is the link to Paget’s talk at ShmooCon.

Posted on February 11, 2009 at 5:09 AM • 62 Comments

Self-Propelled Semi-Submersibles

They’re used to smuggle drugs into the U.S.

Since the vessels have a low profile—the hulls only rise about a foot above the waterline—they are hard to see from a distance and produce a small radar signature. U.S. counterdrug officials estimate that SPSS are responsible for 32% of all cocaine movement in the transit zone.

But let’s not forget the terrorism angle:

“What worries me [about the SPSS] is if you can move that much cocaine, what else can you put in that semi-submersible. Can you put a weapon of mass destruction in it?” Navy Adm. Jim Stavridis, Commander, U.S. Southern Command

Posted on February 10, 2009 at 12:59 PM • 44 Comments

Man Arrested by Amtrak Police for Taking Photographs for Amtrak Photography Contest

You can’t make this stuff up. Even Stephen Colbert made fun of it.

This isn’t the first time Amtrak police have been idiots.

And in related news, in the U.K. it soon might be illegal to photograph the police.

EDITED TO ADD (2/10): The photographer’s page about the incident has been replaced with the words “No comment!” Anyone have a link to a copy? In the meantime, here’s an entry about the incident on a photo activist’s blog.

EDITED AGAIN: Thanks to Phil M. in comments for finding these Google Cache links from Duane Kerzic’s site:

Phil adds: “The main Amtrak page on his site has since been crawled, so Google now has the ‘no comment’ note cached.”

Posted on February 10, 2009 at 6:19 AM • 43 Comments

Monster.com Data Breach

Monster.com was hacked, and people’s personal data was stolen. Normally I wouldn’t bother even writing about this—it happens all the time—but an AP reporter called me yesterday to comment. I said:

Monster’s latest breach “shouldn’t have happened,” said Bruce Schneier, chief security technology officer for BT Group. “But you can’t understand a company’s network security by looking at public events—that’s a bad metric. All the public events tell you are, these are attacks that were successful enough to steal data, but were unsuccessful in covering their tracks.”

Thinking about it, it’s even more complex than that. To assess an organization’s network security, you need to actually analyze it. You can’t get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company’s attorneys couldn’t figure out a reason not to disclose to the public.

Posted on February 9, 2009 at 6:47 AM • 23 Comments

Hard Drive Encryption Specification

There’s a new hard drive encryption standard, which will make it easier for manufacturers to build encryption into drives.

Honestly, I don’t think this is really needed. I use PGP Disk, and I haven’t noticed any slowdown due to having encryption done in software. And I worry about yet another standard with its inevitable flaws and security vulnerabilities.

EDITED TO ADD (2/13): Perceptive comment about how the real benefit is regulatory compliance.

Posted on February 5, 2009 at 7:13 AM • 82 Comments

Racial Profiling No Better than Random Screening

Not that this is any news, but there’s some new research to back it up:

The study was performed by William Press, who does bioinformatics research at the University of Texas, Austin, with a joint appointment at Los Alamos National Labs. His background in statistics is apparent in his ability to handle various mathematical formulae with aplomb, but he’s apparently used to explaining his work to biologists, since the descriptions that surround those formulae make the general outlines of the paper fairly accessible.

Press starts by examining what could be viewed as an idealized situation, at least from the screening perspective: a single perpetrator living under an authoritarian government that has perfect records on its citizens. Applying a profile to those records should allow the government to rank those citizens in order of risk, and it can screen them one-by-one until it identifies the actual perpetrator. Those circumstances lead to a pretty rapid screening process, and they can be generalized out to a situation where there are multiple likely perpetrators.

Things go rapidly sour for this system, however, as soon as you have an imperfect profile. In that case, which is more likely to reflect reality, there’s a finite chance that the screening process misses a likely security risk. Since it works its way through the list of individuals iteratively, it never goes back to rescreen someone that’s made it through the first pass. The impact of this flaw grows rapidly as the ability to accurately match the profile to the data available on an individual gets worse. Since we’ve already said that making a profile is challenging, and we know that even authoritarian governments don’t have perfect information on their citizens, this system is probably worse than random screening in the real world.

In the real world, of course, most of us aren’t going through security checks run by authoritarian governments. In Press’ phrasing, democracies resample with replacement, in that they don’t keep records of who goes through careful security screening at places like airports, so people get placed back on the list to go through the screening process again. One consequence of this is that, since screening resources are never infinite, we can only resample a small subset of the total population at any given moment.

Press then examines the effect of what he terms a strong profiling strategy, one in which a limited set of screening resources is deployed solely based on the risk probabilities identified through profiling. It turns out that this also works poorly as the population size goes up. “The reason that this strong profiling strategy is inefficient,” Press writes, “is that, on average, it keeps retesting the same innocent individuals who happen to have large pj [risk profile match] values.”

According to Press, the solution is something that’s widely recognized by the statistics community: identify individuals for robust screening based on the square root of their risk value. That gives the profile some weight, but distributes the screening much more broadly through the population, and uses limited resources more effectively. It’s so widely used in mathematical circles that Press concludes his paper by writing, “It seems peculiar that the method is not better known.”
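
The result is compact enough to check numerically. In the simplest version of the model, individual j is the perpetrator with prior probability p_j, each screening samples one person with replacement using weights w_j, and screening the perpetrator always identifies him; the expected number of screenings then works out to the total weight times the prior-weighted average of 1/w_j, which is minimized when w_j is proportional to the square root of p_j. A back-of-the-envelope sketch (mine, not Press’s code):

```python
# Expected screenings to catch the perpetrator under three sampling rules.
# Note that "strong profiling" (w = p) is exactly as bad as uniform random
# screening, and square-root weighting beats both -- Press's result.
import random

random.seed(7)
N = 10_000
raw = [random.paretovariate(1.5) for _ in range(N)]  # assumed risk scores
total = sum(raw)
p = [r / total for r in raw]                         # prior over perpetrators

def expected_screenings(w):
    # Given perpetrator j, selection probability per round is w[j]/sum(w),
    # so the expected wait is sum(w)/w[j]; average that over the prior p.
    return sum(w) * sum(pj / wj for pj, wj in zip(p, w))

print("uniform random  :", round(expected_screenings([1.0] * N)))  # 10000
print("strong profiling:", round(expected_screenings(p)))          # 10000
print("square-root bias:", round(expected_screenings([x ** 0.5 for x in p])))
```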

Other articles on the research here, here, and here. Me on profiling.

Posted on February 4, 2009 at 12:50 PM • 34 Comments

Confessions Corrupt Eyewitnesses

People confess to crimes they don’t commit. They do it a lot. What’s interesting about this research is that confessions—whether false or true—corrupt other eyewitnesses:

Abstract

A confession is potent evidence, persuasive to judges and juries. Is it possible that a confession can also affect other evidence? The present study tested the hypothesis that a confession will alter eyewitnesses’ identification decisions. Two days after witnessing a staged theft and making an identification decision from a lineup that did not include the thief, participants were told that certain lineup members had confessed or denied guilt during a subsequent interrogation. Among those participants who had made a selection but were told that another lineup member confessed, 61% changed their identifications. Among those participants who had not made an identification, 50% went on to select the confessor when his identity was known. These findings challenge the presumption in law that different forms of evidence are independent and suggest an important overlooked mechanism by which innocent confessors are wrongfully convicted: Potentially exculpatory evidence is corrupted by a confession itself.

More:

When asked to explain their change, subjects revealed they were actually convinced by the confessor, and not simply complying with it, saying, “His face now looks more familiar than the one I chose before.”

Posted on February 4, 2009 at 6:35 AM • 17 Comments

Cost of the U.S. No-Fly List

Someone did the analysis:

As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million.

Posted on February 3, 2009 at 1:01 PM • 58 Comments

Making Cameras Go Click

There’s a bill in Congress—unlikely to go anywhere—to force digital cameras to go “click.” The idea is that this will make surreptitious photography harder:

The bill’s text says that Congress has found that “children and adolescents have been exploited by photographs taken in dressing rooms and public places with the use of a camera phone.”

This is so silly it defies comment.

EDITED TO ADD (2/13): Apparently this is already law in Japan.

Posted on February 3, 2009 at 6:08 AM • 82 Comments

Evaluating Risks of Low-Probability High-Cost Events

“Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes,” by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg.

Abstract:

Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
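
The core argument reduces to one line of probability, and it’s worth seeing with numbers. A worked toy example, all three inputs invented:

```python
# An argument yields P(catastrophe) = 1e-9, but the argument itself might be
# flawed; if it is, we fall back on a vaguer prior. The combined estimate is
# then dominated by the chance the argument fails, not by its output.
p_given_sound = 1e-9    # the calculation's answer, if the argument holds
p_flawed = 1e-3         # chance of a flaw in theory, model, or arithmetic
p_given_flawed = 1e-4   # assumed fallback prior once the argument is discarded

p_total = (1 - p_flawed) * p_given_sound + p_flawed * p_given_flawed
print(f"{p_total:.2e}")  # ~1.0e-07, about a hundred times the headline 1e-9
```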

Posted on February 2, 2009 at 1:26 PM • 24 Comments

Airlines Defining Anyone Disruptive as Terrorists

From the Los Angeles Times:

Freeman is one of at least 200 people on flights who have been convicted under the amended law. In most of the cases, there was no evidence that the passengers had attempted to hijack the airplane or physically attack any of the flight crew. Many have simply involved raised voices, foul language and drunken behavior.

Some security experts say the use of the law by airlines and their employees has run amok, criminalizing incidents that did not start out as a threat to public safety, much less an act of terrorism.

In one case, a couple was arrested after an argument with a flight attendant, who claimed the couple was engaged in “overt sexual activity”—an FBI affidavit said the two were “embracing, kissing and acting in a manner that made other passengers uncomfortable.”

EDITED TO ADD (2/2): Blog post showing that the article is a lot more hyperbole than fact. And commentary on the commentary.

Posted on February 2, 2009 at 6:47 AM • 44 Comments
