Blog: May 2012 Archives

Tax Return Identity Theft

I wrote about this sort of thing in 2006 in the UK, but it’s even bigger business here:

The criminals, some of them former drug dealers, outwit the Internal Revenue Service by filing a return before the legitimate taxpayer files. Then the criminals receive the refund, sometimes by check but more often through a convenient but hard-to-trace prepaid debit card.

The government-approved cards, intended to help people who have no bank accounts, are widely available in many places, including tax preparation companies. Some of them are mailed, and the swindlers often provide addresses for vacant houses, even buying mailboxes for them, and then collect the refunds there.

[…]

The fraud, which has spread around the country, is costing taxpayers hundreds of millions of dollars annually, federal and state officials say. The I.R.S. sometimes, in effect, pays two refunds instead of one: first to the criminal who gets a claim approved, and then a second to the legitimate taxpayer, who might have to wait as long as a year while the agency verifies the second claim.

J. Russell George, the Treasury inspector general for tax administration, testified before Congress this month that the I.R.S. detected 940,000 fake returns for 2010 in which identity thieves would have received $6.5 billion in refunds. But Mr. George said the agency missed an additional 1.5 million returns with possibly fraudulent refunds worth more than $5.2 billion.

The problem is that it doesn’t take much identity information to file a tax return with the IRS, and the agency automatically corrects your mistakes if you make them—and does the calculations for you if you don’t want to do them yourself. So it’s pretty easy to file a fake return for someone. And the IRS has no way to check if the taxpayer’s address is real, so it sends refunds out to whatever address or account you give them.

Posted on May 31, 2012 at 1:19 PM • 42 Comments

The Psychology of Immoral (and Illegal) Behavior

When I talk about Liars and Outliers to security audiences, one of the things I stress is that our traditional security focus—on technical countermeasures—is much narrower than it could be. Leveraging moral, reputational, and institutional pressures is likely to be much more effective in motivating cooperative behavior.

This story illustrates the point. It’s about the psychology of fraud, “why good people do bad things.”

There is, she says, a common misperception that at moments like this, when people face an ethical decision, they clearly understand the choice that they are making.

“We assume that they can see the ethics and are consciously choosing not to behave ethically,” Tenbrunsel says.

This, generally speaking, is the basis of our disapproval: They knew. They chose to do wrong.

But Tenbrunsel says that we are frequently blind to the ethics of a situation.

Over the past couple of decades, psychologists have documented many different ways that our minds fail to see what is directly in front of us. They’ve come up with a concept called “bounded ethicality”: That’s the notion that cognitively, our ability to behave ethically is seriously limited, because we don’t always see the ethical big picture.

One small example: the way a decision is framed. “The way that a decision is presented to me,” says Tenbrunsel, “very much changes the way in which I view that decision, and then eventually, the decision it is that I reach.”

Essentially, Tenbrunsel argues, certain cognitive frames make us blind to the fact that we are confronting an ethical problem at all.

Tenbrunsel told us about a recent experiment that illustrates the problem. She got together two groups of people and told one to think about a business decision. The other group was instructed to think about an ethical decision. Those asked to consider a business decision generated one mental checklist; those asked to think of an ethical decision generated a different mental checklist.

Tenbrunsel next had her subjects do an unrelated task to distract them. Then she presented them with an opportunity to cheat.

Those cognitively primed to think about business behaved radically differently from those who were not—no matter who they were, or what their moral upbringing had been.

“If you’re thinking about a business decision, you are significantly more likely to lie than if you were thinking from an ethical frame,” Tenbrunsel says.

According to Tenbrunsel, the business frame cognitively activates one set of goals—to be competent, to be successful; the ethics frame triggers other goals. And once you’re in, say, a business frame, you become really focused on meeting those goals, and other goals can completely fade from view.

Also:

Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don’t fully explain it. They’re interested in another possible explanation: Human beings commit fraud because human beings like each other.

We like to help each other, especially people we identify with. And when we are helping people, we really don’t see what we are doing as unethical.

The article even has some concrete security ideas:

Now if these psychologists and economists are right, if we are all capable of behaving profoundly unethically without realizing it, then our workplaces and regulations are poorly organized. They’re not designed to take into account the cognitively flawed human beings that we are. They don’t attempt to structure things around our weaknesses.

Some concrete proposals to do that are on the table. For example, we know that auditors develop relationships with clients after years of working together, and we know that those relationships can corrupt their audits without them even realizing it. So there is a proposal to force businesses to switch auditors every couple of years to address that problem.

Another suggestion: A sentence should be placed at the beginning of every business contract that explicitly says that lying on this contract is unethical and illegal, because that kind of statement would get people into the proper cognitive frame.

Along similar lines, some years ago Ross Anderson made the suggestion that the webpages of people’s online bank accounts should include their photographs, based on research showing that it’s harder to commit fraud against someone you identify with as a person.

Two excellent papers on this topic:

Abstract of the second paper:

Dishonesty plays a large role in the economy. Causes for (dis)honest behavior seem to be based partially on external rewards, and partially on internal rewards. Here, we investigate how such external and internal rewards work in concert to produce (dis)honesty. We propose and test a theory of self-concept maintenance that allows people to engage to some level in dishonest behavior, thereby benefiting from external benefits of dishonesty, while maintaining their positive view about themselves in terms of being honest individuals. The results show that (1) given the opportunity to engage in beneficial dishonesty, people will engage in such behaviors; (2) the amount of dishonesty is largely insensitive to either the expected external benefits or the costs associated with the deceptive acts; (3) people know about their actions but do not update their self-concepts; (4) causing people to become more aware of their internal standards for honesty decreases their tendency for deception; and (5) increasing the “degrees of freedom” that people have to interpret their actions increases their tendency for deception. We suggest that dishonesty governed by self-concept maintenance is likely to be prevalent in the economy, and understanding it has important implications for designing effective methods to curb dishonesty.

Posted on May 30, 2012 at 12:54 PM • 33 Comments

The Problem of False Alarms

The context is tornado warnings:

The basic problem, Smith says, is that sirens are sounded too often in most places. Sometimes they sound in an entire county for a warning that covers just a sliver of it; sometimes for other thunderstorm phenomena like large hail and/or strong straight-line winds; and sometimes for false alarm warnings—warnings for tornadoes that were incorrectly detected.

The residents of Joplin, Smith contends, were numbed by the too frequent blaring of sirens. As a result of too many past false alarms, he writes: “The citizens of Joplin were unwittingly being trained to NOT act when the sirens sounded.”

Posted on May 30, 2012 at 6:44 AM • 42 Comments

Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but researchers have found the exploit in the wild:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).

[…]

It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit turning it off. A manufacturer could easily flip a bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but they may get lucky and be able to make small tweaks to accomplish their goals.

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers’ assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left it still exposes the device to IP stealing. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled opening up the path to the backdoor access as before, but without the need for any passcode.

Posted on May 29, 2012 at 2:07 PM • 67 Comments

Interview with a Safecracker

The legal kind. It’s interesting:

Q: How realistic are movies that show people breaking into vaults?

A: Not very! In the movies it takes five minutes of razzle-dazzle; in real life it’s usually at least a couple of hours of precision work for an easy, lost combination lockout.

[…]

Q: Have you ever met a lock you couldn’t pick?

A: There are several types of locks that are designed to be extremely pick-resistant, as there are combination safe locks that can slow down my efforts at manipulation.

I’ve never met a safe or lock that kept me out for very long. Not saying I can’t be stumped. Unknown mechanical malfunctions inside a safe or vault are the most challenging things I have to contend with and I will probably see one of those tomorrow since you just jinxed me with that question.

Posted on May 29, 2012 at 6:03 AM • 25 Comments

Friday Squid Blogging: Squid Ink from the Jurassic

Seems that squid ink hasn’t changed much in 160 million years. From this, researchers argue that the security mechanism of spraying ink into the water and escaping is also that old.

Simon and his colleagues used a combination of direct, high-resolution chemical techniques to determine that the melanin had been preserved. The researchers also compared the chemical composition of the ancient squid ink remains to that of modern squid ink from Sepia officinalis, a squid common to the Mediterranean, North and Baltic seas.

“It’s close enough that I would argue that the pigmentation in this class of animals has not evolved in 160 million years,” Simon said. “The whole machinery apparently has been locked in time and passed down through succeeding generations of squid. It’s a very optimized system for this animal and has been optimized for a long time.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on May 25, 2012 at 4:01 PM • 47 Comments

The Explosive from the Latest Foiled Al Qaeda Underwear Bomb Plot

Interesting:

Although the plot was disrupted before a particular airline was targeted and tickets were purchased, al Qaeda’s continued attempts to attack the U.S. speak to the organization’s persistence and willingness to refine specific approaches to killing. Unlike Abdulmutallab’s bomb, the new device contained lead azide, an explosive often used as a detonator. If the new underwear bomb had been used, the bomber would have ignited the lead azide, which would have triggered a more powerful explosive, possibly military-grade explosive pentaerythritol tetranitrate (PETN).

Lead azide and PETN were key components in a 2010 plan to detonate two bombs sent from Yemen and bound for Chicago—one in a cargo aircraft and the other in the cargo hold of a passenger aircraft. In that plot, al-Qaeda hid bombs in printer cartridges, allowing them to slip past cargo handlers and airport screeners. Both bombs contained far more explosive material than the 80 grams of PETN that Abdulmutallab smuggled onto his Northwest Airlines flight.

With the latest device, al Asiri appears to have been able to improve on the underwear bomb supplied to Abdulmutallab, says Joan Neuhaus Schaan, a fellow in homeland security and terrorism for Rice University’s James A. Baker III Institute for Public Policy.

The interview is also interesting, and I am especially pleased to see this last answer:

What has been the most effective means of disrupting terrorism attacks?
As with bombs that were being sent from Yemen to Chicago as cargo, this latest plot was discovered using human intelligence rather than screening procedures and technologies. These plans were disrupted because of proactive mechanisms put in place to stop terrorism rather than defensive approaches such as screening.

Posted on May 25, 2012 at 6:43 AM • 20 Comments

The Ubiquity of Cyber-Fears

A new study concludes that more people are worried about cyber threats than terrorism.

…the three highest priorities for Americans when it comes to security issues in the presidential campaign are:

  1. Protecting government computer systems against hackers and criminals (74 percent)
  2. Protecting our electric power grid, water utilities and transportation systems against computer or terrorist attacks (73 percent)
  3. Homeland security issues such as terrorism (68 percent)

Posted on May 24, 2012 at 11:31 AM • 16 Comments

The Banality of Surveillance Photos

Interesting essay on a trove of surveillance photos from Cold War-era Prague.

Cops, even secret cops, are for the most part ordinary people. Working stiffs concerned with holding down jobs and earning a living. Even those who thought it was important to find enemies recognized the absurdity of their task.

I take photos all the time and these empty blurry frames tell me that they were made intentionally. Shot out of boredom, as little acts of defiance, the secret police wandered the streets of Prague for twenty years taking lousy pictures of people from far away because a job is a job.

Occasionally something interesting happened, like spotting a hot stylish, American made Ford Mustang Sally. However, it must have been an awful job, with dull days that turned into months and years, of killing time between lunch and dinner.

Posted on May 24, 2012 at 6:17 AM • 21 Comments

Lessons in Trust from Web Hoaxes

Interesting discussion of trust in this article on web hoaxes.

Kelly’s students, like all good con artists, built their stories out of small, compelling details to give them a veneer of veracity. Ultimately, though, they aimed to succeed less by assembling convincing stories than by exploiting the trust of their marks, inducing them to lower their guard. Most of us assess arguments, at least initially, by assessing those who make them. Kelly’s students built blogs with strong first-person voices, and hit back hard at skeptics. Those inclined to doubt the stories were forced to doubt their authors. They inserted articles into Wikipedia, trading on the credibility of that site. And they aimed at very specific communities: the “beer lovers of Baltimore” and Reddit.

That was where things went awry. If the beer lovers of Baltimore form a cohesive community, the class failed to reach it. And although most communities treat their members with gentle regard, Reddit prides itself on winnowing the wheat from the chaff. It relies on the collective judgment of its members, who click on arrows next to contributions, elevating insightful or interesting content, and demoting less worthy contributions. Even Mills says he was impressed by the way in which redditors “marshaled their collective bits of expert knowledge to arrive at a conclusion that was largely correct.” It’s tough to con Reddit.

[…]

If there’s a simple lesson in all of this, it’s that hoaxes tend to thrive in communities which exhibit high levels of trust. But on the Internet, where identities are malleable and uncertain, we all might be well advised to err on the side of skepticism.

Posted on May 23, 2012 at 12:32 PM • 13 Comments

Privacy Concerns Around "Social Reading"

Interesting paper: “The Perils of Social Reading,” by Neil M. Richards, from the Georgetown Law Journal.

Abstract: Our law currently treats records of our reading habits under two contradictory rules—rules mandating confidentiality, and rules permitting disclosure. Recently, the rise of the social Internet has created more of these records and more pressures on when and how they should be shared. Companies like Facebook, in collaboration with many newspapers, have ushered in the era of “social reading,” in which what we read may be “frictionlessly shared” with our friends and acquaintances. Disclosure and sharing are on the rise.

This Article sounds a cautionary note about social reading and frictionless sharing. Social reading can be good, but the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition. I argue that the choices we make about how to share have real consequences, and that “frictionless sharing” is not frictionless, nor is it really sharing. Although sharing is important, the sharing of our reading habits is special. Such sharing should be conscious and only occur after meaningful notice.

The stakes in this debate are immense. We are quite literally rewiring the public and private spheres for a new century. Choices we make now about the boundaries between our individual and social selves, between consumers and companies, between citizens and the state, will have unforeseeable ramifications for the societies our children and grandchildren inherit. We should make choices that preserve our intellectual privacy, not destroy it. This Article suggests practical ways to do just that.

Posted on May 23, 2012 at 7:25 AM • 17 Comments

Racism as a Vestigial Remnant of a Security Mechanism

“Roots of Racism,” by Elizabeth Culotta in Science:

Our attitudes toward outgroups are part of a threat-detection system that allows us to rapidly determine friend from foe, says psychologist Steven Neuberg of ASU Tempe. The problem, he says, is that like smoke detectors, the system is designed to give many false alarms rather than miss a true threat. So outgroup faces alarm us even when there is no danger.

Lots of interesting stuff in the article. Unfortunately, it requires registration to access.

Posted on May 22, 2012 at 1:10 PM • 52 Comments

Security Incentives and Advertising Fraud

Details are in the article, but here’s the general idea:

Let’s follow the flow of the users:

  1. Scammer buys user traffic from PornoXo.com and sends it to HQTubeVideos.
  2. HQTubeVideos loads, in invisible iframes, some parked domains with innocent-sounding names (relaxhealth.com, etc).
  3. In the parked domains, ad networks serve display and PPC ads.
  4. The click-fraud sites click on the ads that appear within the parked domains.
  5. The legitimate publishers get invisible/fraudulent traffic through the (fraudulently) clicked ads from parked domains.
  6. Brand advertisers place their ad on the websites of the legitimate publishers, which in reality appear within the (invisible) iframe of HQTubeVideos.
  7. AdSafe detects the attempted placement within the porn website, and prevents the ads of the brand publisher from appearing in the legitimate website, which is hosted within the invisible frame of the porn site.

Notice how nicely orchestrated is the whole scheme: The parked domains “launder” the porn traffic. The ad networks place the ads in some legitimately-sounding parked domains, not in a porn site. The publishers get traffic from innocent domains such as RelaxHealth, not from porn sites. The porn site loads a variety of publishers, distributing the fraud across many publishers and many advertisers.

The most clever part of this is that it makes use of the natural externalities of the Internet.

And now let’s see who has the incentives to fight this. It is fraud, right? But I think it is a well-executed type of fraud. It targets and defrauds the player that has the least incentives to fight the scam.

Who is affected? Let’s follow the money:

  • The big brand advertisers (Continental, Coca Cola, Verizon, Vonage,…) pay the publishers and the ad networks for running their campaigns.
  • The publishers pay the ad network and the scammer for the fraudulent clicks.
  • The scammer pays PornoXo and TrafficHolder for the traffic.

The ad networks see clicks on their ads, they get paid, so not much to worry about. They would worry if their advertisers were not happy. But here we have a piece of genius:

The scammer did not target sites that would measure conversions or cost-per-acquisition. Instead, the scammer was targeting mainly sites that sell pay-per-impression ads and video ads. If the publishers display CPM ads paid by impression, any traffic is good, all impressions count. It is not an accident that the scammer targets publishers with video content, and plenty of pay-per-impression video ads. The publishers have no reason to worry if they get traffic and the cost-per-visit is low.

Effectively, the only one hurt in this chain are the big brand advertisers, who feed the rest of the advertising chain.

Do the big brands care about this type of fraud? Yes and no, but not really deeply. Yes, they pay for some “invisible impressions”. But this is a marketing campaign. In any case, not all marketing attempts are successful. Do all readers of Economist look at the printed ads? Hardly. Do all web users pay attention to the banner ads? I do not think so. Invisible ads are just one of the things that make advertising a little bit more expensive and harder. Consider it part of the cost of doing business. In any case, compared to the overall marketing budget of these behemoths, the cost of such fraud is peanuts.

The big brands do not want their brand to be hurt. If the ads do not appear in places inappropriate for the brand, things are fine. Fighting the fraud publicly? This will just associate the brand with fraud. No marketing department wants that.

Posted on May 22, 2012 at 6:24 AM • 24 Comments

Kip Hawley Reviews Liars and Outliers

In his blog:

I think the most important security issues going forward center around identity and trust. Before knowing I would soon encounter Bruce again in the media, I bought and read his new book Liars & Outliers and it is a must-read book for people looking forward into our security future and thinking about where this all leads. For my colleagues inside the government working the various identity management, security clearance, and risk-based security issues, L&O should be required reading.

[…]

L&O is fresh thinking about live fire issues of today as well as moral issues that are ahead. Whatever your policy bent, this book will help you. Trust me on this, you don’t have to buy everything Bruce says about TSA to read this book, take it to work, put it down on the table and say, “this is brilliant stuff.”

I’m hosting Kip Hawley on FireDogLake’s Book Salon on Sunday at 5:00 – 7:00 PM EDT. Join me and we’ll ask him some tough questions about his new book.

Posted on May 18, 2012 at 6:06 AM • 17 Comments

Rules for Radicals

It was written in 1971, but this still seems like a cool book:

For an elementary illustration of tactics, take parts of your face as the point of reference; your eyes, your ears, and your nose. First the eyes: if you have organized a vast, mass-based people’s organization, you can parade it visibly before the enemy and openly show your power. Second the ears; if your organization is small in numbers, then do what Gideon did: conceal the members in the dark but raise a din and clamor that will make the listener believe that your organization numbers many more than it does. Third, the nose; if your organization is too tiny even for noise, stink up the place.

Always remember the first rule of power tactics: Power is not only what you have but what the enemy thinks you have.

The second rule is: Never go outside the experience of your people. When an action or tactic is outside the experience of the people, the result is confusion, fear, and retreat. It also means a collapse of communication, as we have noted.

The third rule is: Wherever possible go outside the experience of the enemy. Here you want to cause confusion, fear, and retreat.

The fourth rule is: Make the enemy live up to their own book of rules. You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.

The fourth rule carries within it the fifth rule: Ridicule is man’s most potent weapon. It is almost impossible to counterattack ridicule. Also it infuriates the opposition, who then react to your advantage.

The sixth rule is: A good tactic is one that your people enjoy. If your people are not having a ball doing it, there is something very wrong with the tactic.

The seventh rule: A tactic that drags on too long becomes a drag.

[…]

The twelfth rule: The price of a successful attack is a constructive alternative. You cannot risk being trapped by the enemy in his sudden agreement with your demand and saying “You’re right—we don’t know what to do about this issue. Now you tell us.”

The thirteenth rule: Pick the target, freeze it, personalize it, and polarize it.

Posted on May 17, 2012 at 7:20 AM • 74 Comments

Security Vulnerabilities in Airport Full-Body Scanners

According to a report from the DHS Office of Inspector General:

Federal investigators “identified vulnerabilities in the screening process” at domestic airports using so-called “full body scanners,” according to a classified internal Department of Homeland Security report.

EPIC obtained an unclassified version of the report in a FOIA response. Here’s the summary.

Posted on May 16, 2012 at 6:15 AM • 15 Comments

U.S. Exports Terrorism Fears

To New Zealand:

United States Secretary of Homeland Security Janet Napolitano has warned the New Zealand Government about the latest terrorist threat known as “body bombers.”

[…]

“Do we have specific credible evidence of a [body bomb] threat today? I would not say that we do, however, the importance is that we all lean forward.”

Why the headline of this article is “NZ warned over ‘body bombers,'” and not “Napolitano admits ‘no credible evidence’ of body bomber threat” is beyond me.

Posted on May 15, 2012 at 6:17 AM • 55 Comments

The Trouble with Airport Profiling

Why do otherwise rational people think it’s a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: “Muslims, or anyone who looks like he or she could conceivably be Muslim.”

This is a bad idea. It doesn’t make us any safer—and it actually puts us all at risk.

The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn’t, we’d be foolish to do so. Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he’s going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile…and then run away. This makes perfect sense, and is why evolution produced sheep—and other animals—that react this way. But this sort of profiling doesn’t work with humans at airports, for several reasons.

First, in the sheep’s case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn’t hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post 9/11, we’ve had 2 Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that’s one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chances that any profiled flier will be a Muslim terrorist are 1 in 80 million. Add the 19 9/11 terrorists—arguably a singular event—and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the “base rate fallacy,” and dooms any type of broad terrorist profiling, including the TSA’s behavioral profiling.
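
Here’s a minimal sketch of that arithmetic in Python. The ten-year window is my assumption (the post-9/11 figures above imply roughly a decade of air travel); the other numbers come straight from the paragraph above.

```python
# Rough reconstruction of the base-rate arithmetic above. The ten-year
# post-9/11 window is an assumption; the other figures are from the text.

annual_fliers = 630_000_000              # annual U.S. airplane fliers
muslim_fraction = 0.008                  # one estimate of Muslim Americans
profiled_fraction = 3 * muslim_fraction  # tripled for "Semitic-looking" fliers
years = 10                               # assumed post-9/11 window

profiled_trips = annual_fliers * profiled_fraction * years

for terrorists in (2, 2 + 19):           # shoe/underwear bombers; plus the 9/11 hijackers
    print(f"{terrorists:2d} terrorists -> 1 in {profiled_trips / terrorists:,.0f} profiled fliers")

# On the order of the 1-in-80-million and 1-in-8-million figures cited
# above: almost everyone the profile flags is innocent.
```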

Second, sheep can safely ignore animals that don’t look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else—most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdul Mutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don’t match it.

Third, wolves can’t deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent—and innocent-looking—travelers. Randomized secondary screening is more effective, especially since the goal isn’t to catch every plot but to create enough uncertainty that terrorists don’t even try.

And fourth, sheep don’t care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.

I too am incensed—but not surprised—when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that’s really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.

The proper reaction to screening horror stories isn’t to subject only “those people” to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.

This essay previously appeared on Forbes.com and Sam Harris’s blog.

Posted on May 14, 2012 at 6:19 AM • 84 Comments

Smart Phone Privacy App

MobileScope looks like a great tool for monitoring and controlling what information third parties get from your smart phone apps:

We built MobileScope as a proof-of-concept tool that automates much of what we were doing manually: monitoring mobile devices for surprising traffic and highlighting potentially privacy-revealing flows.

[…]

Unlike PCs, we have little control over the underlying privacy and security features of our mobile devices. They come pre-installed with locked-down operating systems that often restrict their owners from exercising meaningful control unless they’re willing to void their warranty and jailbreak the device.

Our current plans are to release MobileScope in the coming weeks and allow interested consumers, developers, regulators, and press to see what information their mobile devices can transmit.

Posted on May 11, 2012 at 6:42 AM • 29 Comments

RuggedCom Inserts Backdoor into Its Products

All RuggedCom equipment comes with a built-in backdoor:

The backdoor, which cannot be disabled, is found in all versions of the Rugged Operating System made by RuggedCom, according to independent researcher Justin W. Clarke, who works in the energy sector. The login credentials for the backdoor include a static username, “factory,” that was assigned by the vendor and can’t be changed by customers, and a dynamically generated password that is based on the individual MAC address, or media access control address, for any specific device.

This seems like a really bad idea.
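
To see why a MAC-derived password is no better than a hard-coded one, here’s a purely hypothetical sketch; the `backdoor_password` function and its SHA-1 scheme are invented for illustration and are not RuggedCom’s actual derivation. The structural problem is the same either way: once the derivation is known, anyone who can read a device’s MAC address—which is printed on the unit and visible on the local network—can compute its “dynamic” password.

```python
import hashlib

def backdoor_password(mac: str) -> str:
    """Hypothetical MAC-to-password derivation (illustration only).

    Any deterministic function of the MAC shares the same weakness:
    the MAC is not a secret, so nothing derived from it is either.
    """
    return hashlib.sha1(mac.lower().encode()).hexdigest()[:8]

# An attacker who knows the derivation needs nothing but the target's MAC:
print(backdoor_password("00:0a:dc:12:34:56"))
```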

No word from the company about whether they’re going to replace customer units.

EDITED TO ADD (5/11): RuggedCom’s response.

Posted on May 9, 2012 at 6:24 AM • 37 Comments

Overreacting to Potential Bombs

This is a ridiculous overreaction:

The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.

That’s the entire building, a 44-story, 2.5-million-square-foot office building. And why?

The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said “Complaint department: Take a number,” with a number attached to the pin.

It was addressed to someone at one of the financial institutions housed there and discovered by someone in the mail room.

If the grenade had been real, it could have destroyed—what?—a room. Of course, there’s no downside to Brookfield Properties overreacting.

Posted on May 8, 2012 at 7:03 AM • 67 Comments

Naval Drones

With all the talk about airborne drones like the Predator, it’s easy to forget that drones can be in the water as well. Meet the Common Unmanned Surface Vessel (CUSV):

The boat—painted in Navy gray and with a striking resemblance to a PT boat—is 39 feet long and can reach a top speed of 28 knots. Using a modified version of the unmanned Shadow surveillance aircraft technology that logged 700,000 hours of duty in the Middle East, the boat can be controlled remotely from 10 to 12 miles away from a command station on land, at sea or in the air, Haslett said.

Farther out, it can be switched to a satellite control system, which Textron said could expand its range to 1,200 miles. The boat could be launched from virtually any large Navy vessel.

[…]

Using diesel fuel, the boat could operate for up to 72 hours without refueling, depending upon its traveling speed and the weight of equipment being carried, said Stanley DeGeus, senior business development director for AAI’s advanced systems. The fuel supply could be extended for up to a week on slow-moving reconnaissance missions, he said.

Posted on May 7, 2012 at 6:52 AM • 30 Comments

Facial Recognition of Avatars

I suppose this sort of thing might be useful someday.

In Second Life, avatars are easily identified by their username, meaning police can just ask San Francisco-based Linden Labs, which runs the virtual world, to look up a particular user. But what happens when virtual worlds start running on peer-to-peer networks, leaving no central authority to appeal to? Then there would be no way of linking an avatar username to a human user.

Yampolskiy and colleagues have developed facial recognition techniques specifically tailored to avatars, since current algorithms only work on humans. “Not all avatars are human looking, and even with those that are humanoid there is a huge diversity of colour,” Yampolskiy says, so his software uses those colours to improve avatar recognition.

Posted on May 4, 2012 at 6:31 AM • 24 Comments

Criminal Intent Prescreening and the Base Rate Fallacy

I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless:

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations—an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes—which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
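
To make the false-positive paradox in the first quoted paragraph concrete, here is a minimal sketch, assuming the 99.99 percent figure applies both to catching terrorists and to clearing innocents (the quoted paragraph doesn’t distinguish the two):

```python
# False-positive paradox sketch: 1-in-1,000,000 base rate, with 99.99%
# accuracy assumed to be both the detection rate and the true-negative rate.

base_rate = 1 / 1_000_000
accuracy = 0.9999

population = 1_000_000
terrorists = population * base_rate            # ~1
innocents = population - terrorists            # ~999,999

true_positives = terrorists * accuracy         # ~1 flagged terrorist
false_positives = innocents * (1 - accuracy)   # ~100 flagged innocents

print(f"innocents flagged per terrorist caught: {false_positives / true_positives:.0f}")
print(f"chance a flagged person is a terrorist: "
      f"{true_positives / (true_positives + false_positives):.1%}")
# Roughly 100 innocents per real terrorist; a flagged person has about a
# 1% chance of being one. At FAST's reported ~70% accuracy it is far worse.
```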

It’s that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?

Posted on May 3, 2012 at 6:22 AM • 40 Comments

Al Qaeda Steganography

The reports are still early, but it seems that a bunch of terrorist planning documents were found embedded in a digital file of a porn movie.

Several weeks later, after laborious efforts to crack a password and software to make the file almost invisible, German investigators discovered encoded inside the actual video a treasure trove of intelligence—more than 100 al Qaeda documents that included an inside track on some of the terror group’s most audacious plots and a road map for future operations.

Posted on May 2, 2012 at 12:41 PM • 60 Comments

Cybercrime as a Tragedy of the Commons

Two very interesting points in this essay on cybercrime. The first is that cybercrime isn’t as big a problem as conventional wisdom makes it out to be.

We have examined cybercrime from an economics standpoint and found a story at odds with the conventional wisdom. A few criminals do well, but cybercrime is a relentless, low-profit struggle for the majority. Spamming, stealing passwords or pillaging bank accounts might appear a perfect business. Cybercriminals can be thousands of miles from the scene of the crime, they can download everything they need online, and there’s little training or capital outlay required. Almost anyone can do it.

Well, not really. Structurally, the economics of cybercrimes like spam and password-stealing are the same as those of fishing. Economics long ago established that common-access resources make for bad business opportunities. No matter how large the original opportunity, new entrants continue to arrive, driving the average return ever downward. Just as unregulated fish stocks are driven to exhaustion, there is never enough “easy money” to go around.

The second is that exaggerating the effects of cybercrime is a direct result of how the estimates are generated.

For one thing, in numeric surveys, errors are almost always upward: since the amounts of estimated losses must be positive, there’s no limit on the upside, but zero is a hard limit on the downside. As a consequence, respondent errors—or outright lies—cannot be canceled out. Even worse, errors get amplified when researchers scale between the survey group and the overall population.

Suppose we asked 5,000 people to report their cybercrime losses, which we will then extrapolate over a population of 200 million. Every dollar claimed gets multiplied by 40,000. A single individual who falsely claims $25,000 in losses adds a spurious $1 billion to the estimate. And since no one can claim negative losses, the error can’t be canceled.
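
Here’s a minimal sketch of that scaling effect, using the numbers from the example above:

```python
# Survey-extrapolation sketch: one exaggerated answer, scaled from a
# 5,000-person sample to a 200-million-person population, adds a billion
# dollars that no other respondent can cancel out, because reported
# losses can never be negative.

sample_size = 5_000
population = 200_000_000
scale = population / sample_size     # each respondent stands for 40,000 people

false_claim = 25_000                 # one respondent falsely claims $25,000
spurious_total = false_claim * scale

print(f"scaling factor: {scale:,.0f}x")
print(f"spurious losses added by one bad answer: ${spurious_total:,.0f}")
# -> 40,000x and $1,000,000,000, matching the figures in the essay.
```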

[…]

A cybercrime where profits are slim and competition is ruthless also offers simple explanations of facts that are otherwise puzzling. Credentials and stolen credit-card numbers are offered for sale at pennies on the dollar for the simple reason that they are hard to monetize. Cybercrime billionaires are hard to locate because there aren’t any. Few people know anyone who has lost substantial money because victims are far rarer than the exaggerated estimates would imply.

Posted on May 2, 2012 at 7:10 AM • 27 Comments

When Investigation Fails to Prevent Terrorism

I’ve long advocated investigation, intelligence, and emergency response as the places where we can most usefully spend our counterterrorism dollars. Here’s an example where that didn’t work:

Starting in April 1991, three FBI agents posed as members of an invented racist militia group called the Veterans Aryan Movement. According to their cover story, VAM members robbed armored cars, using the proceeds to buy weapons and support racist extremism. The lead agent was a Vietnam veteran with a background in narcotics, using the alias Dave Rossi.

Code-named PATCON, for “Patriot-conspiracy,” the investigation would last more than two years, crossing state and organizational lines in search of intelligence on the so-called Patriot movement, the label applied to a wildly diverse collection of racist, ultra-libertarian, right-wing and/or pro-gun activists and extremists who, over the years, have found common cause in their suspicion and fear of the federal government.

The undercover agents met some of the most infamous names in the movement, but their work never led to a single arrest. When McVeigh walked through the middle of the investigation in 1993, he went unnoticed.

The whole article is worth reading.

Posted on May 1, 2012 at 7:31 AM • 22 Comments
