Schneier on Security
A blog covering security and security technology.
May 2012 Archives
The criminals, some of them former drug dealers, outwit the Internal Revenue Service by filing a return before the legitimate taxpayer files. Then the criminals receive the refund, sometimes by check but more often through a convenient but hard-to-trace prepaid debit card.
The problem is that it doesn't take much identity information to file a tax return with the IRS, and the agency automatically corrects your mistakes if you make them -- and does the calculations for you if you don't want to do them yourself. So it's pretty easy to file a fake return for someone. And the IRS has no way to check if the taxpayer's address is real, so it sends refunds out to whatever address or account you give it.
A particularly clever form of retail theft -- especially when salesclerks are working fast and don't know the products -- is to switch bar codes. This particular thief stole Lego sets. If you know Lego, you know there's a vast price difference between the small sets and the large ones. He was caught by in-store surveillance.
EDITED TO ADD (6/12): I posted about a similar story in 2008.
When I talk about Liars and Outliers to security audiences, one of the things I stress is that our traditional security focus -- on technical countermeasures -- is much narrower than it could be. Leveraging moral, reputational, and institutional pressures is likely to be much more effective in motivating cooperative behavior.
This story illustrates the point. It's about the psychology of fraud, "why good people do bad things."
There is, she says, a common misperception that at moments like this, when people face an ethical decision, they clearly understand the choice that they are making.
Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don't fully explain it. They're interested in another possible explanation: Human beings commit fraud because human beings like each other.
The article even has some concrete security ideas:
Now if these psychologists and economists are right, if we are all capable of behaving profoundly unethically without realizing it, then our workplaces and regulations are poorly organized. They're not designed to take into account the cognitively flawed human beings that we are. They don't attempt to structure things around our weaknesses.
Along similar lines, some years ago Ross Anderson made the suggestion that the webpages of people's online bank accounts should include their photographs, based on the research that it's harder to commit fraud against someone you identify with as a person.
Two excellent papers on this topic:
Abstract of the second paper:
Dishonesty plays a large role in the economy. Causes for (dis)honest behavior seem to be based partially on external rewards, and partially on internal rewards. Here, we investigate how such external and internal rewards work in concert to produce (dis)honesty. We propose and test a theory of self-concept maintenance that allows people to engage to some level in dishonest behavior, thereby benefiting from external benefits of dishonesty, while maintaining their positive view about themselves in terms of being honest individuals. The results show that (1) given the opportunity to engage in beneficial dishonesty, people will engage in such behaviors; (2) the amount of dishonesty is largely insensitive to either the expected external benefits or the costs associated with the deceptive acts; (3) people know about their actions but do not update their self-concepts; (4) causing people to become more aware of their internal standards for honesty decreases their tendency for deception; and (5) increasing the "degrees of freedom" that people have to interpret their actions increases their tendency for deception. We suggest that dishonesty governed by self-concept maintenance is likely to be prevalent in the economy, and understanding it has important implications for designing effective methods to curb dishonesty.
The context is tornado warnings:
The basic problem, Smith says, is that sirens are sounded too often in most places. Sometimes they sound in an entire county for a warning that covers just a sliver of it; sometimes for other thunderstorm phenomena like large hail and/or strong straight-line winds; and sometimes for false alarms: warnings for tornadoes that were incorrectly detected.
We all knew this was possible, but researchers have found the exploit in the wild:
Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.
Here's the draft paper:
Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.
One researcher maintains that this is not malicious:
Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn't malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).
EDITED TO ADD (6/10): A response from the chip manufacturer.
The researchers' assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi's customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.
A response from the researchers.
In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.
The legal kind. It's interesting:
Q: How realistic are movies that show people breaking into vaults?
Remember my rebuttal of Sam Harris's essay advocating the profiling of Muslims at airports? That wasn't the end of it. Harris and I conducted a back-and-forth e-mail discussion, the results of which are here. At 14,000+ words, I only recommend it for the most stalwart of readers.
Seems that squid ink hasn't changed much in 160 million years. From this, researchers argue that the security mechanism of spraying ink into the water and escaping is also that old.
Simon and his colleagues used a combination of direct, high-resolution chemical techniques to determine that the melanin had been preserved. The researchers also compared the chemical composition of the ancient squid ink remains to that of modern squid ink from Sepia officinalis, a squid common to the Mediterranean, North and Baltic seas.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Although the plot was disrupted before a particular airline was targeted and tickets were purchased, al Qaeda's continued attempts to attack the U.S. speak to the organization's persistence and willingness to refine specific approaches to killing. Unlike Abdulmutallab's bomb, the new device contained lead azide, an explosive often used as a detonator. If the new underwear bomb had been used, the bomber would have ignited the lead azide, which would have triggered a more powerful explosive, possibly military-grade explosive pentaerythritol tetranitrate (PETN).
The interview is also interesting, and I am especially pleased to see this last answer:
What has been the most effective means of disrupting terrorism attacks?
A new study concludes that more people are worried about cyber threats than terrorism.
...the three highest priorities for Americans when it comes to security issues in the presidential campaign are:
Interesting essay on a trove of surveillance photos from Cold War-era Prague.
Cops, even secret cops, are for the most part ordinary people. Working stiffs concerned with holding down jobs and earning a living. Even those who thought it was important to find enemies recognized the absurdity of their task.
Interesting discussion of trust in this article on web hoaxes.
Kelly's students, like all good con artists, built their stories out of small, compelling details to give them a veneer of veracity. Ultimately, though, they aimed to succeed less by assembling convincing stories than by exploiting the trust of their marks, inducing them to lower their guard. Most of us assess arguments, at least initially, by assessing those who make them. Kelly's students built blogs with strong first-person voices, and hit back hard at skeptics. Those inclined to doubt the stories were forced to doubt their authors. They inserted articles into Wikipedia, trading on the credibility of that site. And they aimed at very specific communities: the "beer lovers of Baltimore" and Reddit.
Interesting paper: "The Perils of Social Reading," by Neil M. Richards, from the Georgetown Law Journal.
Abstract: Our law currently treats records of our reading habits under two contradictory rules: rules mandating confidentiality, and rules permitting disclosure. Recently, the rise of the social Internet has created more of these records and more pressures on when and how they should be shared. Companies like Facebook, in collaboration with many newspapers, have ushered in the era of "social reading," in which what we read may be "frictionlessly shared" with our friends and acquaintances. Disclosure and sharing are on the rise.
"Roots of Racism," by Elizabeth Culotta in Science:
Our attitudes toward outgroups are part of a threat-detection system that allows us to rapidly determine friend from foe, says psychologist Steven Neuberg of ASU Tempe. The problem, he says, is that like smoke detectors, the system is designed to give many false alarms rather than miss a true threat. So outgroup faces alarm us even when there is no danger.
Lots of interesting stuff in the article. Unfortunately, it requires registration to access.
Details are in the article, but here's the general idea:
Let's follow the flow of the users:
The most clever part of this is that it makes use of the natural externalities of the Internet.
And now let's see who has the incentives to fight this. It is fraud, right? But I think it is a well-executed type of fraud: it targets and defrauds the player with the least incentive to fight the scam.
Interesting article from Wired.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
In his blog:
I think the most important security issues going forward center around identity and trust. Before knowing I would soon encounter Bruce again in the media, I bought and read his new book Liars & Outliers and it is a must-read book for people looking forward into our security future and thinking about where this all leads. For my colleagues inside the government working the various identity management, security clearance, and risk-based security issues, L&O should be required reading.
I like this essay because it nicely illustrates the security mindset.
It was written in 1971, but this still seems like a cool book:
For an elementary illustration of tactics, take parts of your face as the point of reference; your eyes, your ears, and your nose. First the eyes: if you have organized a vast, mass-based people's organization, you can parade it visibly before the enemy and openly show your power. Second the ears; if your organization is small in numbers, then do what Gideon did: conceal the members in the dark but raise a din and clamor that will make the listener believe that your organization numbers many more than it does. Third, the nose; if your organization is too tiny even for noise, stink up the place.
Need some pre-industrial security for your USB drive? How about a wax seal? Neat, but I recommend combining it with encryption for even more security!
According to a report from the DHS Office of Inspector General:
Federal investigators "identified vulnerabilities in the screening process" at domestic airports using so-called "full body scanners," according to a classified internal Department of Homeland Security report.
To New Zealand:
United States Secretary of Homeland Security Janet Napolitano has warned the New Zealand Government about the latest terrorist threat known as "body bombers."
Why the headline of this article is "NZ warned over 'body bombers,'" and not "Napolitano admits 'no credible evidence' of body bomber threat" is beyond me.
Why do otherwise rational people think it's a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: "Muslims, or anyone who looks like he or she could conceivably be Muslim."
This is a bad idea. It doesn’t make us any safer -- and it actually puts us all at risk.
The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn't, we'd be foolish to do so. Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he's going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile...and then run away. This makes perfect sense, and is why evolution produced sheep -- and other animals -- that react this way. But this sort of profiling doesn't work with humans at airports, for several reasons.
First, in the sheep's case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn't hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn't true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post 9/11, we've had two Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that's one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chance that any profiled flier is a Muslim terrorist is 1 in 80 million. Add the 19 9/11 terrorists -- arguably a singular event -- and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the "base rate fallacy," and it dooms any type of broad terrorist profiling, including the TSA's behavioral profiling. (The short calculation after the fourth point below checks the arithmetic.)
Second, sheep can safely ignore animals that don't look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else -- most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdulmutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don't match it.
Third, wolves can't deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent -- and innocent-looking -- travelers. Randomized secondary screening is more effective, especially since the goal isn't to catch every plot but to create enough uncertainty that terrorists don’t even try.
And fourth, sheep don't care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.
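The base-rate arithmetic in the first point is easy to check. Here is a minimal sketch in Python; the flier count, the 0.8% estimate, and a roughly ten-year post-9/11 window are the essay's own ballpark assumptions, not precise data:

    # Base-rate arithmetic, using the essay's rough estimates.
    fliers_per_year = 630_000_000            # annual U.S. airplane fliers
    muslim_fraction = 0.008                  # one estimate of the Muslim-American share
    profiled_fraction = 3 * muslim_fraction  # tripled for others who "look Muslim"
    years = 10                               # roughly the post-9/11 period (assumption)

    profiled_fliers = fliers_per_year * profiled_fraction * years

    for label, terrorists in [("shoe + underwear bombers", 2),
                              ("adding the 19 9/11 hijackers", 21)]:
        print(f"{label}: 1 terrorist per {profiled_fliers / terrorists:,.0f} profiled fliers")

Run it and the two ratios come out near 1 in 80 million and 1 in 8 million, which is why almost everyone the profile selects will be innocent.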
I too am incensed -- but not surprised -- when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that's really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.
The proper reaction to screening horror stories isn't to subject only "those people" to it; it's to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn't make us safer, and it's not worth the cost. Even more strongly, security isn't our society's only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
MobileScope looks like a great tool for monitoring and controlling what information third parties get from your smart phone apps:
We built MobileScope as a proof-of-concept tool that automates much of what we were doing manually: monitoring mobile devices for surprising traffic and highlighting potentially privacy-revealing flows.
All RuggedCom equipment comes with a built-in backdoor:
The backdoor, which cannot be disabled, is found in all versions of the Rugged Operating System made by RuggedCom, according to independent researcher Justin W. Clarke, who works in the energy sector. The login credentials for the backdoor include a static username, "factory," that was assigned by the vendor and can't be changed by customers, and a dynamically generated password that is based on the individual MAC address, or media access control address, for any specific device.
This seems like a really bad idea.
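To see why deriving the password from the MAC address is so bad, here's a hypothetical sketch in Python. This is not RuggedCom's actual algorithm (Clarke's derivation is different); the point is that a password computed only from a public identifier can be computed by an attacker too, since the MAC address appears in every Ethernet frame the device sends:

    import hashlib

    def backdoor_password(mac: str) -> str:
        # Hypothetical derivation for illustration: hash the (public) MAC
        # address and keep a short digest. The flaw is independent of the
        # exact transformation: the only input is something anyone on the
        # network segment can observe.
        return hashlib.md5(mac.lower().encode()).hexdigest()[:8]

    # The vendor and the attacker both arrive at the same value:
    print(backdoor_password("00:0a:dc:12:34:56"))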
No word from the company about whether they're going to replace customer units.
EDITED TO ADD (5/11): RuggedCom's response.
We don't know much, but here are my predictions:
This is a ridiculous overreaction:
The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.
That's the entire building, a 44-story, 2.5-million-square-foot office building. And why?
The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said "Complaint department: Take a number," with a number attached to the pin.
If the grenade had been real, it could have destroyed -- what? -- a room. Of course, there's no downside to Brookfield Properties overreacting.
With all the talk about airborne drones like the Predator, it's easy to forget that drones can be in the water as well. Meet the Common Unmanned Surface Vessel (CUSV):
The boat -- painted in Navy gray and with a striking resemblance to a PT boat -- is 39 feet long and can reach a top speed of 28 knots. Using a modified version of the unmanned Shadow surveillance aircraft technology that logged 700,000 hours of duty in the Middle East, the boat can be controlled remotely from 10 to 12 miles away from a command station on land, at sea or in the air, Haslett said.
I suppose this sort of thing might be useful someday.
In Second Life, avatars are easily identified by their username, meaning police can just ask San Francisco-based Linden Labs, which runs the virtual world, to look up a particular user. But what happens when virtual worlds start running on peer-to-peer networks, leaving no central authority to appeal to? Then there would be no way of linking an avatar username to a human user.
I've often written about the base rate fallacy and how it makes tests for rare events -- like airplane terrorists -- useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA's FAST program is useless:
First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let's assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let's imagine the FAST algorithm correctly classifies 99.99 percent of observations -- an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.
It's that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?
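The quoted numbers are easy to reproduce. A minimal sketch in Python, using the essay's assumed one-in-a-million prevalence and 99.99 percent accuracy:

    population = 1_000_000       # screened travelers
    prevalence = 1 / 1_000_000   # assumed rate of about-to-act terrorists
    accuracy   = 0.9999          # FAST assumed 99.99% correct either way

    terrorists = population * prevalence          # 1
    innocents  = population - terrorists          # 999,999

    true_positives  = terrorists * accuracy       # terrorists correctly flagged
    false_positives = innocents * (1 - accuracy)  # innocents wrongly flagged

    print(f"innocents flagged per terrorist flagged: "
          f"{false_positives / true_positives:.0f}")

That prints roughly 100: the hundred-or-so wrongly accused people per real terrorist that the essay rounds to 99.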
The reports are still early, but it seems that a bunch of terrorist planning documents were found embedded in a digital file of a porn movie.
Several weeks later, after laborious efforts to crack a password and defeat software that made the file almost invisible, German investigators discovered encoded inside the actual video a treasure trove of intelligence -- more than 100 al Qaeda documents that included an inside track on some of the terror group's most audacious plots and a road map for future operations.
Two very interesting points in this essay on cybercrime. The first is that cybercrime isn't as big a problem as conventional wisdom makes it out to be.
We have examined cybercrime from an economics standpoint and found a story at odds with the conventional wisdom. A few criminals do well, but cybercrime is a relentless, low-profit struggle for the majority. Spamming, stealing passwords or pillaging bank accounts might appear a perfect business. Cybercriminals can be thousands of miles from the scene of the crime, they can download everything they need online, and there’s little training or capital outlay required. Almost anyone can do it.
The second is that exaggerating the effects of cybercrime is a direct result of how the estimates are generated.
For one thing, in numeric surveys, errors are almost always upward: since the amounts of estimated losses must be positive, there’s no limit on the upside, but zero is a hard limit on the downside. As a consequence, respondent errors -- or outright lies -- cannot be canceled out. Even worse, errors get amplified when researchers scale between the survey group and the overall population.
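A toy simulation makes the amplification concrete. This is a sketch with invented numbers, not the authors' data: one respondent's exaggerated loss, multiplied by the survey-to-population scaling factor, swamps everything else, and because losses can't be negative there is no offsetting error in the other direction:

    import random

    random.seed(1)

    population = 100_000_000           # people the survey is scaled up to
    sample     = 1_000                 # survey respondents
    scale      = population / sample   # each respondent "stands for" 100,000 people

    # True picture: almost everyone lost nothing; a few lost modest amounts.
    losses = [0] * 990 + [random.randint(100, 500) for _ in range(10)]
    honest_estimate = sum(losses) * scale

    # One respondent exaggerates by $10,000. Scaling turns that single
    # error into 10,000 * 100,000 = $1 billion of phantom losses.
    losses[0] += 10_000
    inflated_estimate = sum(losses) * scale

    print(f"estimate without the exaggeration: ${honest_estimate:,.0f}")
    print(f"estimate with one $10k lie:        ${inflated_estimate:,.0f}")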
I've long advocated investigation, intelligence, and emergency response as the places where we can most usefully spend our counterterrorism dollars. Here's an example where that didn't work:
Starting in April 1991, three FBI agents posed as members of an invented racist militia group called the Veterans Aryan Movement. According to their cover story, VAM members robbed armored cars, using the proceeds to buy weapons and support racist extremism. The lead agent was a Vietnam veteran with a background in narcotics, using the alias Dave Rossi.
The whole article is worth reading.