Schneier on Security
A blog covering security and security technology.
April 2011 Archives
Friday Squid Blogging: Giant Squid Eye Preserved in a Jar
Great picture from the Smithsonian Institution.
This is a surprise. My TED talk made it to the website. It's a surprise because I didn't speak at TED. I spoke last year at a regional TED event, TEDxPSU. And not all talks from the regional events get on the main site, only the good ones.
EDITED TO ADD (5/13): A transcript.
EDITED TO ADD (5/14): Motley Fool article about the talk.
The Cyberwar Arms Race
Good paper: "Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy," by Jerry Brito and Tate Watkins.
Over the past two years there has been a steady drumbeat of alarmist rhetoric coming out of Washington about potential catastrophic cyber threats. For example, at a Senate Armed Services Committee hearing last year, Chairman Carl Levin said that "cyberweapons and cyberattacks potentially can be devastating, approaching weapons of mass destruction in their effects." Proposed responses include increased federal spending on cybersecurity and the regulation of private network security practices.
Also worth reading is an earlier paper by Sean Lawson: "Beyond Cyber Doom."
EDITED TO ADD (5/3): Good article on the paper.
Social Solidarity as an Effect of the 9/11 Terrorist Attacks
It's standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.
Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other's symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.
This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in face of a threat to the group. It's the kind of thing I am talking about in my new book.
Paper also available here.
Security Risks of Running an Open WiFi Network
Three stories, all along the same theme: a Buffalo man, a Sarasota man, and a Syracuse man all found themselves raided by the FBI or police after their wireless networks were allegedly used to download child pornography. "You're a creep... just admit it," one FBI agent was quoted as saying to the accused. In all three cases, the accused ended up off the hook after their files were examined and neighbors were found to be responsible for downloading child porn via unsecured WiFi networks.
EDITED TO ADD (4/29): The EFF is calling for an open wireless movement. I approve.
Hard-Drive Steganography through Fragmentation
Khan and his colleagues have written software that ensures clusters of a file, rather than being positioned at the whim of the disc drive controller chip, as is usually the case, are positioned according to a code. All the person at the other end needs to know is which file's cluster positions have been encoded.
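The core idea -- meaning carried by where clusters sit rather than what they contain -- can be illustrated with a toy encoder. This is an invented sketch, not Khan's actual software: it hides one bit per adjacent pair of cluster numbers by writing the pair in ascending order for a 0 and descending order for a 1.

```python
# Toy sketch of fragmentation-based steganography (not Khan's actual
# scheme): each adjacent pair of cluster numbers encodes one bit by
# its ordering -- ascending for 0, descending for 1.

def hide_bits(clusters, bits):
    """Reorder pairs of cluster numbers to encode bits."""
    assert len(clusters) >= 2 * len(bits)
    out = list(clusters)
    for i, bit in enumerate(bits):
        a, b = sorted(out[2 * i : 2 * i + 2])
        out[2 * i : 2 * i + 2] = [b, a] if bit else [a, b]
    return out

def recover_bits(clusters, nbits):
    """Read the hidden bits back from the pair ordering."""
    return [1 if clusters[2 * i] > clusters[2 * i + 1] else 0
            for i in range(nbits)]

placed = hide_bits([10, 11, 40, 41, 70, 71, 90, 91], [1, 0, 1, 1])
assert recover_bits(placed, 4) == [1, 0, 1, 1]
```

The receiver needs no key material beyond knowing which file's cluster layout to read, which is exactly the property the article describes.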
Friday Squid Blogging: Squid Prints
Okay, this is a little weird:
This year's Earth Day will again include the celebrated "squid printing" activity with two big, beautiful Pacific Humboldt squid donated from the Gulf of the Farallones National Marine Sanctuary. We'll be inking them up and laying them out on paper to create fascinating one-of-a- kind imprints of their bodies.
I don't know what's worse: that they're making prints from squid bodies, or that they're doing it "again."
Declassified World War I Security Documents
The CIA has just declassified six (1, 2, 3, 4, 5, and 6) documents about World War I security techniques. (The media is reporting they're CIA documents, but the CIA didn't exist before 1947.) Lots of stuff about secret writing and pre-computer tradecraft.
Large-Scale Food Theft
A criminal gang is stealing truckloads of food:
Late last month, a gang of thieves stole six tractor-trailer loads of tomatoes and a truck full of cucumbers from Florida growers. They also stole a truckload of frozen meat. The total value of the illegal haul: about $300,000.
It's a professional operation. The group knew how wholesale foodstuff trucking worked. They set up a bogus trucking company. They bid for jobs, collected the trailers, and disappeared. Presumably they knew how to fence the goods, too.
Costs of Security
Interesting blog post on the security costs for the $50B Air Force bomber program -- estimated to be $8B. This isn't all computer security, but the original article specifically calls out Chinese computer espionage as a primary threat.
Software as Evidence
Increasingly, chains of evidence include software steps. It's not just the RIAA suing people -- and getting it wrong -- based on automatic systems to detect and identify file sharers. It's forensic programs used to collect and analyze data from computers and smart phones. It's audit logs saved and stored by ISPs and websites. It's location data from cell phones. It's e-mails and IMs and comments posted to social networking sites. It's tallies from digital voting machines. It's images and meta-data from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.
Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is "Software on the Witness Stand: What Should it Take for Us to Trust it?"
We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.
From a presentation he gave on the subject:
Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?
WikiLeaks Cable about Chinese Hacking of U.S. Networks
Secret U.S. State Department cables, obtained by WikiLeaks and made available to Reuters by a third party, trace systems breaches -- colorfully code-named "Byzantine Hades" by U.S. investigators -- to the Chinese military. An April 2009 cable even pinpoints the attacks to a specific unit of China's People's Liberation Army.
By the way, reading this blog entry might be illegal under the U.S. Espionage Act:
Dear Americans: If you are not "authorized" personnel, but you have read, written about, commented upon, tweeted, spread links by "liking" on Facebook, shared by email, or otherwise discussed "classified" information disclosed from WikiLeaks, you could be implicated for crimes under the U.S. Espionage Act -- or so warns a legal expert who said the U.S. Espionage Act could make "felons of us all."
Maybe I should have warned you at the top of this post.
Friday Squid Blogging: Omega 3 Oil from Squid
New health supplement.
Schneier's Law
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break.
In 2004, Cory Doctorow called this Schneier's law:
...what I think of as Schneier's Law: "any person can invent a security system so clever that she or he can't think of how to break it."
The general idea is older than my phrasing. In The Codebreakers, David Kahn wrote:
Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.
The idea is even older. Back in 1864, Charles Babbage wrote:
One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.
My phrasing is different, though. Here's my original quote in context:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.
And here's me in 2006:
Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something.
And that's the point I want to make. It's not that people believe they can create an unbreakable cipher; it's that people create a cipher that they themselves can't break, and then use that as evidence they've created an unbreakable cipher.
EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments."
Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.
EDITED TO ADD (4/18): If I have any contribution to this, it's to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system's security.
Unanticipated Security Risk of Keeping Your Money in a Home Safe
In Japan, lots of people -- especially older people -- keep their life savings in cash in their homes. (The country's banks pay very low interest rates, so the incentive to deposit that money into bank accounts is lower than in other countries.) This is all well and good, until a tsunami destroys your home and washes your money out to sea. Then, when it washes up onto the beach, the police collect it:
One month after the March 11 tsunami devastated Ofunato and other nearby cities, police departments already stretched thin now face the growing task of managing lost wealth.
After three months, the money goes to the government.
Changing Incentives Creates Security Risks
One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.
An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students' test answers.
In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses -- and threatened them with termination -- for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.
It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students' tests by changing wrong answers to correct ones. That's how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
Teachers were always able to manipulate their students' test answers, but before, there wasn't much incentive to do so. With Rhee's changes, there was a much greater incentive to cheat.
The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing weren't sufficient to prevent teacher cheating afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.
This is not isolated to DC. It has happened elsewhere as well.
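The incentive shift can be caricatured with a toy expected-value model. All numbers below are invented for illustration; none come from the DC case.

```python
# Toy expected-value model of a teacher's cheating decision. The
# benefit, detection probability, and penalty are invented numbers,
# not figures from the DC scandal.

def expected_gain_from_cheating(benefit, p_caught, penalty):
    """Expected payoff of cheating versus honestly reporting scores."""
    return (1 - p_caught) * benefit - p_caught * penalty

# Before Rhee's changes: no bonus at stake, so cheating doesn't pay.
before = expected_gain_from_cheating(0, 0.05, 50_000)

# After: an $8,000 bonus plus a job worth (say) $60,000 is at stake,
# while detection probability and penalty stay the same.
after = expected_gain_from_cheating(8_000 + 60_000, 0.05, 50_000)

assert before < 0 < after  # same detection risk, opposite incentive
```

The model makes the post's point mechanical: if the benefit side of the ledger grows, the detection-and-penalty side must grow with it, or cheating becomes rational.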
Security Fears of Wi-Fi in London Underground
The London Underground is getting Wi-Fi. Of course there are security fears:
But Will Geddes, founder of ICP Group which specialises in reducing terror or technology-related threats, said the plan was problematic.
This is just silly. We could have a similar conversation regarding any piece of our infrastructure. Yes, the bad guys could use it, just as they use telephones and automobiles and all-night restaurants. If we didn't deploy technologies because of this fear, we'd still be living in the Middle Ages.
Euro Coin Recycling Scam
This story is just plain weird. Regularly, damaged coins are taken out of circulation. They're destroyed and then sold to scrap metal dealers. That makes sense, but it seems that one- and two-euro coins aren't destroyed very well. They're both bi-metal designs, and they're just separated into an inner core and an outer ring and then sold to Chinese scrap metal dealers. The dealers, being no dummies, put the two parts back together and sold them back to a German bank at face value. The bank was chosen because they accept damaged coins and don't inspect them very carefully.
Is this not entirely predictable? If you're going to take coins out of circulation, you had better use a metal shredder. (Except for pennies, which are worth more in component metals.)
Israel's Counter-Cyberterrorism Unit
You'd think the country would already have one of these:
Israel is mulling the creation of a counter-cyberterrorism unit designed to safeguard both government agencies and core private sector firms against hacking attacks.
How did the CIA and FBI Know that Australian Government Computers were Hacked?
Newspapers are reporting that, for about a month, hackers had access to computers "of at least 10 federal ministers including the Prime Minister, Foreign Minister and Defence Minister."
That's not much of a surprise. What is odd is the statement that "Australian intelligence agencies were tipped off to the cyber-spy raid by US intelligence officials within the Central Intelligence Agency and the Federal Bureau of Investigation."
How did the CIA and the FBI know? Did they see some intelligence traffic and assume that those computers were where the stolen e-mails were coming from? Or something else?
New French Law Reduces Website Security
I didn't know about this:
The law obliges a range of e-commerce sites, video and music services and webmail providers to keep a host of data on customers.
The social benefits of anonymity aside, we're all more secure if these websites do not have a file of everyone's plaintext password.
EDITED TO ADD (4/12): Seems that the BBC article misstated the law. Companies have to retain information they already collect for a year after it is no longer required. So if they're not already storing plaintext passwords, they don't have to start.
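The plaintext-password point is worth making concrete. Here is a minimal sketch of the alternative: store only a salted, deliberately slow hash and compare on login. The parameter choices are illustrative, not a recommendation.

```python
import hashlib
import hmac
import os

# A minimal sketch of why no website needs a file of plaintext
# passwords: keep only a salted, slow hash, and compare candidates
# against it. Iteration count and salt size here are illustrative.

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) for storage; never store the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

A site built this way can satisfy a data-retention mandate for the records it keeps without ever being able to hand over anyone's actual password.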
The CIA and Assassinations
The former CIA general counsel, John A. Rizzo, talks about his agency's assassination program, which has increased dramatically under the Obama administration:
The hub of activity for the targeted killings is the CIA's Counterterrorist Center, where lawyers -- there are roughly 10 of them, says Rizzo -- write a cable asserting that an individual poses a grave threat to the United States. The CIA cables are legalistic and carefully argued, often running up to five pages. Michael Scheuer, who used to be in charge of the CIA's Osama bin Laden unit, describes "a dossier," or a "two-page document," along with "an appendix with supporting information, if anybody wanted to read all of it." The dossier, he says, "would go to the lawyers, and they would decide. They were very picky." Sometimes, Scheuer says, the hurdles may have been too high. "Very often this caused a missed opportunity. The whole idea that people got shot because someone has a hunch -- I only wish that was true. If it were, there would be a lot more bad guys dead."
And the ACLU Deputy Legal Director on the interview:
What was most remarkable about the interview, though, was not what Rizzo said but that it was Rizzo who said it. For more than six years until his retirement in December 2009, Rizzo was the CIA's acting general counsel -- the agency's chief lawyer. On his watch the CIA had sought to quash a Freedom of Information Act lawsuit by arguing that national security would be harmed irreparably if the CIA were to acknowledge any detail about the targeted killing program, even the program's mere existence.
Friday Squid Blogging: A New Book About Squid
Kraken is the traditional name for gigantic sea monsters, and this book introduces one of the most charismatic, enigmatic, and curious inhabitants of the sea: the squid. The pages take the reader on a wild narrative ride through the world of squid science and adventure, along the way addressing some riddles about what intelligence is, and what monsters lie in the deep. In addition to squid, both giant and otherwise, Kraken examines other equally enthralling cephalopods, including the octopus and the cuttlefish, and explores their otherworldly abilities, such as camouflage and bioluminescence. Accessible and entertaining, Kraken is also the first substantial volume on the subject in more than a decade and a must for fans of popular science.
Seems to be getting good reviews.
Get Your Terrorist Alerts on Facebook and Twitter
Colors are so last decade:
The U.S. government's new system to replace the five color-coded terror alerts will have two levels of warnings -- elevated and imminent -- that will be relayed to the public only under certain circumstances for limited periods of time, sometimes using Facebook and Twitter, according to a draft Homeland Security Department plan obtained by The Associated Press.
Specific and limited are good. Twitter and Facebook: I'm not so sure.
But what could go wrong?
An errant keystroke touched off a brief panic Thursday at the University of Illinois at Urbana-Champaign when an emergency message accidentally was sent out saying an "active shooter" was on campus.
Pinpointing a Computer to Within 690 Meters
This is impressive, and scary:
Every computer connected to the web has an internet protocol (IP) address, but there is no simple way to map this to a physical location. The current best system can be out by as much as 35 kilometres.
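The basic physics behind any delay-based geolocation scheme is simple, even though the system in the article refines it with many landmark servers of known location. A toy sketch of the bound one round-trip time gives you (the km-per-ms figure is a rough rule of thumb, not the researchers' parameter):

```python
# Toy sketch of delay-bound geolocation: a packet's round-trip time
# caps how far away the host can physically be. Real systems (like
# the one in the article) intersect such bounds from many landmark
# servers; the propagation constant here is a rough rule of thumb.

def max_distance_km(rtt_ms, km_per_ms=100):
    """Upper bound on distance implied by a round-trip time."""
    return (rtt_ms / 2) * km_per_ms

# A 2 ms round trip means the host is within roughly 100 km.
assert max_distance_km(2.0) == 100.0

# Tighter measurements from more, closer landmarks shrink the
# intersection of these circles -- which is how accuracy improves
# from tens of kilometres toward hundreds of metres.
assert max_distance_km(0.5) < max_distance_km(2.0)
```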
Detecting Cheaters
Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating -- otherwise, the social group would fall apart.
Perhaps the most vivid demonstration of this can be seen with variations on what's known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it's used more as a demonstration of evolutionary psychology. But before we get to the experiment, let's get into the mathematical background.
Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they're not generally considered to be easy classes. Two particular rules of inference are relevant here: modus ponens and modus tollens. Both allow you to reason from a statement of the form, "if P, then Q." (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) Modus ponens goes like this:
If P, then Q. P. Therefore, Q.
In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,
If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.
Modus tollens is more complicated:
If P, then Q. Not Q. Therefore, not P.
If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.
This makes sense: if Socrates was not mortal, then he was a demigod or a stone statue or something.
Both are valid forms of logical reasoning. If you know "if P, then Q" and "P," then you know "Q." If you know "if P, then Q" and "not Q," then you know "not P." (The other two similar forms don't work. If you know "if P, then Q" and "Q," you don't know anything about "P." And if you know "if P, then Q" and "not P," then you don't know anything about "Q.")
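All four forms can be checked mechanically by brute force over truth assignments; a form is valid exactly when the conclusion holds in every assignment where all the premises hold. A small sketch (the function names are mine):

```python
from itertools import product

def implies(p, q):
    """Material implication: 'if P, then Q'."""
    return (not p) or q

def valid(premises, conclusion):
    """A form is valid iff the conclusion holds whenever all
    premises hold, across every truth assignment to P and Q."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: if P then Q, and P; therefore Q.
assert valid([implies, lambda p, q: p], lambda p, q: q)
# Modus tollens: if P then Q, and not Q; therefore not P.
assert valid([implies, lambda p, q: not q], lambda p, q: not p)
# Affirming the consequent (invalid): if P then Q, and Q; therefore P.
assert not valid([implies, lambda p, q: q], lambda p, q: p)
# Denying the antecedent (invalid): if P then Q, and not P; therefore not Q.
assert not valid([implies, lambda p, q: not p], lambda p, q: not q)
```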
If I explained this in front of an audience full of normal people, not mathematicians or philosophers, most of them would be lost. Unsurprisingly, they would have trouble either explaining the rules or using them properly. Just ask any grad student who has had to teach a formal logic class; people have trouble with this.
Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, "If a person travels to Boston, then he or she takes a plane." The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: "went to Boston," "went to New York," "took a plane," and "took a car." Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there's the general rule: if P, then Q. The four cards are "P," "not P," "Q," and "not Q." To verify that "if P, then Q" is a valid rule, you have to verify modus ponens by turning over the "P" card and making sure that the reverse says "Q." To verify modus tollens, you turn over the "not Q" card and make sure that the reverse doesn't say "P."
Shifting back to the example, you need to turn over the "went to Boston" card to make sure that person took a plane, and you need to turn over the "took a car" card to make sure that person didn't go to Boston. You don't -- as many people think -- need to turn over the "took a plane" card to see if it says "went to Boston" because you don't care. The person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn't break the rule if someone flies elsewhere.
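Stated this way, the selection rule is mechanical. A toy sketch with invented helper names, using the travel example's labels: flip a card if its visible face is the antecedent "P", or if it is a consequent-side face other than "Q" (that is, "not Q").

```python
# Toy sketch of the Wason selection rule. Each card shows either a
# destination (the antecedent side) or a mode of transport (the
# consequent side); only "P" and "not Q" faces need checking.

def cards_to_flip(cards, p, q):
    flips = []
    for category, face in cards:
        if category == "destination" and face == p:
            flips.append(face)   # check modus ponens: back must say Q
        elif category == "transport" and face != q:
            flips.append(face)   # check modus tollens: back must not say P
    return flips

cards = [("destination", "went to Boston"),
         ("destination", "went to New York"),
         ("transport", "took a plane"),
         ("transport", "took a car")]

assert cards_to_flip(cards, p="went to Boston", q="took a plane") \
       == ["went to Boston", "took a car"]
```

Note that "went to New York" (not P) and "took a plane" (Q) never appear in the answer, matching the argument above: neither can violate the rule no matter what is on its back.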
If you're confused, you aren't alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I've seen is "fewer than 25 percent." Training in formal logic doesn't seem to help very much. Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task.
This isn't just another "math is hard" story. There's a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family -- "gets dessert," "doesn't get dessert," "ate vegetables," and "didn't eat vegetables" -- and change the rule to "If a child gets dessert, he or she ate his or her vegetables." Many people -- 65 to 80 percent -- get it right immediately. They turn over the "gets dessert" card, making sure the child ate his vegetables, and they turn over the "didn't eat vegetables" card, making sure the child didn't get dessert. Another way of saying this is that they turn over the "benefit received" card to make sure the cost was paid. And they turn over the "cost not paid" card to make sure no benefit was received. They look for cheaters.
The difference is startling. Subjects don't need formal logic training. They don't need math or philosophy. When asked to explain their reasoning, they say things like the answer "popped out at them."
Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiriar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.
In the world of propositional calculus, there's absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there's an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It's of the form "If you take Benefit B, you must first satisfy Requirement R."
Our brains are optimized to detect cheaters in a social exchange. We're good at it. Even as children, we intuitively notice when someone gets a benefit he didn't pay the cost for. Those of us who grew up with a sibling have experienced how the one child not only knew that the other cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn't fair, but we still know who among our friends cheats in social exchanges. We know who doesn't pay his or her fair share of a group meal. At an airport, we might not notice the rule "If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights." But we'll certainly notice who breaks the "If you board first, then you must be a first-class passenger" rule.
EDITED TO ADD (4/14): Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.
Optical Stun Ray
It's been patented; no idea if it actually works.
...newly patented device can render an assailant helpless with a brief flash of high-intensity light. It works by overloading the neural networks connected to the retina, saturating the target's world in a blinding pool of white light. "It's the inverse of blindness -- the technical term is a loss of contrast sensitivity," says Todd Eisenberg, the engineer who invented the device. "The typical response is for the person to freeze. Law enforcement can easily walk up and apprehend [the suspect]."
Counterterrorism Security Cost-Benefit Analysis
"Terror, Security, and Money: Balancing the Risks, Benefits, and Costs of Homeland Security," by John Mueller and Mark Stewart:
Abstract: The cumulative increase in expenditures on US domestic homeland security over the decade since 9/11 exceeds one trillion dollars. It is clearly time to examine these massive expenditures applying risk assessment and cost-benefit approaches that have been standard for decades. Thus far, officials do not seem to have done so and have engaged in various forms of probability neglect by focusing on worst case scenarios; adding, rather than multiplying, the probabilities; assessing relative, rather than absolute, risk; and inflating terrorist capacities and the importance of potential terrorist targets. We find that enhanced expenditures have been excessive: to be deemed cost-effective in analyses that substantially bias the consideration toward the opposite conclusion, they would have to deter, prevent, foil, or protect against 1,667 otherwise successful Times-Square type attacks per year, or more than four per day. Although there are emotional and political pressures on the terrorism issue, this does not relieve politicians and bureaucrats of the fundamental responsibility of informing the public of the limited risk that terrorism presents and of seeking to expend funds wisely. Moreover, political concerns may be over-wrought: restrained reaction has often proved to be entirely acceptable politically.
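The break-even logic in the abstract is simple division. The dollar inputs below are illustrative values chosen to reproduce the headline figure, not the authors' actual inputs:

```python
# Back-of-the-envelope version of the paper's break-even test. The
# two dollar figures are invented to reproduce the abstract's
# headline number; see the paper for the authors' real inputs.

annual_spending = 100e9   # assumed enhanced homeland-security spend per year
loss_per_attack = 60e6    # assumed losses averted per Times-Square-type attack

breakeven_per_year = annual_spending / loss_per_attack
breakeven_per_day = breakeven_per_year / 365

assert round(breakeven_per_year) == 1667   # "1,667 attacks per year"
assert breakeven_per_day > 4               # "more than four per day"
```

The exercise shows why the conclusion is robust: any plausible pairing of spending and per-attack losses leaves a break-even attack rate far above anything observed.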
Yes, millions of names and e-mail addresses might have been stolen. Yes, other customer information might have been stolen, too. Yes, this personal information could be used to create more personalized and better targeted phishing attacks.
So what? These sorts of breaches happen all the time, and even more personal information is stolen.
I get that over 50 companies were affected, and some of them are big names. But the hack of the century? Hardly.
Reducing Bribery by Legalizing the Giving of Bribes
Here's some very clever thinking from India's chief economic adviser. In order to reduce bribery, he proposes legalizing the giving of bribes:
Under the current law, discussed in some detail in the next section, once a bribe is given, the bribe giver and the bribe taker become partners in crime. It is in their joint interest to keep this fact hidden from the authorities and to be fugitives from the law, because, if caught, both expect to be punished. Under the kind of revised law that I am proposing here, once a bribe is given and the bribe giver collects whatever she is trying to acquire by giving the money, the interests of the bribe taker and bribe giver become completely orthogonal to each other. If caught, the bribe giver will go scot free and will be able to collect his bribe money back. The bribe taker, on the other hand, loses the booty of bribe and faces a hefty punishment.
He notes that this only works for a certain class of bribes: when you have to bribe officials for something you are already entitled to receive. It won't work for any long-term bribery relationship, or in any situation where the briber would otherwise not want the bribe to become public.
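The incentive flip he describes can be sketched as a toy payoff comparison. The amounts are invented; the point is the sign change in the bribe giver's incentive to report.

```python
# Toy payoff table for the proposal (all amounts invented). Under
# current law the giver is punished too, so both parties prefer
# silence; under the proposal, the giver profits by reporting.

bribe, fine = 1_000, 5_000

# Current law: reporting implicates the giver as well.
giver_report_now = -bribe - fine
giver_silent_now = -bribe
assert giver_silent_now > giver_report_now   # silence wins: shared secret

# Proposed law: the giver goes scot-free and recovers the bribe.
giver_report_new = 0
giver_silent_new = -bribe
assert giver_report_new > giver_silent_new   # reporting wins: interests diverge
```

This is why the proposal only works for the class of bribes he identifies: the flip depends on the giver having nothing to lose once the service is already delivered.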
Ebook Fraud
Interesting post -- and discussion -- on Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in these two interesting blog posts. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called "Autopilot Kindle Cash," which promises to teach people how to post dozens of ebooks to Amazon.com per day.
The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn't own the copyright. It could be a book that isn't already available as an ebook, or it could be a "low cost" version of a book that is already available. Amazon doesn't seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.
Broadly speaking, there's nothing new here. All complex ecosystems have parasites, and every open communications system we've ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It'll be interesting to see what particular sort of mix works in this case.
Friday Squid Blogging: Shower Squid
34 SCADA Vulnerabilities Published
It's hard to tell how serious this is.
Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.
Powered by Movable Type. Photo at top by Geoffrey Stone.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.