Schneier on Security
A blog covering security and security technology.
September 2009 Archives
New experiment demonstrates what we already knew:
That's because people tend to view their immediate emotions, such as their perceptions of threats or risks, as more intense and important than their previous emotions.
The most important issue of any encryption product is the 'bit key strength'. To date the strongest known algorithm has a 448-bit key. Crypteto now offers a 49,152-bit key. This means that for every extra 1 bit increase that Crypteto has over its competition makes it 100% stronger. The security and privacy this offers is staggering.
Yes, every key bit doubles an algorithm's strength against brute-force attacks. But it's hard to find any real meaning in a work factor of 2^49,152.
Coupled with this truly remarkable breakthrough Crypteto does not compromise on encryption speed. In the past, incremental key strength improvements have effected the speed that data is encrypted. The usual situation was that for every 1 bit increase in key strength there was a consequent reduction in encryption speed by 50%.
That's not even remotely true. It's not at all obvious how key length is related to encryption speed. Blowfish has the same speed, regardless of key length. AES-192 is about 20% slower than AES-128, and AES-256 is about 40% slower. Threefish, the block cipher inside Skein, encrypts data at 7.6 clock cycles/byte with a 256-bit key, 6.1 clock cycles/byte with a 512-bit key, and 6.5 clock cycles/byte with a 1024-bit key. I'm not claiming that Threefish is secure and ready for commercial use -- at any keylength -- but there simply isn't a chance that encryption speed will drop by half for every key bit added.
This is a fundamental asymmetry of cryptography, and it's important to get right. The cost to encrypt grows linearly with key length, while the cost to break grows exponentially. It's one of the reasons why, of all the links in a security chain, cryptography is the strongest.
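That asymmetry is easy to make concrete. The sketch below uses illustrative numbers only (the linear cost model and its constant are invented for the example, not measurements of any real cipher): each added key bit doubles the expected brute-force work, while at worst adding a tiny constant to the cost of encrypting.

```python
# Sketch: the asymmetry between encrypting and brute-forcing.
# Adding key bits grows attack cost exponentially but encryption
# cost only (at most) linearly -- illustrative numbers, not benchmarks.

def brute_force_work(key_bits):
    """Expected keys tried in an exhaustive search: 2^(k-1) on average."""
    return 2 ** (key_bits - 1)

def relative_encrypt_cost(key_bits, base_bits=128, cost_per_bit=0.003):
    """Toy linear model: each extra key bit adds a small constant cost."""
    return 1.0 + (key_bits - base_bits) * cost_per_bit

for k in (128, 192, 256):
    print(k, brute_force_work(k).bit_length(), round(relative_encrypt_cost(k), 2))
```

Going from 128 to 256 bits multiplies the attacker's work by 2^128 while, under this toy model, raising the defender's cost by well under half.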
Normally I wouldn't bother with this kind of thing, but they explicitly asked me to comment:
But Hawthorne Davies has overcome this issue. By offering an algorithm with an unequalled key strength of 49,152 bits, we are able to encrypt and decrypt data at speeds in excess of 8 megabytes per second. This means that the aforementioned Gigabyte of data would take 2 minutes 13 seconds. If Bruce Schneier, the United State's foremost cryptologist, were to increase his Blowfish 448 bit encryption algorithm to Blowfish 49152, he would be hard pressed to encrypt one Gigabyte in 4 hours.
I'm not a doctor of anything, but sure. Read my 1999 essay on snake-oil cryptography:
Warning Sign #5: Ridiculous key lengths.
Or read what I wrote about symmetric key lengths in 1996, in Applied Cryptography (pp. 157–8):
One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)
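The back-of-envelope arithmetic behind that argument is easy to reproduce. Assuming a computer running at the temperature of the cosmic background radiation (about 3.2 K, the figure used in the Applied Cryptography passage) and the kT minimum per state change, merely cycling a 256-bit counter through all its values takes an absurd amount of energy:

```python
# Back-of-envelope version of the Applied Cryptography argument:
# even an ideal computer running at the cosmic background temperature
# cannot enumerate a 256-bit counter on any plausible energy budget.

BOLTZMANN = 1.38e-23     # J/K
T_BACKGROUND = 3.2       # K, roughly the cosmic background radiation
SUN_OUTPUT = 1.21e34     # J, approximate annual energy output of the sun

energy_per_flip = BOLTZMANN * T_BACKGROUND   # minimum J per state change
total_energy = energy_per_flip * 2 ** 256    # to cycle a 256-bit counter

print(f"{total_energy:.1e} J")                       # ~5.1e54 J
print(f"{total_energy / SUN_OUTPUT:.1e} sun-years")  # ~4.2e20 years of solar output
```

That is hundreds of billions of billions of years of the sun's entire output, just to count -- before doing any work to test each key.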
Ten years later, there is still no reason to use anything more than a 256-bit symmetric key. I gave the same advice in 2003, in Practical Cryptography (pp. 65-6). Even a mythical quantum computer won't be able to brute-force that large a keyspace. (Public keys are different, of course -- see Table 2.2 of this NIST document for recommendations.)
Of course, in the real world there are smarter attacks than brute-force keysearch. The whole point of cipher cryptanalysis is to find shortcuts to brute-force search (like this attack on AES). But a 49,152-bit key is just plain stupid.
EDITED TO ADD (9/30): Now this is funny:
Some months ago I sent individual emails to each of seventeen experts in cryptology, all with the title of Doctor or Professor. My email was a first announcement to the academic world of the TOUAREG Encryption Algorithm, which, somewhat unusually, has a session key strength of over 49,000 bits and yet runs at 3 Megabytes per second. Bearing in mind that the strongest version of BLOWFISH has a session key of 448 bits and that every additional bit doubles the task of key-crashing, I imagined that my announcement would create more than a mild flutter of interest.
Much to his surprise, no one responded.
Here's some more advice: my 1998 essay, "Memo to the Amateur Cipher Designer." Anyone can design a cipher that he himself cannot break. It's not even hard. So when you tell a cryptographer that you've designed a cipher that you can't break, his first question will be "who the hell are you?" In other words, why should the fact that you can't break a cipher be considered evidence of the cipher's security?
If you want to design algorithms, start by breaking the ones out there. Practice by breaking algorithms that have already been broken (without peeking at the answers). Break something no one else has broken. Break another. Get your breaks published. When you have established yourself as someone who can break algorithms, then you can start designing new algorithms. Before then, no one will take you seriously.
EDITED TO ADD (9/30): I just did the math. An encryption speed of 8 megabytes per second on a 3.33 GHz CPU translates to about 400 clock cycles per byte. This is much, much slower than any of the AES finalists ten years ago, or any of the SHA-3 second round candidates today. It's kind of embarrassingly slow, really.
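The conversion is a one-liner: divide the clock rate by the throughput. Using the 3.33 GHz and 8 MB/s figures from the post:

```python
# The arithmetic behind the "embarrassingly slow" claim: converting a
# throughput figure into clock cycles per byte.

def cycles_per_byte(clock_hz, bytes_per_second):
    return clock_hz / bytes_per_second

cpb = cycles_per_byte(3.33e9, 8e6)   # 8 MB/s on a 3.33 GHz CPU
print(round(cpb))                    # ~416 cycles/byte
```

For comparison, fast block ciphers and hash functions run in the tens of cycles per byte or fewer -- Threefish's 6-8 cycles/byte, quoted above, is roughly sixty times faster.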
Technology moves so quickly we can barely keep up, and our legal system moves so slowly it can't keep up with itself. By design, the law is built up over time by court decisions, statutes and regulations. Sometimes even criminal laws are left vague, to be defined case by case. Technology exacerbates the problem of laws so open and vague that they are hard to abide by, to the point that we have all become potential criminals.
EDITED TO ADD (10/12): Audio interview with Harvey Silverglate.
Turns out "gaydar" can be automated:
Using data from the social network Facebook, they made a striking discovery: just by looking at a person's online friends, they could predict whether the person was gay. They did this with a software program that looked at the gender and sexuality of a person's friends and, using statistical analysis, made a prediction. The two students had no way of checking all of their predictions, but based on their own knowledge outside the Facebook world, their computer program appeared quite accurate for men, they said. People may be effectively "outing" themselves just by the virtual company they keep.
This sort of thing can be generalized:
The work has not been published in a scientific journal, but it provides a provocative warning note about privacy. Discussions of privacy often focus on how to best keep things secret, whether it is making sure online financial transactions are secure from intruders, or telling people to think twice before opening their lives too widely on blogs or online profiles. But this work shows that people may reveal information about themselves in another way, and without knowing they are making it public. Who we are can be revealed by, and even defined by, who our friends are: if all your friends are over 45, you're probably not a teenager; if they all belong to a particular religion, it's a decent bet that you do, too. The ability to connect with other people who have something in common is part of the power of social networks, but also a possible pitfall. If our friends reveal who we are, that challenges a conception of privacy built on the notion that there are things we tell, and things we don't.
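The article's "if all your friends are over 45, you're probably not a teenager" logic is, at its simplest, a majority vote over a person's friends. Here's a minimal sketch of that inference; the names, the friendship graph, and the attribute values are all invented for illustration:

```python
# A minimal sketch of the inference described above: guess a hidden
# attribute from the majority value among a person's friends.
# All names and data here are made up for illustration.

from collections import Counter

friends_of = {"alice": ["bob", "carol", "dave"]}
attribute = {"bob": "over45", "carol": "over45", "dave": "under45"}

def infer(person):
    votes = Counter(attribute[f] for f in friends_of[person] if f in attribute)
    value, count = votes.most_common(1)[0]
    return value, count / sum(votes.values())   # guess and its support

print(infer("alice"))   # ('over45', 0.666...)
```

The MIT project's statistical model was more sophisticated than a vote, but the privacy lesson is the same: the target never disclosed anything; the friends did.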
EDITED TO ADD (9/29): Better information from the MIT newspaper.
In computer security, a lot of effort is spent on the authentication problem. Whether it's passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated—and hopefully more secure—ways for you to prove you are who you say you are over the Internet.
This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you're no longer there? How do you unauthenticate yourself?
My home computer requires me to log out or turn my computer off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.
The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity—five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and I regularly get annoyed at my corporate e-mail system.
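The timeout mechanism itself is simple: every user action resets an idle timer, and authentication lapses once the timer exceeds the threshold. A sketch, using the five-minute figure from the text purely as an example (the class and its names are mine, not any particular system's):

```python
# Sketch of the inactivity-timeout logic described above: the session
# expires if no activity arrives within the timeout window. The
# 300-second default is the five-minute example from the text,
# not a recommendation.

import time

class Session:
    def __init__(self, timeout_seconds=300, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock
        self.last_activity = clock()

    def touch(self):
        """Call on every keystroke or click to reset the idle timer."""
        self.last_activity = self.clock()

    def is_authenticated(self):
        return self.clock() - self.last_activity < self.timeout

# Simulated clock so the example runs instantly:
now = [0.0]
s = Session(timeout_seconds=300, clock=lambda: now[0])
now[0] = 299.0
assert s.is_authenticated()
now[0] = 301.0
assert not s.is_authenticated()
```

All the difficulty is in choosing `timeout_seconds` -- the code is trivial; the usability trade-off isn't.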
Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.
That's expensive, though. A research project used a Bluetooth device, like a cellphone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.
Some systems log people out after every transaction. This wouldn't work for computers, but it can work for ATMs. The machine spits my card out before it gives me my cash, or just requires a card swipe, and makes sure I take it out of the machine. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.
There's a physical analogue that everyone can explain: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first instance is a system that automatically logs you out, and the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.
Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.
This essay originally appeared on ThreatPost.
Nobody tell the TSA, but last month someone tried to assassinate a Saudi prince by exploding a bomb stuffed in his rectum. He pretended to be a repentant militant, when in fact he was a Trojan horse:
The resulting explosion ripped al-Asiri to shreds but only lightly injured the shocked prince -- the target of al-Asiri's unsuccessful assassination attempt.
For years, I have made the joke about Richard Reid: "Just be glad that he wasn't the underwear bomber." Now, sadly, we have an example of one.
Lewis Page, an "improvised-device disposal operator tasked in support of the UK mainland police from 2001-2004," pointed out that this isn't much of a threat for three reasons: 1) you can't stuff a lot of explosives into a body cavity, 2) detonation is, um, problematic, and 3) the human body can stifle an explosion pretty effectively (think of someone throwing himself on a grenade to save his friends).
But who ever accused the TSA of being rational?
First one sighted in the Gulf since 1954:
The new specimen, weighing 103 pounds, was found during a preliminary survey of the Gulf during which scientists hope to identify the types of fish and squid that sperm whales feed on.
Texas Instruments' calculators use RSA digital signatures to authenticate any updates to their operating system. Unfortunately, their signing keys are too short: 512 bits. Earlier this month, a collaborative effort factored the moduli and published the private keys. Texas Instruments responded by threatening websites that published the keys with the DMCA, but it's too late.
So far, we have the operating-system signing keys for the TI-92+, TI-73, TI-89, TI-83+/TI-83+ Silver Edition, Voyage 200, TI-89 Titanium, and the TI-84+/TI-84 Silver Edition, and the date-stamp signing key for the TI-73, Explorer, TI-83 Plus, TI-83 Silver Edition, TI-84 Plus, TI-84 Silver Edition, TI-89, TI-89 Titanium, TI-92 Plus, and the Voyage 200.
Moral: Don't assume that if your application is obscure, or if there's no obvious financial incentive for doing so, that your cryptography won't be broken if you use too-short keys.
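Why factoring the modulus is game over: once p and q are known, anyone can derive the private exponent and sign whatever they like. A toy illustration with textbook-sized numbers (these are tiny stand-ins, not TI's actual 512-bit keys):

```python
# Toy illustration of why publishing the factors of a signing modulus
# ends the game: with p and q, anyone can derive the private exponent
# and forge signatures. Numbers here are textbook-tiny stand-ins.

p, q = 61, 53
n = p * q                    # public modulus (TI's were 512 bits)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent -- recoverable once p, q leak

msg = 42                     # stand-in for the hash of an OS image
forged_sig = pow(msg, d, n)  # anyone holding p, q can produce this
assert pow(forged_sig, e, n) == msg   # the device would accept it
print("forged signature verifies")
```

With a real 512-bit modulus the hard step is the factoring itself, which general-purpose hardware and the number field sieve had already made practical by 2009. Everything after that is the three lines of modular arithmetic above.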
It's not just hackers who steal financial and medical information:
Between April 2007 and January 2008, visitors to the Kmart and Sears web sites were invited to join an "online community" for which they would be paid $10 with the idea they would be helping the company learn more about their customers. It turned out they learned a lot more than participants realized or that the feds thought was reasonable.
After purchasing an Anastacia CD, the plaintiff played it in his computer but his anti-virus software set off an alert saying the disc was infected with a rootkit. He went on to test the CD on three other computers. As a result, the plaintiff ended up losing valuable data.
Included in the items the German army allowed humanitarian groups to distribute in care packages to imprisoned soldiers, the game was too innocent to raise suspicion. But it was the ideal size for a top-secret escape kit that could help spring British POWs from German war camps.
This is a good thing:
An Illinois district court has allowed a couple to sue their bank on the novel grounds that it may have failed to sufficiently secure their account, after an unidentified hacker obtained a $26,500 loan on the account using the customers' user name and password.
As I've previously written, this is the only way to mitigate this kind of fraud:
Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can't involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.
It's an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk. In this case, the account holders had nothing to do with the security of their account. They could not audit it. They could not improve it. The bank, on the other hand, has the ability to improve security and mitigate the risk, but because they pass the cost on to their customers, they have no incentive to do so. Litigation like this has the potential to fix the externality and improve security.
This is an important development:
Shor's algorithm was first demonstrated in a computing system based on nuclear magnetic resonance -- manipulating molecules in a solution with strong magnetic fields. It was later demonstrated with quantum optical methods but with the use of bulk components like mirrors and beam splitters that take up an unwieldy area of several square meters.
I've written about quantum computing (and quantum cryptography):
Several groups are working on designing and building a quantum computer, which is fundamentally different from a classical computer. If one were built -- and we're talking science fiction here -- then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it's not that dire: A quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical.
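The "effectively halve the key length" rule of thumb comes from Grover's algorithm, which searches an unstructured space of N keys in roughly sqrt(N) quantum steps. In terms of bits, that's a one-line calculation:

```python
# The "quantum computers halve symmetric key length" rule of thumb:
# Grover's algorithm searches 2^k keys in roughly sqrt(2^k) = 2^(k/2)
# steps, so k bits of key give about k/2 bits of post-quantum security.

import math

def effective_bits(key_bits):
    """Rough post-quantum brute-force security of a symmetric key."""
    return key_bits // 2

for k in (128, 256):
    quantum_steps = math.isqrt(2 ** k)   # exactly 2^(k/2) for even k
    print(k, "->", effective_bits(k), "bits;", quantum_steps == 2 ** (k // 2))
```

Which is exactly why 256-bit symmetric keys are already quantum-proof against brute force: halved, they're still as strong as 128-bit keys are today.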
Here's a really good essay on the realities of quantum computing and its potential effects on public-key cryptography.
Back in 2005, I wrote about the failure of two-factor authentication to mitigate banking fraud:
Here are two new active attacks we're starting to see:
Here's an example:
The theft happened despite Ferma's use of a one-time password, a six-digit code issued by a small electronic device every 30 or 60 seconds. Online thieves have adapted to this additional security by creating special programs--real-time Trojan horses--that can issue transactions to a bank while the account holder is online, turning the one-time password into a weak link in the financial security chain. "I think it's a broken model," Ferrari says.
Of course it's a broken model. We have to stop trying to authenticate the person; instead, we need to authenticate the transaction:
One way to think about this is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems are not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn't help.
More on mitigating identity theft.
For nine months, Eagle's team recorded data from the phones of 94 students and staff at MIT. By using blue-tooth technology and phone masts, they could monitor the movements of the participants, as well as their phone calls. Their main goal with this preliminary study was to compare data collected from the phones with subjective self-report data collected through traditional survey methodology.
According to the abstract:
Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.
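A crude sketch of the kind of classifier the abstract describes: flag a dyad as friends when their observed proximity and calling counts cross thresholds. The data and cutoffs below are invented for illustration; the actual study fit its model to the distinctive temporal and spatial patterns it found:

```python
# Sketch of the inference in the study: predict friendship for a pair
# from their observed proximity and calling behavior. The data and
# the cutoff values are invented for illustration.

def likely_friends(proximity_events, calls, prox_cutoff=5, call_cutoff=2):
    return proximity_events >= prox_cutoff and calls >= call_cutoff

dyads = {
    ("a", "b"): (12, 7),   # (off-hours proximity events, phone calls)
    ("a", "c"): (1, 0),
}
predicted = {pair for pair, (p, c) in dyads.items() if likely_friends(p, c)}
print(predicted)   # {('a', 'b')}
```

The striking result is the 95% accuracy from observational data alone: the phones were never asked who anyone's friends were.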
EDITED TO ADD (10/12): More information.
Good essay on "terrorist havens" -- like Afghanistan -- and why they're not as big a worry as some maintain:
Rationales for maintaining the counterinsurgency in Afghanistan are varied and complex, but they all center on one key tenet: that Afghanistan must not be allowed to again become a haven for terrorist groups, especially al-Qaeda.
Interview with Jonathan Coulton.
I wrote about the DHS's color-coded threat alert system in 2003, in Beyond Fear:
The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings.

If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.
Of course, the codes never became useful. There were never any actions associated with them. And we now know that their primary use was political. They were, and remain, a security joke.
This is what I wrote in 2004:
The DHS's threat warnings have been vague, indeterminate, and unspecific. The threat index goes from yellow to orange and back again, although no one is entirely sure what either level means. We've been warned that the terrorists might use helicopters, scuba gear, even cheap prescription drugs from Canada. New York and Washington, D.C., were put on high alert one day, and the next day told that the alert was based on information years old. The careful wording of these alerts allows them not to require any sound, confirmed, accurate intelligence information, while at the same time guaranteeing hysterical media coverage. This headline-grabbing stuff might make for good movie plots, but it doesn't make us safer.
Finally, in 2009, the DHS is considering changes to the system:
A proposal by the Homeland Security Advisory Council, unveiled late Tuesday, recommends removing two of the five colors, with a standard state of affairs being a "guarded" Yellow. The Green "low risk of terrorist attacks" might get removed altogether, meaning stay prepared for your morning subway commute to turn deadly at any moment.
That's right, according to the DHS the problem was too many levels. I hope you all feel safer now.
Here are some more whimsical designs, but I want the whole thing to be ditched. And it should be easy to ditch; no one thinks it has any value. Unfortunately, if the Obama Administration can't make this simple change, I don't think they have the political will to make any of the harder changes we need.
Using a 3D printer. Impressive.
At the end of the day he talked the officers into trying the key on their handcuffs and … it did work! At least the Dutch Police now knows there is a plastic key on the market that will open their handcuffs. A plastic key undetectable by metal detectors….
Skein is one of the 14 SHA-3 candidates chosen by NIST to advance to the second round. As part of the process, NIST allowed the algorithm designers to implement small "tweaks" to their algorithms. We've tweaked the rotation constants of Skein. This change does not affect Skein's performance in any way.
The revised Skein paper contains the new rotation constants, as well as information about how we chose them and why we changed them, the results of some new cryptanalysis, plus new IVs and test vectors. Revised source code is here.
The latest information on Skein is always here.
Tweaks were due today, September 15. Now the SHA-3 process moves into the second round. According to NIST's timeline, they'll choose a set of final round candidate algorithms in 2010, and then a single hash algorithm in 2012. Between now and then, it's up to all of us to evaluate the algorithms and let NIST know what we want. Cryptanalysis is important, of course, but so is performance.
Here's my 2008 essay on SHA-3. The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here.
In other news, we're making Skein shirts available to the public. Those of you who attended the First Hash Function Candidate Conference in Leuven, Belgium, earlier this year might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before 1 October, and then we'll have all the shirts made in one batch.
Back in 2002, science fiction author Robert J. Sawyer wrote an essay about the trade-off between privacy and security, and came out in favor of less privacy. I disagree with most of what he said, and have written pretty much the opposite essay -- and others on the value of privacy and the future of privacy -- several times since then.
The point of this blog entry isn't really to debate the topic, though. It's to reprint the opening paragraph of Sawyer's essay, which I've never forgotten:
Whenever I visit a tourist attraction that has a guest register, I always sign it. After all, you never know when you'll need an alibi.
Since I read that, whenever I see a tourist attraction with a guest register, I do the same thing. I sign "Robert J. Sawyer, Toronto, ON" -- because you never know when he'll need an alibi.
EDITED TO ADD (9/15): Sawyer's essay now has a preface, which states that he wrote it to promote a book of his:
The following was written as promotion for my science-fiction novel Hominids, and does not necessarily reflect the author's personal views.
In the comments below, though, Sawyer says that the essay does not reflect his personal views. So I'm not sure about the waffling on the essay page.
I am completely surprised that Sawyer's essay was fictional. For years I thought that he meant what he wrote, that it was a non-fiction essay written for a non-fiction publication. He has other essays on his website; I have no idea if any of those reflect his personal views. The whole thing makes absolutely no sense to me.
It's a mushroom: Pseudocolus fusiformis.
Me from 2006.
On September 30, 2001, I published a special issue of Crypto-Gram discussing the terrorist attacks. I wrote about the novelty of the attacks, airplane security, diagnosing intelligence failures, the potential of regulating cryptography -- because it could be used by the terrorists -- and protecting privacy and liberty. Much of what I wrote is still relevant today:
Appalled by the recent hijackings, many Americans have declared themselves willing to give up civil liberties in the name of security. They've declared it so loudly that this trade-off seems to be a fait accompli. Article after article talks about the balance between privacy and security, discussing whether various increases of security are worth the privacy and civil-liberty losses. Rarely do I see a discussion about whether this linkage is a valid one.
File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn't care about whether the file could be recovered or not, and a file erase program -- I use BCWipe for Windows -- if you wanted to ensure no one could ever recover the file.
You have to trust that these companies will delete your data when you ask them to, but they're generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies' backup systems. Gmail explicitly says this in its privacy notice.
Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you're not in control of the computers that are storing the data.
This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people's Kindles entirely.
Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one -- not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you -- will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won't be able to read it.
The details are complicated, but Vanish breaks the data's decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks -- machines constantly join and leave -- to make the data disappear. Unlike previous programs that supported file deletion, this one doesn't require you to trust any company, organisation, or website. It just happens.
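The core trick can be sketched in a few lines. Real Vanish uses Shamir threshold secret sharing scattered over a DHT; the toy below uses a simpler n-of-n XOR split (my simplification, not the actual Vanish design) to show the principle: keep no copy of the key yourself, and once churn loses even one share, the data is unreadable by everyone, forever.

```python
# Simplified sketch of the Vanish idea: split the decryption key into
# shares, scatter them, keep none. Real Vanish uses Shamir threshold
# sharing over a DHT; this n-of-n XOR split is a toy stand-in.

import os

def split_key(key: bytes, n: int) -> list:
    """XOR-split: all n shares are needed to reconstruct the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recombine(shares: list) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = os.urandom(16)
shares = split_key(key, 8)       # imagine these scattered across the network
assert recombine(shares) == key  # all shares present: data still readable
# once natural churn loses even one share, the key is gone for everyone
```

Note the property the essay emphasizes: no single party, including you, can recover the key alone, so there is no one to subpoena, hack, or coerce.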
Of course, Vanish doesn't prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle's deletion feature doesn't prevent people from copying a book's files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it's a good demonstration of how control affects file deletion. And while it's a step in the right direction, it's also new and therefore deserves further security analysis before being adopted on a wide scale.
We've lost the control of data on some of the computers we own, and we've lost control of our data in the cloud. We're not going to stop using Facebook and Twitter just because they're not going to delete our data when we ask them to, and we're not going to stop using Kindles and iPhones because they may delete our data when we don't want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.
Now we need something that will protect our data when a large corporation decides to delete it.
This essay originally appeared in The Guardian.
The BBC has a video demonstration of a 16-ounce bottle of liquid blowing a hole in the side of a plane.
I know no more details other than what's in the video.
Three of the UK liquid bombers were convicted Monday. NSA-intercepted e-mail was introduced as evidence in the trial:
The e-mails, several of which have been reprinted by the BBC and other publications, contained coded messages, according to prosecutors. They were intercepted by the NSA in 2006 but were not included in evidence introduced in a first trial against the three last year.
EDITED TO ADD (9/9): Just to be sure, this has nothing to do with any illegal warrantless wiretapping the NSA has done over the years; the wiretap used to intercept these e-mails was obtained with a FISA warrant.
A new class of global actors is playing an increasingly important role in globalization: smugglers, warlords, guerrillas, terrorists, gangs, and bandits of all stripes. Since the end of the Cold War, the global illicit economy has consistently grown at twice the rate of the licit global economy. Increasingly, illicit actors will represent not just an economic but a political force. As globalization hollows out traditional nation-states, what will fill the power vacuum in slums and hinterlands will be informal non-state governance structures. These zones will be globally connected, effectively run by local gangs, religious leaders, or quasi-tribal organizations -- organizations that will govern without aspiring to statehood.
Malware is one of Nils Gilman's examples, at about the nine-minute mark.
The seven rules of the illicit global economy (he seems to use "illicit" and "deviant" interchangeably in the talk):
Very interesting hour-long interview.
Australian-born David Kilcullen was the senior advisor to US General David Petraeus during his time in Iraq, advising on counterinsurgency. The implementation of his strategies is now regarded as a major turning point in the war.
Blog post from Ed Felten:
Usually when the threat model mentions subpoenas, the bigger threats in reality come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp's servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.
But, after many years of evaluating the security of software systems, I'm incredibly down on using the book that made Bruce famous when designing the cryptographic aspects of a system. In fact, I can safely say I have never seen a secure system come out the other end, when that is the primary source for the crypto design. And I don't mean that people forget about the buffer overflows. I mean, the crypto is crappy.
I agree. And, to his credit, Viega points out that I agree:
But in the introduction to Bruce Schneier's book, Practical Cryptography, he himself says that the world is filled with broken systems built from his earlier book. In fact, he wrote Practical Cryptography in hopes of rectifying the problem.
This is all true.
Designing a cryptosystem is hard. Just as you wouldn't give a person -- even a doctor -- a brain-surgery instruction manual and then expect him to operate on live patients, you shouldn't give an engineer a cryptography book and then expect him to design and implement a cryptosystem. The patient is unlikely to survive, and the cryptosystem is unlikely to be secure.
Even worse, security doesn't provide immediate feedback. A dead patient on the operating table tells the doctor that maybe he doesn't understand brain surgery just because he read a book, but an insecure cryptosystem works just fine. It's not until someone takes the time to break it that the engineer might realize that he didn't do as good a job as he thought. Remember: Anyone can design a security system that he himself cannot break. Even the experts regularly get it wrong. The odds that an amateur will get it right are extremely low.
For those who are interested, a second edition of Practical Cryptography will be published in early 2010, renamed Cryptography Engineering and featuring a third author: Tadayoshi Kohno.
EDITED TO ADD (9/16): Commentary.
Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there's more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.
Over the years, there's been a lot of work put into role-based access control. But despite the large number of academic papers and high-profile security products, most organizations don't implement it--at all--with the predictable security problems as a result.
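The concept itself is simple, which makes the implementation failures all the more striking. A minimal sketch in Python (the role, user, and permission names are all made up for illustration): users map to roles, roles map to permissions, and every access check consults both tables.

```python
# Minimal role-based access control sketch: permissions attach to
# roles, not to individual users, and users hold one or more roles.
ROLE_PERMISSIONS = {
    "clerk":   {"read_record"},
    "auditor": {"read_record", "export_sample"},
    "dba":     {"read_record", "export_sample", "export_all"},
}

USER_ROLES = {
    "alice": {"clerk"},
    "bob":   {"clerk", "auditor"},
}

def is_allowed(user, permission):
    """Grant access only if some role held by the user carries the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

The hard part isn't this check; it's keeping the two tables accurate as people change jobs, which is exactly where Johnson's research (below) says organizations fail.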
Regularly we read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets with them.
A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty's Revenue & Customs had to send a couple of thousand sample records from a database on all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of discs and put it in the mail than it was to select out just the records needed. Unfortunately, the discs got lost in the mail and the story was a huge embarrassment for the government.
Eric Johnson at Dartmouth's Tuck School of Business has been studying the problem, and his results won't startle anyone who has thought about it at all. RBAC is very hard to implement correctly. Organizations generally don't even know who has what role. The employee doesn't know, the boss doesn't know--and these days the employee might have more than one boss--and senior management certainly doesn't know. There's a reason RBAC came out of the military; in that world, command structures are simple and well-defined.
Even worse, employees' roles change all the time--Johnson chronicled one business group of 3,000 people that made 1,000 role changes in just three months--and it's often not obvious what information an employee needs until he actually needs it. And information simply isn't that granular. Just as it's much easier to give someone access to an entire file cabinet than to only the particular files he needs, it's much easier to give someone access to an entire database than only the particular records he needs.
This means that organizations either over-entitle or under-entitle employees. But since getting the job done is more important than anything else, organizations tend to over-entitle. Johnson estimates that 50 percent to 90 percent of employees are over-entitled in large organizations. In the uncommon instance where an employee needs access to something he normally doesn't have, there's generally some process for him to get it. And access is almost never revoked once it's been granted. In large formal organizations, Johnson was able to predict how long an employee had worked there based on how much access he had.
Clearly, organizations can do better. Johnson's current work involves building access-control systems with easy self-escalation, audit to make sure that power isn't abused, violation penalties (Intel, for example, issues "speeding tickets" to violators), and compliance rewards. His goal is to implement incentives and controls that manage access without making people too risk-averse.
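Johnson's systems aren't public in code form, but the core idea--never block a good-faith request, record out-of-entitlement access, and review the record later--can be sketched in a few lines of Python (the user and resource names are hypothetical):

```python
from datetime import datetime, timezone

audit_log = []                              # reviewed after the fact
entitlements = {("alice", "payroll_db")}    # normal, pre-approved access

def access(user, resource):
    """Allow self-escalation: grant every request, but log any access
    outside the user's entitlements so auditors can catch abuse later."""
    if (user, resource) not in entitlements:
        audit_log.append({
            "user": user,
            "resource": resource,
            "time": datetime.now(timezone.utc).isoformat(),
            "note": "self-escalated",
        })
    return True  # getting the job done comes first
```

The design choice is to move enforcement from the front of the transaction (prevention) to the back (detection and penalty), which is what makes the "speeding ticket" model work.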
In the end, a perfect access control system just isn't possible; organizations are simply too chaotic for it to work. And any good system will allow a certain number of access control violations, if they're made in good faith by people just trying to do their jobs. The "speeding ticket" analogy is better than it looks: we post limits of 55 miles per hour, but generally don't start ticketing people unless they're going over 70.
This essay previously appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus's response here -- after you answer some nosy questions to get a free account.
Blog post from Steve Bellovin:
It is vital that the keystream values (a) be truly random and (b) never be reused. The Soviets got that wrong in the 1940s; as a result, the U.S. Army's Signal Intelligence Service was able to read their spies' traffic in the Venona program. The randomness requirement means that the values cannot be generated by any algorithm; they really have to be random, and created by a physical process, not a mathematical one.
I wrote about one-time pads, and their practical insecurity, in 2002:
What a one-time pad system does is take a difficult message security problem -- that's why you need encryption in the first place -- and turn it into a just-as-difficult key distribution problem. It's a "solution" that doesn't scale well, doesn't lend itself to mass-market distribution, is singularly ill-suited to computer networks, and just plain doesn't work.
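The mechanics are trivial to sketch--it's the key handling that kills you. A short Python illustration (using the OS entropy pool as a stand-in for a true physical randomness source) shows both the cipher and the Venona mistake of reusing a pad:

```python
import os

def otp(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte.
    The same function encrypts and decrypts."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ k for m, k in zip(message, pad))

pad = os.urandom(32)  # OS entropy as a stand-in for a physical random source

c1 = otp(b"ATTACK AT DAWN", pad)
c2 = otp(b"RETREAT AT ONCE", pad)  # fatal mistake: the pad is reused

# The pad cancels out of the XOR of the two ciphertexts, leaving the
# XOR of the two plaintexts -- no key material needed. This is the
# kind of leak that let Venona read the reused Soviet pads.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

Note that every message consumes pad material that both parties must already share--which is the "just-as-difficult key distribution problem" in miniature.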
Good article, which basically says our policies are based more on fear than on reality.
So why is there so much concern about "cyber-terrorism"? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies -- which need to justify their own existence -- and cyber-security companies -- which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armageddon draw attention, even if they are relatively short on facts.
Putting these complexities aside and focusing just on states, it is important to bear in mind that the cyber-attacks on Estonia and especially Georgia did little damage, particularly when compared to the physical destruction caused by angry mobs in the former and troops in the latter. One argument about the Georgian case is that cyber-attacks played a strategic role by thwarting Georgia’s ability to communicate with the rest of the world and present its case to the international community. This argument both overestimates the Georgian government’s reliance on the Internet and underestimates how much international PR -- particularly during wartime -- is done by lobbyists and publicity firms based in Washington, Brussels, and London. There is, probably, an argument to be made about the vast psychological effects of cyber-attacks -- particularly those that disrupt ordinary economic life. But there is a line between causing inconvenience and causing human suffering, and cyber-attacks have not crossed it yet.
The real risk isn't cyber-war or cyber-terrorism, it's cyber-crime.
So how many bits are in this instance of H1N1? The raw number of bits, by my count, is 26,022; the actual number of coding bits approximately 25,054 -- I say approximately because the virus does the equivalent of self-modifying code to create two proteins out of a single gene in some places (pretty interesting stuff actually), so it's hard to say what counts as code and what counts as incidental non-executing NOP sleds that are required for self-modifying code.
Fascinating story of a 16-year-old blind phone phreaker.
One afternoon, not long after Proulx was swatted, Weigman came home to find his mother talking to what sounded like a middle-aged male. The man introduced himself as Special Agent Allyn Lynd of the FBI's cyber squad in Dallas, which investigates hacking and other computer crimes. A West Point grad, Lynd had spent 10 years combating phreaks and hackers. Now, with Proulx's cooperation, he was aiming to take down Stuart Rosoff and the Wrecking Crew — and he wanted Weigman's help.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.