Schneier on Security
A blog covering security and security technology.
February 2009 Archives
Intriguingly, that gene is the one that enables the bacteria to form a biofilm, the tightly woven matrix of "slime" which allows bacterial colonies to behave in many ways like a single organism. "The biofilm might be critical for adhering to the light organ, or telling the host that the correct symbiont has arrived," says Mandel.
Note: This isn't the first time I have written about this topic, and it surely won't be the last. I think I did a particularly good job summarizing the issues this time, which is why I am reprinting it.
Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.
Data is the pollution of the information age. It's a natural byproduct of every computer-mediated interaction. It stays around forever, unless it's disposed of. It is valuable when reused, but it must be handled carefully. Otherwise, its aftereffects are toxic.
And just as 100 years ago people ignored pollution in our rush to build the Industrial Age, today we're ignoring data in our rush to build the Information Age.
Increasingly, you leave a trail of digital footprints throughout your day. Once you walked into a bookstore and bought a book with cash. Now you visit Amazon, and all of your browsing and purchases are recorded. You used to buy a train ticket with coins; now your electronic fare card is tied to your bank account. Your store affinity cards give you discounts; merchants use the data on them to reveal detailed purchasing patterns.
Data about you is collected when you make a phone call, send an e-mail message, use a credit card, or visit a website. A national ID card will only exacerbate this.
More computerized systems are watching you. Cameras are ubiquitous in some cities, and eventually face recognition technology will be able to identify individuals. Automatic license plate scanners track vehicles in parking lots and cities. Color printers, digital cameras, and some photocopy machines have embedded identification codes. Aerial surveillance is used by cities to find building permit violators and by marketers to learn about home and garden size.
As RFID chips become more common, they'll be tracked, too. Already you can be followed by your cell phone, even if you never make a call. This is wholesale surveillance; not "follow that car," but "follow every car."
Computers are mediating conversation as well. Face-to-face conversations are ephemeral. Years ago, telephone companies might have known who you called and how long you talked, but not what you said. Today you chat in e-mail, by text message, and on social networking sites. You blog and you Twitter. These conversations – with family, friends, and colleagues – can be recorded and stored.
It used to be too expensive to save this data, but computer memory is now cheaper. Computer processing power is cheaper, too; more data is cross-indexed and correlated, and then used for secondary purposes. What was once ephemeral is now permanent.
Who collects and uses this data depends on local laws. In the US, corporations collect, then buy and sell, much of this information for marketing purposes. In Europe, governments collect more of it than corporations. On both continents, law enforcement wants access to as much of it as possible for both investigation and data mining.
Regardless of country, more organizations are collecting, storing, and sharing more of it.
More is coming. Keyboard logging programs and devices can already record everything you type; recording everything you say on your cell phone is only a few years away.
A "life recorder" you can clip to your lapel that'll record everything you see and hear isn't far behind. It'll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn't want to be tracked?
You're living in a unique time in history: the technology is here, but it's not yet seamless. Identification checks are common, but you still have to show your ID. Soon it'll happen automatically, either by remotely querying a chip in your wallet or by recognizing your face on camera.
And all those cameras, now visible, will shrink to the point where you won't even see them. Ephemeral conversation will all but disappear, and you'll think it normal. Already your children live much more of their lives in public than you do. Your future has no privacy, not because of some police-state governmental tendencies or corporate malfeasance, but because computers naturally produce data.
Cardinal Richelieu famously said: "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." When all your words and actions can be saved for later examination, different rules have to apply.
Society works precisely because conversation is ephemeral; because people forget, and because people don't have to justify every word they utter.
Conversation is not the same thing as correspondence. Words uttered in haste over morning coffee, whether spoken in a coffee shop or thumbed on a BlackBerry, are not official correspondence. A data pattern indicating "terrorist tendencies" is no substitute for a real investigation. Being constantly scrutinized undermines our social norms; furthermore, it's creepy. Privacy isn't just about having something to hide; it's a basic right that has enormous value to democracy, liberty, and our humanity.
We're not going to stop the march of technology, just as we cannot un-invent the automobile or the coal furnace. We spent the industrial age relying on fossil fuels that polluted our air and transformed our climate. Now we are working to address the consequences. (While still using said fossil fuels, of course.) This time around, maybe we can be a little more proactive.
Just as we look back at the beginning of the previous century and shake our heads at how people could ignore the pollution they caused, future generations will look back at us – living in the early decades of the information age – and judge our solutions to the proliferation of data.
We must, all of us together, start discussing this major societal change and what it means. And we must work out a way to create a future that our grandchildren will be proud of.
This essay originally appeared on the BBC.com website.
TrapCall instructs new customers to reprogram their cellphones to send all rejected, missed and unanswered calls to TrapCall's own toll-free number. If the user sees an incoming call with Caller ID blocked, he just presses the button on the phone that would normally send it to voicemail. The call invisibly loops through TelTech's system, then back to the user's phone, this time with the caller's number displayed as the Caller ID.
In addition to the free service, branded Fly Trap, a $10-per-month upgrade called Mouse Trap provides human-created transcripts of voicemail messages, and in some cases uses text messaging to send you the name of the caller — information not normally available to wireless customers. Mouse Trap will also send you text messages with the numbers of people who call while your phone was powered off, even if they don't leave a message.
There are rumors of a prototype:
Even the highly advanced US forces hadn't been generally thought to have developed a successful pulse-bomb yet, with most reports indicating that such a capability remains a few years off (as has been the case for decades). Furthermore, the pulse ordnance has usually been seen as large and heavy, in the same league as an aircraft bomb or cruise missile warhead -- or in the case of an HPM raygun, of a weapons-pod or aircraft payload size.
This is priceless:
Our advances in Prime Number Theory have led to a new branch of mathematics called Neutronics. Neutronic functions make possible for the first time the ability to analyze regions of mathematics commonly thought to be undefined, such as the point where one is divided by zero. In short, we have developed a new way to analyze the undefined point at the singularity which appears throughout higher mathematics.
EDITED TO ADD (3/30): The CTO has responded to me.
No one cares, probably because he isn't Muslim. White supremacist terrorism just isn't sexy these days.
President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation's cybersecurity policies.
Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cyber Security Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI's implementation.
Although, honestly, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:
In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation's infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation's broader needs, so we'll need a new information-sharing environment. Private-sector risk models aren't congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.
This is one well-designed piece of malware:
Conficker B++ is similar to Conficker B: 294 of its 297 subroutines are unchanged, and 39 new subroutines have been added. The latest variant, first spotted on 16 February, is even sneakier than its previous incarnations, SRI explains.
The study, funded by the National Institute of Justice, examined the cases of 550 sex offenders who were divided into two groups—those released from prison before the passage of Megan's Law and those released afterward.
At least, according to an anonymous "industry source":
The spybiz exec, who preferred to remain anonymous, confirmed that Skype continues to be a major problem for government listening agencies, spooks and police. This was already thought to be the case, following requests from German authorities for special intercept/bugging powers to help them deal with Skype-loving malefactors. Britain's GCHQ has also stated that it has severe problems intercepting VoIP and internet communication in general.
I'm sure this is a real problem. Here's an article claiming that Italian criminals are using Skype more than the telephone because of eavesdropping concerns.
They're strong and lightweight:
The teeth get their strength from architecture. A series of tooth pores runs through the protein, and on the outer edge the pores are spaced widely for a hard, sharp edge that digs into the flesh of hapless prey. Toward the base, the pores are closer together, making a softer material that can absorb the prey's thrashing without breaking.
Evidence of its effectiveness:
Researchers, working with police, identified 34 crime hot spots. In half of them, authorities set to work—clearing trash from the sidewalks, fixing street lights, and sending loiterers scurrying. Abandoned buildings were secured, businesses forced to meet code, and more arrests made for misdemeanors. Mental health services and homeless aid referrals expanded.
EDITED TO ADD (3/13): The paper.
The striking difference between the two incidents is that the phpbb passwords are simpler. MySpace requires that passwords "must be between 6 and 10 characters, and contain at least 1 number or punctuation character." Most people satisfied this requirement by simply appending "1" to the ends of their passwords. The phpbb site has no such restrictions—the passwords are shorter and rarely contain anything more than a dictionary word.
Seems like we still can't choose good passwords. Conficker.B exploits this, trying about 200 common passwords to help spread itself.
Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?
Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won't be able to—or won't bother to—use it.
Passwords are the most common authentication system, and a good place to start. They're very easy to implement and use, which is why they're so popular. But as computers have become faster, password guessing has become easier. Most people don't choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.
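To make concrete how little work an attack like Conficker's requires, here is a minimal sketch of a dictionary attack in Python. The accounts, hashes, and password list are all hypothetical, and the list is tiny; the worm's real list runs to a couple hundred entries.

```python
import hashlib

# Hypothetical stored credentials: username -> SHA-256 digest of the password.
stored = {
    "admin":  hashlib.sha256(b"password1").hexdigest(),
    "backup": hashlib.sha256(b"Tr0ub4dor&3").hexdigest(),
}

# A tiny stand-in for a Conficker-style list of common passwords.
common_passwords = ["123456", "password", "password1", "admin", "letmein"]

def dictionary_attack(stored, guesses):
    """Return the accounts whose password appears in the guess list."""
    cracked = {}
    for user, digest in stored.items():
        for guess in guesses:
            if hashlib.sha256(guess.encode()).hexdigest() == digest:
                cracked[user] = guess
                break
    return cracked

print(dictionary_attack(stored, common_passwords))
# The "admin" account falls immediately; the stronger password survives.
```

Scaled up across thousands of accounts, the same handful of guesses cracks a surprising fraction of them, which is all a worm needs to spread.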
To combat password guessing, many systems force users to choose harder-to-guess passwords—requiring minimum lengths, non-alphanumeric characters, etc.—and change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.
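Complexity rules like these are simple to implement—and just as simple for users to game. A hypothetical checker (the thresholds and rules are illustrative, not any particular site's policy) can even flag the "append a 1" habit described above:

```python
import re

def check_password_policy(password, min_len=8):
    """Return a list of policy problems with a password (illustrative rules)."""
    problems = []
    if len(password) < min_len:
        problems.append(f"shorter than {min_len} characters")
    if not re.search(r"[^a-zA-Z0-9]", password):
        problems.append("no non-alphanumeric character")
    if re.fullmatch(r"[a-zA-Z]+1", password):
        problems.append("looks like a word with '1' appended")
    return problems

print(check_password_policy("password1"))
# ['no non-alphanumeric character', "looks like a word with '1' appended"]
```

The point of the sketch is the asymmetry: the rule costs one line of code to enforce, and one appended character to satisfy.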
Users forgetting their passwords can be expensive—sysadmins or customer service reps have to field phone calls and reset passwords—so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother's maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: the answers are often easy to guess, and are often known by people close to you.
A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token—"something you have"—is combined with a password—"something you know." But it's less usable, because the tokens have to be purchased and distributed to all users, and far too often it's "something you lost or forgot." And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them—sometimes only as an option, because people don't like them.
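SecurID's internal algorithm is proprietary, but the openly specified TOTP scheme (RFCs 4226 and 6238) works on the same principle and fits in a few lines. The shared secret and the 60-second step below are assumptions chosen to match the once-a-minute display described above:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 60, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 4226/6238."""
    counter = struct.pack(">Q", timestamp // step)   # index of the time window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = b"shared-secret"   # provisioned into both the token and the server
print(totp(secret, int(time.time())))  # changes once per 60-second window
```

The security property is exactly "something you have": without the shared secret, knowing past codes tells an attacker nothing useful about the next one.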
In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem—or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn't survive one week.
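The arithmetic behind that asymmetry is worth making concrete. With hypothetical traffic volumes (legitimate logins vastly outnumber fraud attempts on any real system), a one-in-a-hundred fraud miss rate produces far less pain than a one-in-a-thousand false-reject rate:

```python
# Hypothetical daily volumes for a large site.
legit_logins_per_day = 1_000_000
fraud_attempts_per_day = 100

fraud_let_in = fraud_attempts_per_day * (1 / 100)      # fraud succeeds 1 in 100
legit_locked_out = legit_logins_per_day * (1 / 1000)   # false reject 1 in 1000

print(fraud_let_in)      # 1.0 fraudulent login per day
print(legit_locked_out)  # 1000.0 locked-out legitimate customers per day
```

Under these assumed volumes, the "weaker" failure mode generates a thousand complaints for every fraud it lets through—which is why usability failures kill authentication systems faster than fraud does.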
Balancing security and usability is hard, and many organizations get it wrong. But it's also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.
A version of this essay was originally published in The Guardian.
Refreshing commentary from Nigel Inkster, former Assistant Chief and Director of Operations and Intelligence of MI6:
"Efforts to establish a global repository of counterterrorist information are unlikely ever to succeed. We need to be wary of rebuilding our world to deal with just one problem, one which might not be by any means the most serious we face."
On page 379 of the current stimulus bill, there's a bit about establishing a website of companies that lost patient information:
(4) POSTING ON HHS PUBLIC WEBSITE -- The Secretary shall make available to the public on the Internet website of the Department of Health and Human Services a list that identifies each covered entity involved in a breach described in subsection (a) in which the unsecured protected health information of more than 500 individuals is acquired or disclosed.
I'm not sure if this passage survived into the final bill, but it will be interesting to see whether it is now law.
"WiFi networks and malware epidemiology," by Hao Hu, Steven Myers, Vittoria Colizza, and Alessandro Vespignani.
Honestly, I'm not sure I understood most of the article. And I don't think that their model is all that great. But I like to see these sorts of methods applied to malware and infection rates.
EDITED TO ADD (3/13): Earlier -- but free -- version of the paper.
Do I have any readers left who think humans are rational about risks?
They've lost 80 computers: no idea if they're stolen, or just misplaced. Typical story—not even worth commenting on—but this great comment by Los Alamos explains a lot about what was wrong with their security policy:
The letter, addressed to Department of Energy security officials, contends that "cyber security issues were not engaged in a timely manner" because the computer losses were treated as a "property management issue."
The real risk in computer losses is the data, not the hardware. I thought everyone knew that.
Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.
Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.
Insiders are a perennial problem. They have access, and they're known by the system. They know how the system and its security works, and its weak points. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana's attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.
Insiders are especially pernicious attackers because they're trusted. They have access because they're supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They're already inside the security system, making them much harder to defend against.
It's not possible to design a system without trusted people. They're everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company's name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn't operate without trusted people.
Replacing trusted people with computers doesn't make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.
Of course, this problem is much, much older than computers. And the solutions haven't changed much throughout history, either. There are five basic techniques to deal with trusted people:
1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.
2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA's no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.
3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as "need to know" and other levels of security clearance.
4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It's why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It's the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It's why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It's why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud.
5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
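Technique 4—overlapping spheres of trust—translates directly into code: a sensitive action proceeds only after sign-off from multiple distinct people. A minimal sketch (the names and the action are hypothetical):

```python
def authorize(action: str, approvals: list[str], required: int = 2) -> bool:
    """Dual-control check: `action` proceeds only when `required`
    distinct trusted people have signed off on it."""
    return len(set(approvals)) >= required

# A hypothetical high-value transfer needing two separate sign-offs.
assert not authorize("wire $1M", ["alice"])             # one approver: denied
assert not authorize("wire $1M", ["alice", "alice"])    # same person twice: denied
assert authorize("wire $1M", ["alice", "bob"])          # two distinct approvers: allowed
```

Deduplicating the approvers is the whole point: it is the software equivalent of the two separate missile-launch keys, ensuring no single trusted person can complete the action alone.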
These security techniques don't only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren't perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.
Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn't have an audit process by which a change one person makes on the servers is checked by someone else; I'm sure that would be prohibitively expensive. Certainly the company's IT department should have terminated Makwana's network access as soon as he was fired, and not at the end of the day.
In the end, systems will always have trusted people who can subvert them. It's important to keep in mind that incidents like this don't happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.
This essay originally appeared on the Wall Street Journal website.
This ad, for a Uni-ball pen that's hard to erase, is kind of surreal. They're using fear to sell pens -- again -- but it's the wrong fear. They're confusing check-washing fraud, where someone takes a check and changes the payee and maybe the amount, with identity theft. And how can someone steal money from me by erasing and changing information on a tax form? Are they going to cause my refund check to be sent to another address? This is getting awfully Byzantine.
Not quite sure....
Yet another one.
Turns out the algorithm is linear.
When you're buying security products, you have to trust the vendor. That's why I don't buy any of these hardware-encrypted drives. I don't trust the vendors.
Moving toward the truly disingenuous, we've got the "FastPass Switcheroo." To do this, simply get your FastPass like normal for Splash Mountain. You notice that the return time is two hours away, in the afternoon. Wait two hours, then return here and get another set of FP tickets, this time for later in the evening. But at this moment, your first set of FP tickets are active. Use them to get by the FP guard at the front, but when prompted to turn in your tickets at the front of the FP line, hand over the ones for this evening instead. 99.9% of the time, they do not look at these tickets whatsoever at this point in the line; they just add them to the pile in their hand and impatiently gesture you forward. All the examining of the tickets takes place at the start of the line, not the end. Voila, you've cheated the system. After this ride, you can get off and immediately ride again, since you've held on to the afternoon FPs and can use them in the normal fashion now.
Small cameras can now be embedded in the screen or hidden around it, tracking who looks at the screen and for how long. The makers of the tracking systems say the software can determine the viewer's gender, approximate age range and, in some cases, ethnicity—and can change the ads accordingly.
These are ads at eye level: on the streets, in malls, in train stations.
EDITED TO ADD (2/11): I got some details wrong. Chris Paget, the researcher, is cloning Western Hemisphere Travel Initiative (WHTI) compliant documents such as the passport card and Electronic Drivers License (EDL), and not the passport itself. Here is the link to Paget's talk at ShmooCon.
They're used to smuggle drugs into the U.S.
Since the vessels have a low profile -- the hulls only rise about a foot above the waterline -- they are hard to see from a distance and produce a small radar signature. U.S. counterdrug officials estimate that SPSS are responsible for 32% of all cocaine movement in the transit zone.
But let's not forget the terrorism angle:
"What worries me [about the SPSS] is if you can move that much cocaine, what else can you put in that semi-submersible. Can you put a weapon of mass destruction in it?" Navy Adm. Jim Stavridis, Commander, U.S. Southern Command
This isn't the first time Amtrak police have been idiots.
And in related news, in the U.K. it soon might be illegal to photograph the police.
EDITED TO ADD (2/10): The photographer's page about the incident has been replaced with the words "No comment!" Anyone have a link to a copy? In the meantime, here's an entry about the incident on a photo activist's blog.
EDITED AGAIN: Thanks to Phil M. in comments for finding these Google Cache links from Duane Kerzic's site:
The House approved a bill creating a whitelist of people who are on the blacklist, but shouldn't be. No word yet about what they're going to do about people who are on the whitelist, but shouldn't be. Perhaps they'll create a second blacklist for them. Then we'll all be safe from terrorists, for sure.
Monster's latest breach "shouldn't have happened," said Bruce Schneier, chief security technology officer for BT Group. "But you can't understand a company's network security by looking at public events—that's a bad metric. All the public events tell you are, these are attacks that were successful enough to steal data, but were unsuccessful in covering their tracks."
Thinking about it, it's even more complex than that. To assess an organization's network security, you need to actually analyze it. You can't get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company's attorneys couldn't figure out a reason not to disclose to the public.
Doesn't really look all that tasty.
Good xkcd comic on the difference between theoretical and practical cryptanalysis.
Last Saturday I was interviewed on Paul Harris's Chicago radio show.
Interesting, at least to me. It helps if you know the various code names and the names of the different equipment.
Honestly, I don't think this is really needed. I use PGP Disk, and I haven't noticed any slowdown due to having encryption done in software. And I worry about yet another standard with its inevitable flaws and security vulnerabilities.
EDITED TO ADD (2/13): Perceptive comment about how the real benefit is regulatory compliance.
Not that this is any news, but there's some new research to back it up:
The study was performed by William Press, who does bioinformatics research at the University of Texas, Austin, with a joint appointment at Los Alamos National Labs. His background in statistics is apparent in his ability to handle various mathematical formulae with aplomb, but he's apparently used to explaining his work to biologists, since the descriptions that surround those formulae make the general outlines of the paper fairly accessible.
People confess to crimes they don't commit. They do it a lot. What's interesting about this research is that confessions—whether false or true—corrupt other eyewitnesses:
When asked to explain their change, subjects revealed they were actually convinced by the confessor, and not simply complying with it, saying, "His face now looks more familiar than the one I chose before."
Someone did the analysis:
As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million.
There's a bill in Congress—unlikely to go anywhere—to force digital cameras to go "click." The idea is that this will make surreptitious photography harder:
The bill's text says that Congress has found that "children and adolescents have been exploited by photographs taken in dressing rooms and public places with the use of a camera phone."
This is so silly it defies comment.
EDITED TO ADD (2/13): Apparently this is already law in Japan.
"Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes," by Toby Ord, Rafaela Hillerbrand, Anders Sandberg.
From the Los Angeles Times:
Freeman is one of at least 200 people on flights who have been convicted under the amended law. In most of the cases, there was no evidence that the passengers had attempted to hijack the airplane or physically attack any of the flight crew. Many have simply involved raised voices, foul language and drunken behavior.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.