Schneier on Security
A blog covering security and security technology.
January 2013 Archives
The BMC is an embedded computer found on most server motherboards made in the last 10 or 15 years. Often running Linux, the BMC's CPU, memory, storage, and network run independently. It runs Intel's IPMI out-of-band systems management protocol alongside network services (web, telnet, VNC, SMTP, etc.) to help manage, debug, monitor, reboot, and roll out servers, virtual systems, and supercomputers. Vendors frequently add features and rebrand OEM'd BMCs: Dell has iDRAC, Hewlett Packard iLO, IBM calls theirs IMM2, etc. It is popular because it helps raise efficiency and lower costs associated with availability, personnel, scaling, power, cooling, and more.
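To make the out-of-band picture concrete, here is a sketch (not from the post) of the RMCP "presence ping" that discovery tools such as ipmiping send to UDP port 623 to find BMCs on a network. The packet layout follows the ASF 2.0 specification; actually sending it over the network is left out:

```python
import struct

def rmcp_presence_ping(tag=0x00):
    """Build an RMCP/ASF Presence Ping datagram (sent to UDP port 623).

    Header: RMCP version 0x06, reserved byte, sequence number, class 0x06 (ASF).
    Body:   ASF IANA enterprise number 4542, message type 0x80 (Presence Ping),
            message tag, reserved byte, zero data length.
    """
    return struct.pack(
        ">BBBBIBBBB",
        0x06, 0x00, 0xFF, 0x06,  # RMCP header (0xFF sequence = no ACK wanted)
        4542,                    # ASF IANA enterprise number (0x000011BE)
        0x80,                    # message type: Presence Ping
        tag,                     # message tag (echoed back in the Pong)
        0x00,                    # reserved
        0x00,                    # data length
    )

packet = rmcp_presence_ping()
print(packet.hex())  # 0600ff06000011be80000000
```

A BMC that answers this ping with a Presence Pong is advertising its management interface to anyone who can reach that port, which is the visibility problem the post describes.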
What's the problem?
Servers are usually managed in large groups, which may have thousands or even hundreds of thousands of computers. Each group typically has one or two reusable and closely guarded passwords; if you know the password, you control all the servers in the group. Passwords can remain unchanged for a long time -- often years -- not only because they are difficult to manage and modify, but also because it is nearly impossible to audit or verify that they have been changed. And due to the spec, the password is stored in clear text on the BMC.
Basically, it's a perfect spying platform. You can't control it. You can't patch it. It can completely control your computer's hardware and software. And its purpose is remote monitoring.
At the very least, we need to be able to look into these devices and see what's running on them.
I'm amazed we haven't seen any talk about this before now.
EDITED TO ADD (1/31): Correction -- these chips are on server motherboards, not on PCs or other consumer devices.
All disruptive technologies upset traditional power balances, and the Internet is no exception. The standard story is that it empowers the powerless, but that's only half the story. The Internet empowers everyone. Powerful institutions might be slow to make use of that new power, but since they are powerful, they can use it more effectively. Governments and corporations have woken up to the fact that not only can they use the Internet, they can control it for their interests. Unless we start deliberately debating the future we want to live in, and the role of information technology in enabling that world, we will end up with an Internet that benefits existing power structures and not society in general.
We've all lived through the Internet's disruptive history. Entire industries, like travel agencies and video rental stores, disappeared. Traditional publishing -- books, newspapers, encyclopedias, music -- lost power, while Amazon and others gained. Advertising-based companies like Google and Facebook gained a lot of power. Microsoft lost power (as hard as that is to believe).
The Internet changed political power as well. Some governments lost power as citizens organized online. Political movements became easier, helping to topple governments. The Obama campaign made revolutionary use of the Internet, both in 2008 and 2012.
And the Internet changed social power, as we collected hundreds of "friends" on Facebook, tweeted our way to fame, and found communities for the most obscure hobbies and interests. And some crimes became easier: impersonation fraud became identity theft, copyright violation became file sharing, and accessing censored materials -- political, sexual, cultural -- became trivially easy.
Now powerful interests are looking to deliberately steer this influence to their advantage. Some corporations are creating Internet environments that maximize their profitability: Facebook and Google, among many others. Some industries are lobbying for laws that make their particular business models more profitable: telecom carriers want to be able to discriminate between different types of Internet traffic, entertainment companies want to crack down on file sharing, advertisers want unfettered access to data about our habits and preferences.
On the government side, more countries censor the Internet -- and do so more effectively -- than ever before. Police forces around the world are using Internet data for surveillance, with less judicial oversight and sometimes in advance of any crime. Militaries are fomenting a cyberwar arms race. Internet surveillance -- both governmental and commercial -- is on the rise, not just in totalitarian states but in Western democracies as well. Both companies and governments rely more on propaganda to create false impressions of public opinion.
In 1996, cyber-libertarian John Perry Barlow issued his "Declaration of the Independence of Cyberspace." He told governments: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear." It was a utopian ideal, and many of us believed him. We believed that the Internet generation, those quick to embrace the social changes this new technology brought, would swiftly outmaneuver the more ponderous institutions of the previous era.
Reality turned out to be much more complicated. What we forgot is that technology magnifies power in both directions. When the powerless found the Internet, suddenly they had power. But while the unorganized and nimble were the first to make use of the new technologies, eventually the powerful behemoths woke up to the potential -- and they have more power to magnify. And not only does the Internet change power balances, but the powerful can also change the Internet. Does anyone else remember how incompetent the FBI was at investigating Internet crimes in the early 1990s? Or how Internet users ran rings around China's censors and Middle Eastern secret police? Or how digital cash was going to make government currencies obsolete, and Internet organizing was going to make political parties obsolete? Now all that feels like ancient history.
It's not all one-sided. The masses can occasionally organize around a specific issue -- SOPA/PIPA, the Arab Spring, and so on -- and can block some actions by the powerful. But it doesn't last. The unorganized go back to being unorganized, and powerful interests take back the reins.
Debates over the future of the Internet are morally and politically complex. How do we balance personal privacy against what law enforcement needs to prevent copyright violations? Or child pornography? Is it acceptable to be judged by invisible computer algorithms when being served search results? When being served news articles? When being selected for additional scrutiny by airport security? Do we have a right to correct data about us? To delete it? Do we want computer systems that forget things after some number of years? These are complicated issues that require meaningful debate, international cooperation, and iterative solutions. Does anyone believe we're up to the task?
We're not, and that's the worry. Because if we're not trying to understand how to shape the Internet so that its good effects outweigh the bad, powerful interests will do all the shaping. The Internet's design isn't fixed by natural laws. Its history is a fortuitous accident: an initial lack of commercial interests, governmental benign neglect, military requirements for survivability and resilience, and the natural inclination of computer engineers to build open systems that work simply and easily. This mix of forces that created yesterday's Internet will not be trusted to create tomorrow's. Battles over the future of the Internet are going on right now: in legislatures around the world, in international organizations like the International Telecommunications Union and the World Trade Organization, and in Internet standards bodies. The Internet is what we make it, and is constantly being recreated by organizations, companies, and countries with specific interests and agendas. Either we fight for a seat at the table, or the future of the Internet becomes something that is done to us.
Back in 1999 when I formed Counterpane Internet Security, Inc., I popularized the notion that security was a combination of people, process, and technology. Back then, it was an important notion; security back then was largely technology-only, and I was trying to push the idea that people and process needed to be incorporated into an overall security system.
This blog post argues that the IT security world has become so complicated that we need less in the way of people and process, and more technology:
Such a landscape can no longer be policed by humans and procedures. Technology is needed to leverage security controls. The Golden Triangle of people, process and technology needs to be rebalanced in favour of automation. And I'm speaking as a pioneer and highly experienced expert in process and human factors.
He's right. People and process work on human timescales, not computer timescales. They're important at the strategic level, and sometimes at the tactical level -- but the more we can capture and automate that, the better we're going to do.
The problem is, though, that sometimes human intelligence is required to make sense of an attack, and to formulate an appropriate response. And as long as that's the case, there are going to be instances where an automated attack is going to have the advantage.
Lately I've been thinking a lot about power and the Internet, and what I call the feudal model of IT security that is becoming more and more pervasive. Basically, between cloud services and locked-down end-user devices, we have less control and visibility over our security -- and have no choice but to trust those in power to keep us safe.
"Many of its users rely on Skype for secure communications -- whether they are activists operating in countries governed by authoritarian regimes, journalists communicating with sensitive sources, or users who wish to talk privately in confidence with business associates, family, or friends," the letter explains.
That's security in today's world. We have no choice but to trust Microsoft. Microsoft has reasons to be trustworthy, but they also have reasons to betray our trust in favor of other interests. And all we can do is ask them nicely to tell us first.
Results showed that more than half of the survey respondents from mid-sized (identified as 50-2500 employees) and enterprise organizations (identified as 2500+ employees) stated that complex policies ultimately led to a security breach, system outage or both.
Usual caveats for this sort of thing apply. The survey is only among 127 people -- I can't find data on what percentage replied. The numbers are skewed because only those who chose to reply were counted. And the results are based on self-reported replies: no way to verify them.
This story exemplifies everything that's wrong with our see-something-say-something war on terror: a perfectly innocent person on an airplane, a random person identifying him as a terrorist threat, and a complete overreaction on the part of the authorities.
Typical overreaction, but in this case -- as in several others over the past decade -- F-15 fighter jets were scrambled to escort the airplane to the ground. Very expensive, and potentially catastrophic.
This blog post makes the point well:
What bothers me about this is not so much that they interrogated the wrong person -- that happens all the time, not that it's okay -- but rather the fighter jets. I think most people probably understand this, but just to make it totally clear, if they send up fighters that is not because they are bringing the first-class passengers some more of those little hot towels. It is so they can be ready to SHOOT YOU DOWN if necessary. Now, I realize the odds that would ever happen, even accidentally, are very tiny. I still question whether it's wise to put fighters next to a passenger plane at the drop of a hat, or in this case because of an anonymous tip about a sleeping passenger.
This is fascinating:
Intuitively we understand that people surrounded by violence are more likely to be violent themselves. This isn't just some nebulous phenomenon, argue Slutkin and his colleagues, but a dynamic that can be rigorously quantified and understood.
I am reminded of this paper on the effects of bystanders on escalating and de-escalating potentially violent situations.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This interview was conducted last month, at an artificial intelligence conference at Oxford.
Janesville, Wisconsin, has published information about repeated drunk driving offenders since 2010. The idea is that the public shame will reduce future incidents.
It's called stylometry, and it's based on the analysis of things like word choice, sentence structure, syntax and punctuation. In one experiment, researchers were able to identify 80% of users with a 5,000-word writing sample.
Download tools here, including one to anonymize your writing style.
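For illustration, here is a toy sketch of the kind of features stylometry measures -- sentence length, punctuation rates, function-word frequencies. Real systems (like the tools linked above) use hundreds of features and trained classifiers; everything here is deliberately simplified:

```python
import math
import re
from collections import Counter

# Ten very common English function words; real feature sets use hundreds.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def style_vector(text):
    """Crude stylometric fingerprint: mean sentence length, punctuation
    rates per 100 words, and function-word frequencies per 100 words."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = max(len(words), 1)
    counts = Counter(words)
    vec = [len(words) / max(len(sentences), 1),  # mean sentence length
           100 * text.count(",") / n,            # commas per 100 words
           100 * text.count(";") / n]            # semicolons per 100 words
    vec += [100 * counts[w] / n for w in FUNCTION_WORDS]
    return vec

def distance(a, b):
    """Euclidean distance between two style vectors; smaller = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

sample_a = "It was the best of times; it was the worst of times."
sample_b = "Call me Ishmael. Some years ago, never mind how long, I went to sea."
print(distance(style_vector(sample_a), style_vector(sample_b)))
```

The anonymization tools work by pushing your vector toward the population average: shorter sentences, different function-word habits, fewer distinctive punctuation tics.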
The genetic data posted online seemed perfectly anonymous -- strings of billions of DNA letters from more than 1,000 people. But all it took was some clever sleuthing on the Web for a genetics researcher to identify five people he randomly selected from the study group. Not only that, he found their entire families, even though the relatives had no part in the study -- identifying nearly 50 people.
Ever since the launch of Kim Dotcom's file-sharing service, I have been asked about the unorthodox encryption and security system.
Please add other links in the comments.
EDITED TO ADD (1/24): Also this.
There has been an enormous amount written about the suicide of Aaron Swartz. This is primarily a collection of links, starting with those that use his death to talk about the broader issues at play: Orin Kerr, Larry Lessig, Jennifer Granick, Glenn Greenwald, Henry Farrell, danah boyd, Cory Doctorow, James Fallows, Brewster Kahle, Carl Malamud, and Mark Bernstein. Here are obituaries from the New York Times and Economist. Here are articles and essays from CNN.com, The Huffington Post, Larry Lessig, TechDirt, CNet, and Forbes, mostly about the prosecutor's statement after the death and the problems with plea bargaining in general. Representative Zoe Lofgren is introducing a bill to prevent this from happening again.
I don't have anything to add, but enough people have sent me their thoughts via e-mail that I thought it would be good to have a thread on this blog for conversation.
EDITED TO ADD (1/29): Another.
EDITED TO ADD (2/28): The DoJ has admitted that Aaron Swartz's prosecution was political.
EDITED TO ADD (3/4): This profile of Aaron Swartz is very good.
Google is working on non-password authentication techniques.
But for Google's password-liberation plan to really take off, they're going to need other websites to play ball. "Others have tried similar approaches but achieved little success in the consumer world," they write. "Although we recognize that our initiative will likewise remain speculative until we've proven large scale acceptance, we're eager to test it with other websites."
They have enough industry muscle that they might pull it off.
This essay is worth reading:
Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn't mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.
The essay is about Facebook's new Graph search tool, and how its harm is best thought of as reducing obscurity.
This is big news:
The U.S. Transportation Security Administration will remove airport body scanners that privacy advocates likened to strip searches after OSI Systems Inc. (OSIS) couldn't write software to make passenger images less revealing.
This doesn't mean the end of full-body scanning. There are two categories of these devices: backscatter X-ray and millimeter wave.
The government said Friday it is abandoning its deployment of so-called backscatter technology machines produced by Rapiscan because the company could not meet deadlines to switch to generic imaging with so-called Automated Target Recognition software, the TSA said. Instead, the TSA will continue to use and deploy more millimeter wave technology scanners produced by L-3 Communications, which has adopted the generic-outline standard.
And there are still backscatter X-ray machines being deployed, but I don't think there are very many of them.
TSA has contracted with L-3, Smiths Group Plc (SMIN) and American Science & Engineering Inc. (ASEI) for new body-image scanners, all of which must have privacy software. L-3 and Smiths used millimeter-wave technology. American Science uses backscatter.
This is a big win for privacy. But, more importantly, it's a big win because the TSA is actually taking privacy seriously. Yes, Congress ordered them to do so. But they didn't defy Congress; they did it. The machines will be gone by June.
Now that videographers have bagged a giant squid, the search turns to the colossal squid.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Good essay by Matt Blaze and Susan Landau.
The Finnish phone giant has since admitted that it decrypts secure data that passes through HTTPS connections -- including social networking accounts, online banking, email and other secure sessions -- in order to compress the data and speed up the loading of Web pages.
The basic problem is that https sessions are opaque as they travel through the network. That's the point -- it's more secure -- but it also means that the network can't do anything about them. They can't be compressed, cached, or otherwise optimized. They can't be rendered remotely. They can't be inspected for security vulnerabilities. All the network can do is transmit the data back and forth.
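A toy illustration of the optimization middleboxes are locked out of: compressing a repetitive web page (the payload and numbers here are made up) shows why a carrier wants plaintext access to the traffic it transports:

```python
import zlib

# A repetitive HTML-ish payload, standing in for a typical web page.
page = (b"<html><head><title>news</title></head><body>"
        + b"<div class='story'>breaking news item</div>" * 200
        + b"</body></html>")

# What a proxy could send over the air if it could see the plaintext.
compressed = zlib.compress(page, 9)

print(f"original: {len(page)} bytes, compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) // len(page)}% of original)")
```

Over an HTTPS session the carrier sees only ciphertext, which compresses essentially not at all -- hence the temptation to terminate the TLS session in the middle.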
But in our cloud-centric world, it makes more and more sense to process web data in the cloud. Nokia isn't alone here. Opera's mobile browser performs all sorts of optimizations on web pages before they are sent over the air to your smart phone. Amazon does the same thing with browsing on the Kindle. MobileScope, a really good smart-phone security application, performs the same sort of man-in-the-middle attack against https sessions to detect and prevent data leakage. I think Umbrella does as well. Nokia's mistake was that they did it without telling anyone. With appropriate consent, it's perfectly reasonable for most people and organizations to give both performance and security companies the ability to decrypt and re-encrypt https sessions -- at least most of the time.
This is an area where security concerns are butting up against other issues. Nokia's answer, which is basically "trust us, we're not looking at your data," is going to increasingly be the norm.
There's a fascinating story about a probable tournament chess cheat. No one knows how he does it; there are only two facts: 1) historically, he's not nearly as good as his recent record, and 2) his moves correlate almost perfectly with those of one of the best computer chess programs. The general question is how valid statistical evidence is when there is no other corroborating evidence.
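The statistical question has a standard shape: given a baseline rate at which a player of his strength would independently pick the engine's move, how surprising is the observed match rate? A sketch with entirely hypothetical numbers -- nothing here is from the actual case:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of matching the
    engine's move at least k times in n moves, if each move matches
    independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: suppose a strong human independently picks the engine's
# top choice ~55% of the time, and the suspect matched 48 of 50 moves.
p_value = binom_tail(50, 48, 0.55)
print(f"p-value: {p_value:.2e}")
```

A vanishingly small tail probability is what makes the correlation look damning; the hard part -- the point of the post -- is whether that alone should count as proof when nobody can explain the mechanism.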
This essay, which uses the suicide of Aaron Swartz as a jumping off point for how the term "hactivist" has been manipulated by various powers, has this to say about "lexical warfare":
I believe the debate itself is far broader than the specifics of this unhappy case, for if there was prosecutorial overreach it raises the question of whether we as a society created the enabling condition for this sort of overreach by letting the demonization of hacktivists go unanswered. Prosecutors do not work in a vacuum, after all; they are more apt to pursue cases where public discourse supports their actions. The debate thus raises an issue that, as a philosopher of language, I have spent time considering: the impact of how words and terms are defined in the public sphere.
It's both an art project and a practical clothing line.
...Harvey's line of "Stealth Wear" clothing includes an "anti-drone hoodie" that uses metalized material designed to counter thermal imaging used by drones to spot people on the ground. He's also created a cellphone pouch made of a special "signal attenuating fabric." The pocket blocks your phone signal so that it can't be tracked or intercepted by devices like the covert "Stingray" tool used by law enforcement agencies like the FBI.
Philosophy professor David Livingstone Smith on the origins of war.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Last August, I offered to sell Liars and Outliers for $11 in exchange for a book review. This was much less than the $30 list price; less even than the $16 Amazon price. For readers outside the U.S., where books can be very expensive, it was a great price.
I sold 800 books from this offer -- much more than the few hundred I originally intended -- to people all over the world. It was the end of September before I mailed them all out, and probably a couple of weeks later before everyone received their copy. Now, three months after that, it's interesting to count up the number of reviews I received from the offer.
That's not a trivial task. I asked people to e-mail me URLs for their review, but not everyone did. But counting the independent reviews, the Amazon reviews, and the Goodreads reviews from the time period, and making some reasonable assumptions, about 70 people fulfilled their end of the bargain and reviewed my book.
There were some outliers. One person wrote to tell me that he didn't like the book, and offered not to publish a review despite the agreement. Another two e-mailed me to offer to return the price difference (I declined).
Perhaps people have been busier than they expected -- and haven't gotten around to reading the book and writing a review yet. I know my reading is often delayed by more pressing priorities. And although I didn't put any deadline on when the review should be completed, I received a surge of reviews around the end of the year -- probably because some people self-imposed a deadline. What is certain is that a great majority of people decided not to uphold their end of the bargain.
The original offer was an exercise in trust. But to use the language of the book, the only thing inducing compliance was the morals of the reader. I suppose I could have collected everyone's names, checked off those who wrote reviews, and tried shaming the rest -- but that seems like a lot of work. Perhaps this public nudge will be enough to convince some more people to write reviews.
EDITED TO ADD (1/11): I never intended to make people feel bad with this post. I know that some people are busy, and that reading an entire book is a large time commitment (especially in our ever-shortened-attention-span era). I can see how this post could be read as an attempt to shame, but -- really -- that was not my intention.
This essay explains why we're all living in failed Hobbesian states:
What do these three implications -- states have a great deal of freedom to determine what threatens a people and how to respond to those threats, and in making those determinations, they are influenced by the interests and ideologies of their primary constituencies; states have strong incentives and have been given strong justifications for exaggerating threats; and while states aspire, rhetorically, to a unity of will and judgment, they seldom achieve it in practice -- tell us about the relationship between security and freedom? What light do they shed on the question of why security is such a potent argument for the suppression of rights and liberties?
Just claim the person is dead. All you need to do is fake an online obituary.
Not a cat burglar, a cat smuggler.
Guards thought there was something suspicious about a little white cat slipping through a prison gate in northeastern Brazil. A prison official says that when they caught the animal, they found a cellphone, drills, small saws and other contraband taped to its body.
Another article, with video.
A prison spokesperson was quoted by local paper Estado de S. Paulo as saying: "It's tough to find out who's responsible for the action as the cat doesn't speak."
This Wall Street Journal investigative piece is a month old, but well worth reading. Basically, the Total Information Awareness program is back with a different name:
The rules now allow the little-known National Counterterrorism Center to examine the government files of U.S. citizens for possible criminal behavior, even if there is no reason to suspect them. That is a departure from past practice, which barred the agency from storing information about ordinary Americans unless a person was a terror suspect or related to an investigation.
Note that this is government data only, not commercial data. So while it includes "almost any government database, from financial forms submitted by people seeking federally backed mortgages to the health records of people who sought treatment at Veterans Administration hospitals" as well as lots of commercial data, it's data the corporations have already given to the government. It doesn't include, for example, your detailed cell phone bills or your tweets.
See also this supplementary blog post to the article.
Interesting details of an Amazon Marketplace scam. Worth reading.
Most scams use a hook to cause a reaction. The idea being that if you are reacting, they get to control you. If you take the time to stop and think things through, you take control back and can usually spot the scam. Common hooks involve Urgency, Uncertainty, Sex, Fear or Anger. In this case, it's all about Urgency, Uncertainty and Fear. By setting the price so low, they drive urgency high, as you're afraid that you might miss the deal. They then compound this by telling me there was an error in the shipment, trying to make me believe they are incompetent and if I act quickly, I can take advantage of their error.
We'll see it later this month.
This is what Facebook gives the police in response to a subpoena. (Note that this isn't in response to a warrant; it's in response to a subpoena.) This might be the first one of these that has ever become public.
EDITED TO ADD (1/4): Commenters point out that this case is four years old, and that Facebook claims to have revised its policies since then.
This is a great essay:
Spheres are special shapes for nuclear weapons designers. Most nuclear weapons have, somewhere in them, that spheres-within-spheres arrangement of the implosion nuclear weapon design. You don't have to use spheres -- cylinders can be made to work, and there are lots of rumblings and rumors about non-spherical implosion designs around these here Internets -- but spheres are pretty common.
The essay continues with a story of a scientist who received a security violation for leaving an orange on his desk.
Two points here. One, this is a classic problem with any detection system. When it's hard to build a system that detects the thing you're looking for, you change the problem to detect something easier -- and hope the overlap is enough to make the system work. Think about airport security. It's too hard to detect actual terrorists with terrorist weapons, so instead they detect pointy objects. Internet filtering systems work the same way, too. (Remember when URL filters blocked the word "sex," and the Middlesex Public Library found that it couldn't get to its municipal webpages?)
Two, the Los Alamos system only works because false negatives are much, much worse than false positives. It really is worth classifying an abstract shape and annoying an officeful of scientists and others to protect the nuclear secrets. Airport security fails because the false-positive/false-negative cost ratio is different.
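That cost ratio can be made concrete. A sketch with entirely invented numbers: the same imperfect detector is a bargain when misses are catastrophic and frequent enough to matter, and a money pit when false alarms dominate:

```python
def expected_cost(p_threat, p_detect, p_false_alarm, cost_miss, cost_false_alarm):
    """Expected cost per screening event for a simple detector:
    cost of missed threats plus cost of false alarms."""
    miss = p_threat * (1 - p_detect) * cost_miss
    false_alarm = (1 - p_threat) * p_false_alarm * cost_false_alarm
    return miss + false_alarm

# Hypothetical lab case: real secrets pass through often enough, and a
# leak is catastrophic, so flagging every sphere (oranges included) pays.
lab = expected_cost(p_threat=1e-4, p_detect=0.99, p_false_alarm=0.05,
                    cost_miss=1e9, cost_false_alarm=100)

# Hypothetical airport case: actual terrorists are vastly rarer, so the
# false-alarm term swamps the miss term and the same detector is waste.
airport = expected_cost(p_threat=1e-8, p_detect=0.99, p_false_alarm=0.05,
                        cost_miss=1e9, cost_false_alarm=100)

print(f"lab: {lab:.1f}, airport: {airport:.1f}")
```

In the first case the miss term dominates; in the second, nearly all of the expected cost is false alarms -- which is the asymmetry the post describes.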
"Come on," Jillette said. "Steal something from me."
Really -- read the whole thing.