Essays in the Category “Computer and Information Security”
There’s nothing stopping attackers from manipulating the data they make public.
In the past few years, the devastating effects of hackers breaking into an organization's network, stealing confidential data, and publishing everything have been made clear. It happened to the Democratic National Committee, to Sony, to the National Security Agency, to the cyberweapons arms manufacturer Hacking Team, to the online adultery site Ashley Madison, and to the Panamanian tax-evasion law firm Mossack Fonseca.
This style of attack is known as organizational doxing. The hackers, in some cases individuals and in others nation-states, are out to make political points by revealing proprietary, secret, and sometimes incriminating information.
Over the past year or two, someone has been probing the defenses of the companies that run critical pieces of the Internet. These probes take the form of precisely calibrated attacks designed to determine exactly how well these companies can defend themselves, and what would be required to take them down. We don't know who is doing this, but it feels like a large nation-state. China and Russia would be my first guesses.
Every few years, a researcher replicates a security study by littering USB sticks around an organization's grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as "teachable moments" for others. "If only everyone was more security aware and had more security training," they say, "the Internet would be a much safer place."
Enough of that.
The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.
Russia has attacked the U.S. in cyberspace in an attempt to influence our national election, many experts have concluded. We need to take this national security threat seriously and both respond and defend, despite the partisan nature of this particular attack.
There is virtually no debate about that, either from the technical experts who analyzed the attack last month or from the FBI, which is analyzing it now.
If Russia really is responsible, there's no reason political interference would end with the DNC emails.
Russia was behind the hacks into the Democratic National Committee's computer network that led to the release of thousands of internal emails just before the party's convention began, U.S. intelligence agencies have reportedly concluded.
The FBI is investigating. WikiLeaks promises there is more data to come.
Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die.
Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant.
In today's world of ubiquitous computers and networks, it's hard to overstate the value of encryption. Quite simply, encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers.
When Johns Hopkins discovered a different security flaw, it notified Apple so the problem could be fixed. The FBI is keeping its newly found breach a secret from everyone.
The FBI's legal battle with Apple is over, but the way it ended may not be good news for anyone.
Federal agents had been seeking to compel Apple to break the security of an iPhone 5c that had been used by one of the San Bernardino, Calif., terrorists. Apple had been fighting a court order to cooperate with the FBI, arguing that the authorities' request was illegal and that creating a tool to break into the phone was itself harmful to the security of every iPhone user worldwide.
Last week, the FBI told the court it had learned of a possible way to break into the phone using a third party's solution, without Apple's help.
Writing a magazine column is always an exercise in time travel. I'm writing these words in early December. You're reading them in February. This means anything that's news as I write this will be old hat in two months, and anything that's news to you hasn't happened yet as I'm writing.
Thefts of personal information aren't unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it's information that's needed to commit fraud, as happened in 2015 to Experian and the IRS.
Sometimes it's stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the U.S.
This essay is part of a debate with Denise Zheng of the Center for Strategic and International Studies.
Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop—and I hope you do—it protects your data if your computer is stolen.
Either everyone gets security, or no one does.
Earlier this week, a federal magistrate ordered Apple to assist the FBI in hacking into the iPhone used by one of the San Bernardino shooters. Apple will fight this order in court.
The policy implications are complicated. The FBI wants to set a precedent that tech companies will assist law enforcement in breaking their users' security, and the technology community is afraid that the precedent will limit what sorts of security features it can offer customers.
The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.
These "things" will have two separate parts.
Cyberthreats are changing. We're worried about hackers crashing airplanes by hacking into computer networks. We're worried about hackers remotely disabling cars. We're worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.
Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers—probably the Chinese government—stole 21.5 million personal files of U.S. government employees and others. They wouldn't have obtained this data if it had been encrypted.
Last week, CIA director John O. Brennan became the latest victim of what's become a popular way to embarrass and harass people on the internet. A hacker allegedly broke into his AOL account and published emails and documents found inside, many of them personal and sensitive.
It's called doxing—sometimes doxxing—from the word "documents." It emerged in the 1990s as a hacker revenge tactic, and has since been used as a tool to harass and intimidate people, primarily women, on the internet. Someone would threaten a woman with physical harm, or try to incite others to harm her, and publish her personal information as a way of saying "I know a lot about you—like where you live and work." Victims of doxing talk about the fear that this tactic instills.
If the director of the CIA can't keep his e-mail secure, what hope do the rest of us have—for our e-mail or any of our digital information?
None, and that's why the companies that we entrust with our digital lives need to be required to secure them for us, and held accountable when they fail. It's not just a personal or business issue; it's a matter of public safety.
The details of the story are worth repeating.
The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we've now learned that the hackers stole fingerprint files for 5.6 million of them.
This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.
There are three basic kinds of data that can be stolen.
For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars' computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren't being tested, they belched out 40 times the pollutants. Their CEO has resigned, and the company will face an expensive recall, enormous fines and worse.
When the National Security Agency (NSA)—or any government agency—discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others' computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks.
The doxing of Ashley Madison reveals an uncomfortable truth: In the age of cloud computing, everyone is vulnerable.
Most of us get to be thoroughly relieved that our emails weren't in the Ashley Madison database. But don't get too comfortable. Whatever secrets you have, even the ones you don't think of as secret, are more likely than you think to get dumped on the Internet. It's not your fault, and there's largely nothing you can do about it.
Recently, WikiLeaks began publishing over half a million previously secret cables and other documents from the Foreign Ministry of Saudi Arabia. It's a huge trove, and already reporters are writing stories about the highly secretive government.
What Saudi Arabia is experiencing isn't common but part of a growing trend.
Just last week, unknown hackers broke into the network of the cyber-weapons arms manufacturer Hacking Team and published 400 gigabytes of internal data, describing, among other things, its sale of Internet surveillance software to totalitarian regimes around the world.
Encryption protects our data. It protects our data when it’s sitting on our computers and in data centres, and it protects it when it's being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy.
Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It's a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden's actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?
I believe the answer is certainly yes, but that it's almost certainly not Snowden's fault.
From May 26th to June 5th, 2015, The Economist hosted a debate on cloud computing, with Ludwig Siegele as moderator, Simon Crosby taking the Yes position, and Bruce Schneier as No. For the full debate, see The Economist's site. Bruce's entries are reprinted below.
Yes. No. Yes. Maybe. Yes. Okay, it’s complicated.
The economics of cloud computing are compelling.
Imagine this: A terrorist hacks into a commercial airplane from the ground, takes over the controls from the pilots and flies the plane into the ground. It sounds like the plot of some "Die Hard" reboot, but it's actually one of the possible scenarios outlined in a new Government Accountability Office report on security vulnerabilities in modern airplanes.
It's certainly possible, but in the scheme of Internet risks I worry about, it's not very high. I'm more worried about the more pedestrian attacks against more common Internet-connected devices.
The Sony hack revealed the challenges of identifying perpetrators of cyberattacks, especially as hackers can masquerade as government soldiers and spies, and vice versa. It's a dangerous new dynamic for foreign relations, especially as what governments know about hackers – and how they know it – remains secret.
The vigorous debate after the Sony Pictures breach pitted the Obama administration against many of us in the cybersecurity community who didn't buy Washington's claim that North Korea was the culprit.
What's both amazing—and perhaps a bit frightening—about that dispute over who hacked Sony is that it happened in the first place.
But what it highlights is the fact that we're living in a world where we can't easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget.
In December Google's Executive Chairman Eric Schmidt was interviewed at the CATO Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: "If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else."
That surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at that. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.
Thousands of articles have called the December attack against Sony Pictures a wake-up call to industry. Regardless of whether the attacker was the North Korean government, a disgruntled former employee, or a group of random hackers, the attack showed how vulnerable a large organization can be and how devastating the publication of its private correspondence, proprietary data, and intellectual property can be.
But while companies are supposed to learn that they need to improve their security against attack, there's another equally important but much less discussed lesson here: companies should have an aggressive deletion policy.
One of the social trends of the computerization of our business and social communications tools is the loss of the ephemeral.
American history is littered with examples of classified information pointing us towards aggression against other countries—think WMDs—only to later learn that the evidence was wrong.
When you're attacked by a missile, you can follow its trajectory back to where it was launched from. When you're attacked in cyberspace, figuring out who did it is much harder. The reality of international aggression in cyberspace will change how we approach defense.
Many of us in the computer-security field are skeptical of the U.S.
Welcome to a world where it's impossible to tell the difference between random hackers and governments.
If anything should disturb you about the Sony hacking incidents and subsequent denial-of-service attack against North Korea, it's that we still don't know who's behind any of it. The FBI said in December that North Korea attacked Sony. I and others have serious doubts. There's countervailing evidence to suggest that the culprit may have been a Sony insider or perhaps Russian nationals.
No one has admitted taking down North Korea's Internet.
It's too early to take the U.S. government at its word.
I am deeply skeptical of the FBI's announcement on Friday that North Korea was behind last month's Sony hack. The agency's evidence is tenuous, and I have a hard time believing it. But I also have trouble believing that the U.S. government would make the accusation this formally if officials didn't believe it.
A focused, skillful cyber attacker will always get in, warns a security expert.
Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment's computer systems and began revealing many of the Hollywood studio's best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama's presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of "The Interview," a satire targeting that country's dictator, after the hackers made some ridiculous threats about terrorist violence.
Your reaction to the massive hacking of such a prominent company will depend on whether you're fluent in information-technology security. If you're not, you're probably wondering how in the world this could happen.
First we thought North Korea was behind the Sony cyberattacks. Then we thought it was a couple of hacker guys with an axe to grind. Now we think North Korea is behind it again, but the connection is still tenuous. There have been accusations of cyberterrorism, and even cyberwar.
The Intercept has published an article—based on the Snowden documents—about AURORAGOLD, an NSA surveillance operation against cell phone network operators and standards bodies worldwide. This is not a typical NSA surveillance operation where agents identify the bad guys and spy on them. This is an operation where the NSA spies on people designing and building a general communications infrastructure, looking for weaknesses and vulnerabilities that will allow it to spy on the bad guys at some later date.
In that way, AURORAGOLD is similar to the NSA's program to hack sysadmins around the world, just in case that access will be useful at some later date; and to the GCHQ's hacking of the Belgian phone company Belgacom.
Antivirus companies had tracked the sophisticated—and likely U.S.-backed—Regin malware for years. But they kept what they learned to themselves.
Last week we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It's more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there's substantial evidence that it was built and operated by the United States.
This isn't the first government malware discovered.
Last week Apple announced that it is closing a serious security vulnerability in the iPhone. It used to be that the phone's encryption only protected a small amount of the data, and Apple had the ability to bypass security on the rest of it.
From now on, all the phone's data is protected. It can no longer be accessed by criminals, governments, or rogue employees.
Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network.
Chinese hacking of American computer networks is old news. For years we've known about their attacks against U.S. government and corporate targets. We've seen detailed reports of how they hacked The New York Times.
There's a debate going on about whether the U.S. government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access into that system.
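To make that definition concrete, here is a minimal illustrative sketch (my example, not from the essay) of such a programming mistake in Python: a "calculator" that passes untrusted input to `eval`, handing an adversary arbitrary code execution, alongside a safer variant that accepts only literals.

```python
# Illustrative sketch: one function contains a classic vulnerability,
# the other shows the fix. Function names are hypothetical.
import ast

def vulnerable_calc(expression):
    # BUG: eval() will run any Python the user supplies, e.g.
    # "__import__('os').system('...')" -- a programming mistake
    # that gives an adversary access into the system.
    return eval(expression)

def safer_calc(expression):
    # Fix: ast.literal_eval() accepts only Python literals
    # (numbers, strings, tuples, lists, dicts) and raises
    # ValueError on anything else, including function calls.
    return ast.literal_eval(expression)

if __name__ == "__main__":
    print(vulnerable_calc("2 + 3"))   # works -- but so would an exploit
    print(safer_calc("(1, 2, 3)"))    # safe subset only
```

The difference between the two functions is exactly the kind of mistake the essay describes: same feature, but one implementation trusts its input and the other does not.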
The Heartbleed bug that was reported in April allowed hackers to steal private online information. Cyber-security analyst Bruce Schneier argues that such technical vulnerabilities always arise from human errors.
The announcement on April 7 was alarming. A new internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.
Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there's no record.
This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn't still available.
Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
As insecure as passwords generally are, they're not going away anytime soon. Every year you have more and more passwords to deal with, and every year they get easier and easier to break. You need a strategy.
The best way to explain how to choose a good password is to explain how they're broken.
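As a hedged illustration of that point (my sketch, not from the essay): when a site's password database leaks, attackers don't guess against the login page. They run an offline attack, hashing candidate guesses — dictionary words plus common mutations — and comparing them against the stolen hashes. Predictable passwords fall almost immediately.

```python
# Illustrative sketch of an offline dictionary attack against a
# leaked password hash. The wordlist and mutation rule are toy
# examples; real attackers use far larger lists and rule sets.
import hashlib

def sha256_hex(password):
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(target_hash, wordlist):
    # Try each word as-is, plus the common "append a digit" mutation.
    for word in wordlist:
        for suffix in [""] + [str(d) for d in range(10)]:
            guess = word + suffix
            if sha256_hex(guess) == target_hash:
                return guess
    return None

if __name__ == "__main__":
    leaked = sha256_hex("monkey7")  # a weak, guessable password
    words = ["password", "letmein", "monkey", "dragon"]
    print(dictionary_attack(leaked, words))
```

The lesson for choosing passwords follows directly: anything built from a dictionary word plus a simple mutation is inside the attacker's search space, so a good password has to be something no cracking rule would generate.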
We're at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself—as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there's no good way to patch them.
It's not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them.
We already know the NSA wants to eavesdrop on the internet. It has secret agreements with telcos to get direct access to bulk internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.
Since I started working with Snowden's documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.
I also recommended using an air gap, which physically isolates a computer or local network of computers from the internet. (The name comes from the literal gap of air between the computer and the internet; the word predates wireless networks.)
But this is more complicated than it sounds, and requires explanation.
Cyber War Will Not Take Place
by Thomas Rid
Hurst & Co., 2013, 218 pp.
ISBN: 978 1 84904 280 2
Cyber war is possibly the most dangerous buzzword of the Internet era. The fear-inducing rhetoric surrounding it is being used to justify major changes in the way the internet is organised, governed, and constructed. And in Cyber War Will Not Take Place, Thomas Rid convincingly argues that cyber war is not a compelling threat. Rid is one of the leading cyber war sceptics in Europe, and although he doesn't argue that war won't extend into cyberspace, he says that cyberspace's role in war is more limited than doomsayers want us to believe.
The primary difficulty of cyber security isn't technology—it's policy. The Internet mirrors real-world society, which makes security policy online as complicated as it is in the real world. Protecting critical infrastructure against cyber-attack is just one of cyberspace's many security challenges, so it's important to understand them all before any one of them can be solved.
The list of bad actors in cyberspace is long, and spans a wide range of motives and capabilities.
When Apple bought AuthenTec for its biometrics technology—reported as one of its most expensive purchases—there was a lot of speculation about how the company would incorporate biometrics in its product line. Many speculate that the new Apple iPhone to be announced tomorrow will come with a fingerprint authentication system, and there are several ways it could work, such as swiping your finger over a slit-sized reader to have the phone recognize you.
Apple would be smart to add biometric technology to the iPhone. Fingerprint authentication is a good balance between convenience and security for a mobile device.
The NSA has huge capabilities – and if it wants in to your computer, it's in. With that in mind, here are five ways to stay safe
Now that we have enough details about how the NSA eavesdrops on the internet, including today's disclosures of the NSA's deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.
For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn't part of today's story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.
At this point, I feel I can provide some advice for keeping secure against such an adversary.
The NSA has undermined a fundamental social contract. We engineers built the internet – and now we have to fix it
Government and industry have betrayed the internet, and us.
By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.
This is not the internet the world needs, or the internet its creators envisioned.
The latest Snowden document is the US intelligence 'black budget.' There's a lot of information in the few pages the Washington Post decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: 'Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.'
Honestly, I'm skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks.
I jacked a visitor's badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they're enabled when you check in at building security. You're supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.
I kept the badge.
The Syrian Electronic Army attacked again this week, compromising the websites of the New York Times, Twitter, the Huffington Post and others.
Political hacking isn't new. Hackers were breaking into systems for political reasons long before commerce and criminals discovered the Internet. Over the years, we've seen U.K.
Last weekend a Texas couple apparently discovered that the electronic "baby monitor" in their children's bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child's father unplugged the monitor.
What does this mean for the rest of us? How secure are consumer electronic systems, now that they're all attached to the Internet?
Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don't know which ones.
The focus on training obscures the failures of security design
Should companies spend money on security awareness training for their employees? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry's focus on training serves to obscure greater failings in security design.
Cyber-espionage is old news. What's new is the rhetoric, which is reaching a fever pitch right now.
For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We've started to see increased concern about the country of origin of IT products and services; U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.
I see this as an effect of all the cyberwar saber-rattling that's going on right now.
Society runs on trust. Over the millennia, we've developed a variety of mechanisms to induce trustworthy behavior in society. These range from a sense of guilt when we cheat, to societal disapproval when we lie, to laws that arrest fraudsters, to door locks and burglar alarms that keep thieves out of our homes. They're complicated and interrelated, but they tend to keep society humming along.
We're in the early years of a cyberwar arms race. It's expensive, it's destabilising and it threatens the very fabric of the internet we use every day. Cyberwar treaties, as imperfect as they might be, are the only way to contain the threat.
If you read the press and listen to government leaders, we're already in the middle of a cyberwar.
Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone.
A lot of the debate around President Obama's cybersecurity initiative centers on how much of a burden it would be on industry, and how that should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.
It's difficult to have any serious policy discussion amid the fearmongering. Secretary Panetta's recent comments are just the latest; search the Internet for "cyber 9/11," "cyber Pearl Harbor," "cyber Katrina," or -- my favorite -- "cyber Armageddon."
There's an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations.
This essay originally appeared as part of a series of advice columns on how to break into the field of security.
I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.
First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks.
ABSTRACT: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.
Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon (1, 2).
Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It's not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it's not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum.
The whitelist/blacklist debate is far older than computers, and it's instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it's easier -- although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn't -- but because it's a security system that can be implemented automatically, without people.
To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed.
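The contrast between the two models can be sketched in a few lines of Python (the names here are illustrative, not from the essay): a whitelist denies by default and admits only listed principals, while a blacklist admits by default and blocks only listed ones.

```python
# Whitelist: default deny. Only people with a "key" get through the door.
KEY_HOLDERS = {"alice", "bob"}

# Blacklist: default allow. Only people who have been ejected are blocked.
BANNED = {"mallory"}

def door_allows(person):
    # Whitelist check: present in the allowed set, or refused.
    return person in KEY_HOLDERS

def mall_allows(person):
    # Blacklist check: admitted unless explicitly banned.
    return person not in BANNED

assert door_allows("alice") and not door_allows("carol")
assert mall_allows("carol") and not mall_allows("mallory")
```

The asymmetry is the point: the whitelist fails closed when it encounters someone unknown, the blacklist fails open.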
Keeping infected computers at bay is great in theory, but there are all sorts of complicating factors to consider.
Last month Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users' computers so they could rejoin the greater Internet.
This isn't a new idea.
How often should you change your password? I get asked that question a lot, usually by people annoyed at their employer's or bank's password expiration policy -- people who finally memorized their current password and are realizing they'll have to write down their new one. How could that possibly be more secure, they want to know.
The answer depends on what the password is used for.
This essay appeared as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
In 2003, a group of security experts -- myself included -- published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, eight years later, Marcus and I thought it would be interesting to revisit the debate.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum.
If you're a typical wired American, you've got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You've got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it's available.
Last month, Sen. Joe Lieberman, I-Conn., introduced a bill that might -- we're not really sure -- give the president the authority to shut down all or portions of the Internet in the event of an emergency. It's not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly.
There's a power struggle going on in the U.S. government right now.
It's about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.
For a while now, I've pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.
Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper.
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
Any essay on hiring hackers quickly gets bogged down in definitions. What is a hacker, and how is he different from a cracker? I have my own definitions, but I'd rather define the issue more specifically: Would you hire someone convicted of a computer crime to fill a position of trust in your computer network?
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We'll know who is sending us spam and who is trying to hack into corporate networks.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum. Marcus's half is here.
Information technology is increasingly everywhere, and it's the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping.
Google made headlines when it went public with the fact that Chinese hackers had penetrated some of its services, such as Gmail, in a politically motivated attempt at intelligence gathering. The news here isn't that Chinese hackers engage in these activities or that their attempts are technically sophisticated -- we knew that already -- it's that the U.S. government inadvertently aided the hackers.
In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts.
Security is rarely static. Technology changes the capabilities of both security systems and attackers. But there's something else that changes security's cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose -- one it wasn't suited for in the first place.
Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.
The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren't "hacked" -- the insurgents can't control them -- but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.
An SSL security flaw got bloggers hot and bothered, but it's the vendors who need to take action
Last month, researchers found a security flaw in the SSL protocol, which is used to protect sensitive web data. The protocol is used for online commerce, webmail, and social networking sites. Basically, hackers could hijack an SSL session and execute commands without the knowledge of either the client or the server. The list of affected products is enormous.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum. Marcus's half is here.
Security is never black and white. If someone asks, "for best security, should I do A or B?" the answer almost invariably is both. But security is always a trade-off.
In the eternal arms race between bad guys and those who police them, automated systems can have perverse effects
A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists demonstrated that it's possible to fabricate DNA evidence.
By Bruce Schneier
In computer security, a lot of effort is spent on the authentication problem. Whether it's passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated -- and hopefully more secure -- ways for you to prove you are who you say you are over the Internet.
This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you're no longer there?
Our use of social networking, as well as iPhones and Kindles, relinquishes control of how we delete files -- we need that back
File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn't care about whether the file could be recovered or not, and a file erase program -- I use BCWipe for Windows -- if you wanted to ensure no one could ever recover the file.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum. Marcus's half is here.
Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there's more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant.
China is the world's most successful Internet censor. While the Great Firewall of China isn't perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.
Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package.
Since January, the Conficker.B worm has been spreading like wildfire across the internet, infecting the French navy, hospitals in Sheffield, the court system in Houston, Texas, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: why are IT administrators still using easy-to-guess passwords?
Computer authentication systems have two basic requirements.
Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything -- and then replicate itself on all 4,000 Fannie Mae servers.
The Internet isn't really for us. We're here at the beginning, stumbling around, just figuring out what it's good for and how to use it. The Internet is for those born into it, those who have woven it into their lives from the beginning. The Internet is the greatest generation gap since rock and roll, and only our children can hope to understand it.
As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt's immigration status. And in November, Verizon employees peeked at his cellphone records.
These days, losing electronic devices is less about the hardware and more about the data. Hardly a week goes by without another newsworthy data loss. People leave thumb drives, memory sticks, mobile phones and even computers everywhere. And some of that data isn't easily replaceable.
When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country's historical record.
This reality of the information age might be particularly stark for the president, but it's no less true for all of us. Conversation used to be ephemeral.
You might not have realized it, but the next great battle of cryptography began this month. It's not a political battle over export laws or key escrow or NSA eavesdropping, but an academic battle over who gets to be the creator of the next hash standard.
Hash functions are the most commonly used cryptographic primitive, and the most poorly understood. You can think of them as fingerprint functions: They take an arbitrarily long data stream and return a fixed-length, effectively unique string.
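The fingerprint analogy is easy to see with a few lines of Python using the standard library's `hashlib` (SHA-256 here is just a convenient example hash, not the standard the essay discusses):

```python
import hashlib

# Inputs of wildly different sizes produce digests of identical length.
short_digest = hashlib.sha256(b"hello").hexdigest()
long_digest = hashlib.sha256(b"hello" * 100_000).hexdigest()
assert len(short_digest) == len(long_digest) == 64  # 256 bits as hex

# Changing a single character of the input yields a completely
# different, unpredictable digest -- the "effectively unique" property.
assert hashlib.sha256(b"hellp").hexdigest() != short_digest
```

Collisions must exist in principle, since infinitely many inputs map to finitely many outputs; the security requirement is that nobody can find one on purpose.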
This essay also appeared in The Hindu.
I've been reading a lot about how passwords are no longer good security. The reality is more complicated. Passwords are still secure enough for many applications, but you have to choose a good one.
Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.
The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg's uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper's presence. No disturbance, no eavesdropper -- period.
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses.
Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven't already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.
The details of the vulnerability aren't important, but basically it's a form of DNS cache poisoning.
A recent study of Internet browsers worldwide discovered that over half -- 52% -- of Internet Explorer users weren't using the current version of the software. For other browsers the numbers were better, but not much: 17% of Firefox users, 35% of Safari users, and 44% of Opera users were using an old version.
This is particularly important because browsers are an increasingly common vector for internet attacks, and old versions of browsers don't have all their security patches up to date. They're open to attack through vulnerabilities the vendors have already fixed.
Last week's dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.
In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they're talking to each other, and the attacker can delete or modify the communications at will.
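A toy Python sketch makes the structure of the attack concrete (the parties and messages here are my own illustrative stand-ins): every message passes through the attacker, who can rewrite it without either endpoint noticing.

```python
def alice_send():
    # Alice believes this goes straight to Bob.
    return "transfer $100 to Bob"

def mallory_relay(message):
    # The man in the middle silently rewrites the payload in transit.
    return message.replace("to Bob", "to Mallory")

def bob_receive(message):
    # Bob believes the message came straight from Alice.
    return f"Bob executes: {message}"

intercepted = mallory_relay(alice_send())
print(bob_receive(intercepted))  # Bob executes: transfer $100 to Mallory
```

The defense, in both cryptography and the FARC rescue, is the same: the endpoints need some out-of-band way to authenticate who they are really talking to.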
It used to be that just the entertainment industries wanted to control your computers -- and televisions and iPods and everything else -- to ensure that you didn't violate any copyright rules. But now everyone else wants to get their hooks into your gear.
OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed.
The standard way to take control of someone else's computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it's still how most modern malware works.
Vulnerabilities are software mistakes--mistakes in specification and design, but mostly mistakes in programming.
On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and -- in many cases -- shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.
It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace.
It's a mystery to me why websites think "secret questions" are a good idea. We sign up for an online service, choose a hard-to-guess (and equally hard-to-remember) password, and are then presented with a "secret question" to answer.
Twenty years ago, there was just one secret question: what's your mother's maiden name? Today, there are several: what street did you grow up on?
Book Review of Access Denied
China restricts Internet access by keyword.
In 1993, Internet pioneer John Gilmore said "the net interprets censorship as damage and routes around it", and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his 'Declaration of the Independence of Cyberspace' at the World Economic Forum at Davos, Switzerland, and online. He told governments: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear."
At the time, many shared Barlow's sentiments.
We know what we don't like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don't like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don't work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.
Wine Therapy is a web bulletin board for serious wine geeks. It's been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year someone hacked the board software, got administrative privileges and deleted the database. There was no backup.
Buying an iPhone isn't the same as buying a car or a toaster. Your iPhone comes with a complicated list of rules about what you can and can't do with it. You can't install unapproved third-party applications on it. You can't unlock it and use it with the cellphone carrier of your choice.
Whenever I talk or write about my own security setup, the one thing that surprises people -- and attracts the most criticism -- is the fact that I run an open wireless network at home. There's no password. There's no encryption. Anyone with wireless capability who can see my network can use it to access the internet.
Bruce Schneier and Marcus Ranum look at the security landscape of the next 10 years.
Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
Moore's Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we'll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict.
Computer security is hard. Software, computer and network security are all ongoing battles between attacker and defender. And in many cases the attacker has an inherent advantage: He only has to find one network flaw, while the defender has to find and fix every flaw.
Cryptography is an exception.
Random numbers are critical for cryptography: for encryption keys, random authentication challenges, initialization vectors, nonces, key-agreement schemes, generating prime numbers and so on. Break the random-number generator, and most of the time you break the entire security system. Which is why you should worry about a new random-number standard that includes an algorithm that is slow, badly designed and just might contain a backdoor for the National Security Agency.
Generating random numbers isn't easy, and researchers have discovered lots of problems and attacks over the years.
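The danger is easy to demonstrate in Python (a sketch, not an attack on the standard in question): a deterministic PRNG like the Mersenne Twister behind `random` is completely predictable to anyone who learns its seed or state, which is why key material must come from a cryptographically secure source such as the `secrets` module.

```python
import random
import secrets

# Two generators seeded identically produce identical "random" output --
# an attacker who recovers the seed recovers every key derived from it.
rng = random.Random(1234)
attacker = random.Random(1234)
assert rng.getrandbits(128) == attacker.getrandbits(128)

# secrets draws from the operating system's CSPRNG and is the
# appropriate source for keys, nonces, and tokens.
key = secrets.token_bytes(16)
assert len(key) == 16
```

A backdoored generator is worse still: it looks random to everyone except the party holding the trapdoor.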
The hardest thing about working in IT security is convincing users to buy our technologies. An enormous amount of energy has been focused on this problem -- risk analyses, ROI models, audits -- yet critical technologies still remain uninstalled and important networks remain insecure. I'm constantly asked how to solve this by frustrated security vendors and -- sadly -- I have no good answer. But I know the problem is temporary: in the long run, the information security industry as we know it will disappear.
Having a liability clause is one good way to make sure that software vendors fix the security glitches in their products.
Information insecurity is costing us billions. We pay for it -- year after year -- when we buy security products and services. But all the money we spend isn't fixing the problem, which is insecure software. Typically, such software is badly designed and inadequately tested, with poorly implemented features and security vulnerabilities.
The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.
Although it's most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm, and I've seen estimates that between 1 million and 50 million computers have been infected worldwide.
Old style worms -- Sasser, Slammer, Nimda -- were written by hackers looking for fame.
Sports referees are supposed to be fair and impartial. They're not supposed to favor one team over another. And they're most certainly not supposed to have a financial interest in the outcome of a game.
Tim Donaghy, referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob.
To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system "out of the box," but there are still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ?
Last month Marine Gen. James Cartwright told the House Armed Services Committee that the best cyberdefense is a good offense.
As reported in Federal Computer Week, Cartwright said: "History teaches us that a purely defensive posture poses significant risks," and that if "we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests."
The general isn't alone. In 2003, the entertainment industry tried to get a law passed giving it the right to attack any computer suspected of distributing copyright-protected material. And there probably isn't a sysadmin in the world who doesn't want to strike back at computers that are blindly and repeatedly attacking their networks.
This essay appeared as the first half of a point-counterpoint with Marcus Ranum. Marcus's side can be found on his website.
There are security experts who insist penetration testing is essential for network security, and you have no hope of being secure unless you do it regularly. And there are contrarian security experts who tell you penetration testing is a waste of time; you might as well throw your money away. Both of these views are wrong.
The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.
This matters. The phrase "one-way hash function" might sound arcane and geeky, but hash functions are the workhorses of modern cryptography.
Identity theft is the information age's new crime. A criminal collects enough personal data on the victim to impersonate him to banks, credit card companies and other financial institutions. Then he racks up debt in the victim's name, collects the cash and disappears. The victim is left holding the bag.
Ever since I wrote about the 34,000 MySpace passwords I analyzed, people have been asking how to choose secure passwords.
My piece aside, there's been a lot written on this topic over the years -- both serious and humorous -- but most of it seems to be based on anecdotal suggestions rather than actual analytic evidence. What follows is some serious advice.
The attack I'm evaluating against is an offline password-guessing attack.
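An offline guessing attack is simple to sketch in Python (the hash choice and word list are illustrative, and real systems should use salted, deliberately slow password hashes): with a stolen hash in hand, the attacker hashes candidate passwords at machine speed, with no login rate limit in the way.

```python
import hashlib

# A stolen password hash, as an attacker might pull from a breached
# database. (Unsalted SHA-1 here purely for illustration.)
stolen = hashlib.sha1(b"password1").hexdigest()

# A hypothetical dictionary of common passwords.
wordlist = ["letmein", "123456", "password1", "qwerty"]

def crack(target, candidates):
    # Hash each guess and compare against the stolen digest.
    for guess in candidates:
        if hashlib.sha1(guess.encode()).hexdigest() == target:
            return guess
    return None

print(crack(stolen, wordlist))  # password1
```

This is why the evaluation in what follows assumes the attacker can make billions of guesses: the only defense is a password far enough outside the dictionary.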
Full disclosure -- the practice of making the details of security vulnerabilities public -- is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.
Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?).
This essay is an update of Information security: How liable should vendors be?, Computerworld, October 28, 2004.
Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis.
How good are the passwords people are choosing to protect their computers and online accounts?
It's a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.
The attack was pretty basic.
Spam is filling up the Internet, and it's not going away anytime soon.
It's not just e-mail. We have voice-over-IP spam, instant message spam, cellphone text message spam, blog comment spam and Usenet newsgroup spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam) and cars driving through town with megaphones (audio spam).
If you really want to see Microsoft scramble to patch a hole in its software, don't look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond's DRM.
Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory -- and then quietly fix the problem in the next software release.
What could you do if you controlled a network of thousands of computers -- or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic problems.
All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects.
Google's $6 billion-a-year advertising business is at risk because it can't be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.
With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site.
This essay appeared as part of a point-counterpoint with Marcus Ranum.
I've long been hostile to certifications -- I've met too many bad security professionals with certifications and know many excellent security professionals without them. But I've come to believe that, while certifications aren't perfect, they're a decent way for a security professional to learn some of the things he's going to need to know, and for a potential employer to assess whether a job candidate has the security expertise the job requires.
What's changed? Both the job requirements and the certification programs.
Have you ever been to a retail store and seen this sign on the register: "Your purchase free if you don't get a receipt"? You almost certainly didn't see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store.
When technology serves its owners, it is liberating. When it is designed to serve others, over the owner's objection, it is oppressive. There's a battle raging on your computer right now -- one that pits you against worms and viruses, Trojans, spyware, automatic update features and digital rights management technologies. It's the battle to determine who owns your computer.
One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails. An example is a firewall coupled with an intrusion-detection system (IDS). Defense in depth provides security, because there's no single point of failure and no assumed single vector for attacks.
It is for this reason that a choice between implementing network security in the middle of the network -- in the cloud -- or at the endpoints is a false dichotomy.
Some years ago, I left my laptop computer on a train from Washington to New York. Replacing the computer was expensive, but at the time I was more worried about the data.
Of course I had good backups, but now a copy of all my e-mail, client files, personal writings and book manuscripts were ... well, somewhere. Probably the drive would be erased by the computer's new owner, but maybe my personal and professional life would end up in places I didn't want them to be.
How would you feel if you invested millions of dollars in quantum cryptography, and then learned that you could do the same thing with a few 25-cent Radio Shack components?
I'm exaggerating a little here, but if a new idea out of Texas A&M University turns out to be secure, we've come close.
Earlier this month, Laszlo Kish proposed securing a communications link, like a phone or computer line, with a pair of resistors. By adding electronic noise, or using the natural thermal noise of the resistors -- called "Johnson noise" -- Kish can prevent eavesdroppers from listening in.
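The physical resource Kish exploits is standard physics, not anything new in his paper: the RMS Johnson noise voltage across a resistor depends on its resistance.

```latex
% RMS Johnson-Nyquist noise voltage across a resistor R
% at absolute temperature T, measured over bandwidth \Delta f:
V_{\mathrm{rms}} = \sqrt{4 k_B T R \, \Delta f}
```

Roughly, because the noise level is set by $R$, an eavesdropper measuring the shared line learns only the combined resistance of the two ends; when the parties secretly pick one large and one small resistor, the eavesdropper cannot tell which end holds which.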
Over the past few years, we have seen hacking transform from a hobbyist activity to a criminal one. Hobbyist threats included defacing web pages, releasing worms that did damage, and running denial-of-service attacks against major networks. The goal was fun, notoriety, or just plain malice.
The new criminal attacks have a more focused goal: profit.
It's a David and Goliath story of the tech blogs defeating a mega-corporation.
On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG Music Entertainment distributed a copy-protection scheme with music CDs that secretly installed a rootkit on computers. This software tool is run without your knowledge or consent -- if it's loaded on your computer with a CD, a hacker can gain and maintain access to your system and you wouldn't know it.
The Sony code modifies Windows so you can't tell it's there, a process called "cloaking" in the hacker world.
If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.
Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.
At a security conference last week, Howard Schmidt, the former White House cybersecurity adviser, took the bold step of arguing that software developers should be held personally accountable for the security of the code they write.
He's on the right track, but he's made a dangerous mistake. It's the software manufacturers that should be held liable, not the individual programmers. Getting this one right will result in more-secure software for everyone; getting it wrong will simply result in a lot of messy lawsuits.
Last week California became the first state to enact a law specifically addressing phishing. Phishing, for those of you who have been away from the internet for the past few years, is when an attacker sends you an e-mail falsely claiming to be a legitimate business in order to trick you into giving away your account info -- passwords, mostly. When this is done by hacking DNS, it's called pharming.
Financial companies have until now avoided taking on phishers in a serious way, because it's cheaper and simpler to pay the costs of fraud.
In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It's easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that's really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.
Counterpane Internet Security Inc. monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security "tickets." What follows is an overview of what's happening on the Internet right now, and what we expect to happen in the coming months.
In 2004, 41 percent of the attacks we saw were unauthorized activity of some kind, 21 percent were scanning, 26 percent were unauthorized access, 9 percent were DoS (denial of service), and 3 percent were misuse of applications.
Over the past few months, the two attack vectors that we saw in volume were against the Windows DCOM (Distributed Component Object Model) interface of the RPC (remote procedure call) service and against the Windows LSASS (Local Security Authority Subsystem Service).
Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft (see www.schneier.com/essay-083.html). For example, issuing tokens to online banking customers won't reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true.
It's happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a "secret question" to answer. Twenty years ago, there was just one secret question: "What's your mother's maiden name?" Today, there are more: "What street did you grow up on?" "What's the name of your first pet?" "What's your favorite color?" And so on.
The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you.
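The weakness is simple arithmetic: a system that accepts either the password or the secret-question answer is only as strong as the weaker of the two. A sketch with assumed, purely illustrative figures:

```python
import math

# Assumed figures for illustration, not measurements.
password_entropy = math.log2(62 ** 8)   # 8 random alphanumerics: ~47.6 bits
question_entropy = math.log2(1_000)     # ~1,000 plausible pet names: ~10 bits

# An attacker takes whichever path is cheaper to guess.
effective_entropy = min(password_entropy, question_entropy)
print(f"effective security: ~{effective_entropy:.1f} bits")
```

Under these assumptions, the hard-to-guess password buys nothing: the secret question caps the system at about ten bits.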
I am regularly asked what average Internet users can do to ensure their security. My first answer is usually, "Nothing--you're screwed."
But that's not true, and the reality is more complicated. You're screwed if you do nothing to protect yourself, but there are many things you can do to increase your security on the Internet.
Two years ago, I published a list of PC security recommendations.
Last month, Google released a beta version of its desktop search software: Google Desktop Search. Install it on your Windows machine, and it creates a searchable index of your data files, including word processing files, spreadsheets, presentations, e-mail messages, cached Web pages and chat sessions. It's a great idea. Windows' searching capability has always been mediocre, and Google fixes the problem nicely.
An update to this essay was published in ENISA Quarterly in January 2007.
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses.
Considerable confusion exists between the different concepts of secrecy and security, which often causes bad security and surprising political arguments. Secrecy usually contributes only to a false sense of security.
In June 2004, the U.S. Department of Homeland Security urged regulators to keep network outage information secret.
The Data Encryption Standard, or DES, was a mid-'70s brainchild of the National Bureau of Standards: the first modern, public, freely available encryption algorithm. For over two decades, DES was the workhorse of commercial cryptography.
Over the decades, DES has been used to protect everything from databases in mainframe computers, to the communications links between ATMs and banks, to data transmissions between police cars and police stations. Whoever you are, I can guarantee that many times in your life, the security of your data was protected by DES.
U.S. Security Blocks Free Exchange of Ideas
Cryptography is the science of secret codes, and it is a primary Internet security tool to fight hackers, cyber crime, and cyber terrorism. CRYPTO is the world's premier cryptography conference. It's held every August in Santa Barbara.
We in the computer security industry are guilty of over-hyping and under-delivering. Again and again, we tell customers that they need to buy this or that product in order to be secure. Again and again, customers buy the products and are still not secure.
Firewalls didn't keep out network attackers, in part because the notion of "perimeter" is severely flawed.
It was a historic moment when, last month, the National Institute of Standards and Technology proposed withdrawing the Data Encryption Standard as an encryption standard.
DES has been the most popular encryption algorithm for 25 years. Developed at IBM, it was chosen by the National Bureau of Standards (now NIST) as the government-standard encryption algorithm in 1976. Since then, it has become an international encryption standard and has been used in thousands of applications, despite concerns about its short key length.
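The "short key length" concern is easy to quantify. A sketch of the brute-force arithmetic (the trial rate is an assumption for illustration, not a benchmark of any real machine):

```python
# Exhaustive search over the DES keyspace at an assumed trial rate.
keyspace = 2 ** 56          # every possible 56-bit DES key
rate = 10 ** 9              # assumed: one billion key trials per second
seconds = keyspace / rate
days = seconds / 86_400
print(f"{keyspace:,} keys -> about {days:.0f} days at {rate:,} trials/sec")
```

Parallel hardware shrinks that number further: the EFF's Deep Crack machine demonstrated a real DES key search in days back in 1998.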
At the Crypto 2004 conference in Santa Barbara, Calif., this week, researchers announced several weaknesses in common hash functions. These results, while mathematically significant, aren't cause for alarm. But even so, it's probably time for the cryptography community to get together and create a new hash standard.
One-way hash functions are a cryptographic construct used in many applications.
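Two properties those applications rely on show up in a few lines: the digest has a fixed size regardless of input length, and changing even one input character scrambles it completely. (SHA-256 is used here for convenience; the Crypto 2004 results concerned MD5 and related functions.)

```python
import hashlib

# Hash two messages that differ in a single character.
d1 = hashlib.sha256(b"pay Alice $100").hexdigest()
d2 = hashlib.sha256(b"pay Alice $900").hexdigest()

print(len(d1))   # fixed-size output: 64 hex characters

# Avalanche effect: count how many hex digits differ.
diff = sum(a != b for a, b in zip(d1, d2))
print(diff)
```

For a well-behaved hash, roughly fifteen out of every sixteen hex digits differ between the two digests, which is what makes digests useful as compact fingerprints.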
Criminals follow money. Today, more and more money is on the Internet: millions of people manage their bank, PayPal, or other accounts -- and even their stock portfolios -- online. It's a tempting target -- if criminals can access one of these accounts, they can steal a lot of money.
And almost all these accounts are protected only by passwords.
If press coverage is any guide, then the Witty worm wasn't all that successful. Blaster, SQL Slammer, Nimda, even Sasser made bigger headlines. Witty infected only about 12,000 machines, almost none of them home users. It didn't seem like a big deal.
The security of your computer and network depends on two things: what you do to secure your computer and network, and what everyone else does to secure their computers and networks. It's not enough for you to maintain a secure network. If other people don't maintain their security, we're all more vulnerable to attack. When many unsecure computers are connected to the Internet, worms spread faster and more extensively, distributed denial-of-service attacks are easier to launch, and spammers have more platforms from which to send e-mail.
Recently I have been receiving e-mails from PayPal. At least, they look like they're from PayPal. They send me to a Web site that looks like it's from PayPal. And it asks for my password, just like PayPal. The problem is that it's not from PayPal, and if I do what the Web site says, some criminal is going to siphon money out of my bank account.
Welcome to the third wave of network attacks, what I have named "semantic attacks." They are much more serious and harder to defend against because they attack the user and not the computers. And they're the future of fraud on the Internet.
The first wave of attacks against the Internet was physical: against the computers, wires and electronics.
Did MSBlast cause the Aug. 14 blackout? The official analysis says "no," but I'm not so sure. An interim report issued in November by a panel of government and industry officials concluded that the blackout was caused by a chain of failures starting at FirstEnergy, a power company in Ohio. A series of human and computer failures then turned a small problem into a major one. And because critical alarm systems failed, workers at FirstEnergy didn't know what was happening and couldn't stop the cascade.
Computer security is not a problem that technology can solve. Security solutions have a technological component, but security is fundamentally a people problem. Businesses approach security as they do any other business uncertainty: in terms of risk management. Organizations optimize their activities to minimize their cost-risk product, and understanding those motivations is key to understanding computer security today.
"The Slammer worm was the fastest computer worm in history. As it began spreading throughout the Internet, it doubled in size every 8.5 seconds. It infected more than 90 percent of vulnerable hosts within 10 minutes." (See "Inside the Slammer Worm," p. 33 of this issue.)

For the six months prior to the Sapphire (or SQL Slammer) worm's release, the particular vulnerability that Slammer exploited was one of literally hundreds already known.
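An 8.5-second doubling time compounds brutally. A sketch of the idealized arithmetic (an unconstrained exponential model; real spread saturates once the vulnerable hosts run out, which is why the observed infection count topped out near the vulnerable population):

```python
# Idealized exponential spread using the quoted doubling time.
doubling_time = 8.5                 # seconds, from the quoted study
elapsed = 10 * 60                   # the quoted ten-minute window
doublings = elapsed / doubling_time
nominal_growth = 2.0 ** doublings   # growth factor if nothing saturated
print(f"{doublings:.1f} doublings, nominal factor {nominal_growth:.1e}")
```

Roughly seventy doublings in ten minutes: the nominal growth factor dwarfs any plausible host population, so in practice the worm's spread was limited only by how fast it could find the remaining vulnerable machines.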
Testimony and Statement for the Record of Bruce Schneier
Chief Technical Officer, Counterpane Internet Security, Inc.
Hearing on "Overview of the Cyber Problem-A Nation Dependent and Dealing with Risk"
Before the Subcommittee on Cybersecurity, Science, and Research and Development
Committee on Homeland Security
United States House of Representatives
June 25, 2003
2318 Rayburn House Office Building
Mr. Chairman, members of the Committee, thank you for the opportunity to testify today regarding cybersecurity, particularly in its relation to homeland defense and our nation's critical infrastructure. My name is Bruce Schneier, and I have worked in the field of computer security for my entire career. I am the author of seven books on the topic, including the best-selling Secrets and Lies: Digital Security in a Networked World.
Internet security is usually described as a fortress, with the good guys inside the wall and the bad guys outside. Network owners buy products to shore up the barrier, on the logic that a stronger wall will give them better security. Flaws in the network are holes in the barricade, patches the mortar that closes them.
This metaphor might have been appropriate 10 years ago, when the Internet was made up of disparate networks that occasionally communicated, but it's outdated today.
Forget It: Bland PR Document Has Only Recommendations
AT 60 pages, the White House's National Strategy to Secure Cyberspace is an interesting read, but it won't help to secure cyberspace. It's a product of consensus, so it doesn't make any of the hard choices necessary to radically increase cyberspace security. Consensus doesn't work in security design, and invariably results in bad decisions. It's the compromises that are harmful, because the more parties you have in the discussion, the more interests there are that conflict with security.
THERE'S considerable confusion between the concepts of secrecy and security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.
Last month, the SQL Slammer worm ravaged the Internet, infecting in some 15 minutes about 13 root servers that direct information traffic, and disrupting services as diverse as Seattle's 911 network and many of Bank of America's 13,000 ATMs. The worm took advantage of a software vulnerability in a Microsoft database management program, one that allowed a malicious piece of software to take control of the computer.
The full disclosure vs. bug secrecy debate is a lot larger than computer security. Blaze's paper on master-key locking systems in this issue is an illustrative case in point. It turns out that the ways we've learned to conceptualize security and attacks in the computer world are directly applicable to other areas of security--like door locks. But the most interesting part of this entire story is that the locksmith community went ballistic after learning about what Blaze did.
Computer security is vital, and IEEE is launching this new magazine devoted to the topic. But there's more to security than what this magazine is going to talk about. If we don't help educate the average computer user about how to be a good security consumer, little of what we do matters.
Dozens of times a day, we are security consumers.
Network security is not a technological problem; it's a business problem. The only way to address it is to focus on business motivations. To improve the security of their products, companies - both vendors and users - must care; for companies to care, the problem must affect stock price. The way to make this happen is to start enforcing liabilities.
Microsoft Chairman Bill Gates should be given credit for making security and privacy a top priority for his legions of engineers, but we'll have to wait to see if his call represents a real change or just another marketing maneuver.
Microsoft has made so many empty claims about its security processes--and the security of its processes--that when I hear another one, I can't help believing it's more of the same flim-flam.
Anyone remember last November when Microsoft's Jim Allchin, group vice president, said in a published interview that all buffer overflows were eliminated in Windows XP? Or that the new operating system installed in a minimalist way, with features turned off by default?
Deciding to outsource network security is difficult. The stakes are high, so it's no wonder that paralysis is a common reaction when contemplating whether to outsource or not:
- The promised benefits of outsourced security are so attractive. The potential to significantly increase network security without hiring half a dozen people or spending a fortune is impossible to ignore.
- The potential risks of outsourcing are considerable. Stories of managed security companies going out of business, and bad experiences with outsourcing other areas of IT, show that selecting the wrong outsourcer can be a costly mistake.
If deciding whether to outsource security is difficult, deciding what to outsource and to whom seems impossible.
You may already be vulnerable
It used to be that when you connected to one of Counterpane's mailers, it responded with a standard SMTP banner that read something like the following:
220 counterpane.com ESMTP Sendmail 8.8.8/8.7.5; Mon, 7 May 2001 21:13:35 -0600 (MDT)
Because this information includes a Sendmail version number, some people sent us mail that read (loosely interpreted): "Heh, heh, heh. Bruce's company runs a stupid Sendmail!"
Until recently, our IT staff's standard response was to smile and say, "Yes, that certainly is what the banner says," leaving the original respondent to wonder why we didn't care. (There are a bunch of reasons we don't care, and explaining them would take both the amusement and security out of it all.)
However, we were getting a bit tired of the whole thing.
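Part of what makes a banner like that interesting to strangers is how mechanically it parses. A sketch of how a scanner might pull out the MTA name and version (the banner text and regex here are illustrative, not a real tool):

```python
import re

# A cleaned-up example banner of the kind quoted above.
banner = "220 counterpane.com ESMTP Sendmail 8.8.8/8.7.5; Mon, 7 May 2001 21:13:35 -0600 (MDT)"

# Hostname, MTA name, then a version-looking token before the ';'.
m = re.match(r"220 (\S+) ESMTP (\S+) ([\w./]+);", banner)
if m:
    host, mta, version = m.groups()
    print(f"{host} runs {mta} {version}")
```

A scanner feeds the extracted version into a vulnerability database lookup, which is exactly why people assumed the banner meant something.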
In the wake of the devastating attacks on New York's World Trade Center and the Pentagon, Sen. Judd Gregg (R-N.H.), with backing from other high-ranking government officials, quickly seized the opportunity to propose limits on strong encryption and "key-escrow" systems that ensure government access. This is a bad move, because it will do little to thwart terrorist activities and it will also reduce the security of our critical infrastructure.
As more and more of our nation's critical infrastructure goes digital, cryptography is more important than ever. We need all the digital security we can get; the government shouldn't be doing things that actually reduce it.
Most people don't understand the real lessons of Code Red II.
Code Red II could have been much worse. As it had full control of every machine it took over, it could have been programmed to do anything, including dropping the entire Internet. It could have spread faster and been stealthier.
The arrest of a Russian computer security researcher was a major setback for computer security research. The FBI nabbed Dmitry Sklyarov after he presented a paper at DefCon, the hacker community convention in Las Vegas, on the strengths and the weaknesses of software to encrypt an electronic book.
Although I'm certain the FBI's case will never hold up in court, it shows that free speech is secondary to the entertainment industry's paranoia about copyright protection.
Sklyarov is accused of violating the Digital Millennium Copyright Act (DMCA), which makes publishing critical research on this technology more serious than publishing design information on nuclear weapons.
Testimony and Statement for the Record of Bruce Schneier
Chief Technical Officer, Counterpane Internet Security, Inc.
Hearing on Internet Security before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science and Transportation
United States Senate
July 16, 2001
253 Russell Senate Office Building
My name is Bruce Schneier. I am the founder and Chief Technical Officer of Counterpane Internet Security, Inc. Counterpane was founded to address the immediate need for increased Internet security, and essentially provides burglar alarm services for computer networks.
One of the key reasons businesses have yet to link their business applications with telephone services is there's no common interface. While two standards under development promise to let businesses integrate and control telephony services, such as call forwarding and automatic number identification, with software, such as Web-based call center apps, these standards could introduce huge security risks.
These standards address key issues. One organization working in this space is The Parlay Group (www.parlay.org), a consortium of software, hardware and telecommunication service providers.
In warfare, information is power. The better you understand your enemy, the more able you are to defeat him.
In the war against malicious hackers, network intruders and the other black-hat denizens of cyberspace, the good guys have surprisingly little information. Most security experts-even those who design products to protect against attacks-are ignorant of the tools, tactics and motivations of the enemy.
Despite numerous efforts over the years to develop comprehensive computer security standards, it's a goal that remains elusive at best.
As far back as 1985, the U.S. government attempted to establish a general method for evaluating security requirements. This resulted in the "Orange Book," the colloquial name for the U.S. Department of Defense's Trusted Computer System Evaluation Criteria.
In a paper he wrote with Roger Needham, Ross Anderson coined the phrase "programming Satan's computer" to describe the problems faced by computer-security engineers. It's a phrase I've used ever since.
Programming a computer is straightforward: keep hammering away at the problem until the computer does what it's supposed to do. Large application programs and operating systems are a lot more complicated, but the methodology is basically the same.
Despite huge investments by corporations in computer security infrastructure, an overwhelming majority of companies are finding that their networks are still being compromised. And there's no reason to believe this will change anytime soon.
About 64 percent of companies' systems have been victims of some form of unauthorized access, according to a recent survey by the Computer Security Institute (CSI). While 25 percent said they had no breaches and 11 percent said they didn't know, I'd bet the actual number of companies that have been compromised is much higher.
Underwriters Laboratories (UL) is an independent testing organization created in 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (which is not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. Eventually, if your electrical equipment wasn't UL certified, you couldn't get insurance.
Today, UL rates all kinds of equipment, not just electrical.
When a hacker adds a back door to your computer systems for later unauthorized access, that's a serious threat. But it's an even bigger problem if you created the back door yourself.
It seems that Borland did just that with its Interbase database. All versions released for the past seven years (versions 4.x through 6.01) have a back door.
In the future, the computer security industry will be run by the insurance industry. I don't mean insurance companies will start selling firewalls, but rather the kind of firewall you use--along with the kind of authentication scheme you use, the kind of operating system you use, and the kind of network monitoring scheme you use--will be strongly influenced by the constraints of insurance.
Consider security and safety in the real world. Businesses don't install alarms in their warehouses because it makes them safer; they do it because they get a break in their insurance rates.
Reports that PGP, a standard used to encrypt e-mail, is broken are greatly exaggerated. Although a recent criminal investigation has led some to conclude that flaws in the PGP protocol helped the FBI nab its suspect, the truth is that no one has broken the cryptographic algorithms that protect PGP traffic. And no one has discovered a software flaw in the PGP program that would allow someone to read PGP-encrypted traffic. All that happened was that someone installed a keyboard sniffer on a computer, letting that someone eavesdrop on every keystroke the user made.
Eventually, the insurance industry will subsume the computer security industry. Not that insurance companies will start marketing security products, but rather that the kind of firewall you use--along with the kind of authentication scheme you use, the kind of operating system you use and the kind of network monitoring scheme you use--will be strongly influenced by the constraints of insurance.
Consider security, and safety, in the real world. Businesses don't install building alarms because it makes them feel safer; they do it to get a reduction in their insurance rates.
Hacking contests are a popular way for software companies to demonstrate claims of how good their security products are in practice. But companies looking to protect their digital assets shouldn't give too much credence to these challenges.
These contests typically involve a group or vendor offering money to anyone who can break through its firewall, crack its algorithm or make a fraudulent transaction using its technology. The Secure Digital Music Initiative (SDMI), an industry group that's developed encryption methods to protect the copying of digital music files, issued a hacking challenge in September, offering $10,000 to anyone who could strip various copy-protection technologies out of songs provided as examples.
Security threats will continue to loom
For the longest time, cryptography was a solution looking for a problem. And outside the military and a few paranoid individuals, there wasn't any problem. Then along came the Internet, and with the Internet came e-commerce, corporate intranets and extranets, voice over IP, B2B, and the like. Suddenly everyone is talking about cryptography.
Controlling what a user can do with a piece of data assumes a trust paradigm that doesn't exist in the real world. Software copy protection, intellectual property theft, digital watermarking -- different companies claim to solve different parts of this growing problem. Some companies market e-mail security solutions in which the e-mail cannot be read after a certain date, effectively "deleting" it. Other companies sell rights-management software: audio and video files that can't be copied or redistributed, data that can be read but not printed, and software that can't be copied.
The latest tale of security gaps in Microsoft Corp.'s software is a complicated story, and there are a lot of lessons to take away ... so let's take it chronologically.
On June 27, Georgi Guninski discovered a new vulnerability in Internet Explorer (4.0 or higher) and Microsoft Access (97 or 2000) running on Windows 95, 98, NT 4.0 or 2000. An attacker can compromise a user's system by getting the user to read an HTML e-mail message (not an attachment) or visit a Web site.
I've been writing the CryptoRhythms column for this magazine for a little over a year now. When the editor and I sat down a couple months ago to talk about topics for 2000, I told him I wanted to expand the focus a bit from crypto-specific topics to broader information security subjects. So even though the column still falls under the CryptoRhythms banner, you can expect some (but not all) of this year's columns to address broader security issues that in some way incorporate cryptography. This month's article does just that, focusing on the process of security.
Open any popular article on public-key infrastructure (PKI) and you're likely to read that a PKI is desperately needed for E-commerce to flourish. Don't believe it. E-commerce is flourishing, PKI or no PKI. Web sites are happy to take your order if you don't have a certificate and even if you don't use a secure connection.
Public-key infrastructure (PKI), usually meaning digital certificates from a commercial or corporate certificate authority (CA), is touted as the current cure-all for security problems.
Certificates provide an attractive business model. They cost almost nothing to manufacture, and you can dream of selling one a year to everyone on the Internet. Given that much potential income for CAs, we now see many commercial CAs, producing literature, press briefings and lobbying.
In 1999, the major developments in cryptography were more political than scientific. Of course, there were scientific conferences and scientific announcements, some of which were significant. But, by far, the most important events happened in the areas of law, court cases and regulation. As we move into the new millennium, these political and regulatory shifts could have resounding effects on the implementation of cryptography, especially in how it relates to balancing privacy concerns with the needs of government and law enforcement.
You can't secure what you don't understand.
Ask any 21 experts to predict the future, and they're likely to point in 21 different directions. But whatever the future holds--IP everywhere, smart cards everywhere, video everywhere, Internet commerce everywhere, wireless everywhere, agents everywhere, AI everywhere, everything everywhere--the one thing you can be sure of is that it will be complex. For consumers, this is great. For security professionals, this is terrifying.
A shortened version of this essay appeared in the November 15, 1999 issue of Computerworld as "Satan's Computer: Why Security Products Fail Us."
Almost every week the computer press covers another security flaw: a virus that exploits Microsoft Office, a vulnerability in Windows or UNIX, a Java problem, a security hole in a major Web site, an attack against a popular firewall. Why can't vendors get this right, we wonder? When will it get better?
I don't believe it ever will.
A version of this article appeared as a guest commentary on ZDNet.
The scheme to protect DVDs has been broken. There are now freeware programs on the net that remove the copy protection on DVDs, allowing them to be played, edited, and copied without restriction.
This should be no surprise to anyone, least of all to the entertainment industry.
The protection scheme is seriously flawed in several ways.
Cryptography is often treated as if it were magic security dust: "sprinkle some on your system, and it is secure; then, you're secure as long as the key length is large enough--112 bits, 128 bits, 256 bits" (I've even seen companies boast of 16,000 bits.) "Sure, there are always new developments in cryptanalysis, but we've never seen an operationally useful cryptanalytic attack against a standard algorithm. Even the analyses of DES aren't any better than brute force in most operational situations. As long as you use a conservative published algorithm, you're secure."
This just isn't true. Recently we've seen attacks that hack into the mathematics of cryptography and go beyond traditional cryptanalysis, forcing cryptography to do something new, different, and unexpected.
One of the stranger justifications of U.S. export controls is that they prevent the spread of cryptographic expertise. Years ago, the Administration argued that there were no cryptographic products available outside the U.S. When several studies proved that there were hundreds of products designed, built, and marketed outside the U.S., the Administration changed its story.
1999 is a pivotal year for malicious software (malware) such as viruses, worms, and Trojan horses. Although the problem is not new, Internet growth and weak system security have evidently increased the risks.
Viruses and worms survive by moving from computer to computer. Prior to the Internet, computers (and viruses!) communicated relatively slowly, mostly through floppy disks and bulletin boards.
A version of this essay appeared on ZDNet.com.
AES is the Advanced Encryption Standard, the encryption algorithm that will eventually replace DES. In 1997, the U.S. government (NIST, actually) solicited candidate algorithms for this standard. By June 1998 (the submission deadline), NIST had received fifteen submissions.
A version of this essay appeared on ZDNet.com.
The idea is enticing. Just as you can log onto Hotmail with your browser to send and receive e-mail, there are Web sites you can log on to for sending and receiving encrypted e-mail: HushMail, ZipLip, YNN-mail, ZixMail. No software to download and install...it just works.
But how well?
Imagine this situation: An engineer builds a bridge. It stands for a day, and then collapses. He builds another. It stands for three days, and then collapses.
Suppose your doctor said, "I realize we have antibiotics that are good at treating your kind of infection without harmful side effects, and that there are decades of research to support this treatment. But I'm going to give you tortilla-chip powder instead, because, uh, it might work." You'd get a new doctor.
Practicing medicine is difficult. The profession doesn't rush to embrace new drugs; it takes years of testing before benefits can be proven, dosages established, and side effects cataloged.
Last month Intel Corp. announced that its new processor chips would come equipped with an ID number: a unique serial number burned into the chip during manufacture. Intel said that this ID number will help facilitate e-commerce, prevent fraud and promote digital content protection.
Unfortunately, it doesn't do any of these things.
To see the problem, consider this analogy: Imagine that every person was issued a unique identification number on a national ID card.
The following remarks are excerpted from a general session presentation delivered at CSI's NetSec Conference in St. Louis, MO, on June 15th, 1999.
At Counterpane Systems, we evaluate security products and systems for a living. We do a lot of breaking of things for manufacturers and other clients. Over the years, I've built a body of lore about the ways things tend to fail. I want to share my "top 20 list" of what's wrong with security products these days.
1998 was an exciting year to be a cryptographer, considering all the developments in algorithms, attacks and politics. At first glance, the important events of the year seem completely unrelated: done by different people, at different times and for different reasons. But when we step back and reflect on the year-that-was, some common threads emerge -- as do important lessons about the evolution and direction of cryptography.
New Algorithms
In June, the NSA declassified KEA and Skipjack.
Key recovery is like trying to fit a square peg into a round hole. No matter how much you finagle it, it's simply not going to work.
In the September issue of Information Security, Commerce Undersecretary William Reinsch suggests that U.S. crypto export policy hinges on the concept of "balance" (Q&A: "Crypto's Key Man").
For key recovery policy to be successful, he argues, it must achieve a balance between privacy and access, between the needs of consumers and the requirements of the law-enforcement community.
For those who have followed the key recovery debate, Reinsch's comments will have a familiar ring.
Today's faster, less expensive computers can crack current encryption algorithms more easily than ever before. So what's next?
Cryptographic algorithms have a way of degrading over time. It's a situation that most techies aren't used to: Compression algorithms don't compress less as the years go by, and sorting algorithms don't sort slower.
GCHQ, the British equivalent of the U.S. NSA, released a document on December 1, 1997, claiming to have invented public-key cryptography several years before it was discovered by the research community (http://www.cesg.gov.uk/ellisint.htm). According to the paper, GCHQ discovered both RSA and Diffie-Hellman, then kept their discoveries secret.
James Ellis, the author of the paper (who died a few days before its release), wrote that he was inspired by an unknown Bell Telephone Labs researcher during World War II.
Unlike site-to-site VPNs, where remote offices are hard-wired to a central facility firewall, remote access VPNs are fraught with security problems. Much of the security consists of trusted passwords that traveling workers use on their notebook computers.
To be effective, a VPN's security implementation must be user-friendly while not penalizing your enterprise in other ways, such as by degrading network performance or compromising corporate control of the remote access network.
Think of the lock on the front door of your home.
Magazine articles like to describe cryptography products in terms of algorithms and key length. Algorithms make good sound bites: they can be explained in a few words and they're easy to compare with one another. "128-bit keys mean good security." "Triple-DES means good security." "40-bit keys mean weak security." "2048-bit RSA is better than 1024-bit RSA."
But reality isn't that simple. Longer keys don't always mean more security.
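The brute-force dimension of the key-length sound bite is easy to quantify; the essay's point is that this is the only dimension the sound bite captures. The sketch below is illustrative arithmetic only, assuming a hypothetical attacker testing a trillion keys per second; the attacker speed is an assumption, not a claim about any real machine.

```python
# Illustrative arithmetic: the brute-force cost of different key lengths.
# KEYS_PER_SECOND is a hypothetical attacker speed, chosen for illustration.
KEYS_PER_SECOND = 10**12
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(bits):
    """Average years to find a key by exhaustive search (half the keyspace)."""
    return (2 ** (bits - 1)) / KEYS_PER_SECOND / SECONDS_PER_YEAR

for bits in (40, 56, 128):
    print(f"{bits}-bit key: {years_to_exhaust(bits):.2e} years")
```

A 40-bit key falls in under a second at this rate, while a 128-bit key is out of reach of exhaustive search entirely; yet, as the essay argues, none of this says anything about the rest of the system's security.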
Never underestimate the time and effort attackers will expend to thwart your security systems.
These days, security is on the minds of anyone involved in building or using information systems. After all, every form of commerce has had its share of fraud, from farmers rigging their weight scales to counterfeiters passing off phony currency. Electronic commerce is no exception, with fraud taking the form of forgery, misrepresentation, and denial of service. And it doesn't stop with electronic transactions.
From e-mail to cellular communications, from secure Web access to digital cash, cryptography is an essential part of today's information systems. Cryptography helps provide accountability, fairness, accuracy, and confidentiality. It can prevent fraud in electronic commerce and assure the validity of financial transactions. It can protect your anonymity or prove your identity.
You may have just started using the Internet for your business, but scientists, academics, and computer programmers have been using it for years. It was designed specifically as a public network for sharing information. Because the availability of information was the priority, provisions for data security were not considered essential. But now that you're sending proprietary business information over the Internet, that openness can become a drawback.
The U.S. State Department recently ruled that some forms of electronic speech are not protected by the First Amendment and can be prohibited from export. This decision raises questions about freedom of speech on the information superhighway. As business communications continue to migrate from paper mail to electronic mail, these questions will become more important.
Good news! The federal government respects and is working to protect your privacy... just as long as you don't want privacy from the government itself.
In April 1994, the Clinton administration, cleaning up old business from the Bush administration, introduced a new cryptography initiative that ensures the government's ability to conduct electronic surveillance.
Macintosh users ignore computer viruses at their peril. Viruses can cause irreparable damage to the system or destroy megabytes of data. Fortunately, unlike their biological namesakes, computer viruses are relatively easy and painless to control. With a leading virus-protection software program, it takes only a few minutes a day to remain virus-free.
"Protecting yourself from Mac virus infection is easy; it's a wonder there are people who don't do it," said Ben Liberman, independent Macintosh consultant in Chicago. There are several good anti-viral software packages, both commercial and free, designed to protect your Mac from attack.
There are two types of anti-viral software: protective and detective. The commercial virus-prevention packages (Central Point Software Inc.'s Central Point Anti-Virus for Macintosh 2.0, Symantec Corp.'s Symantec Anti-Virus for Macintosh 3.5 and Datawatch Corp.'s Virex 4.1) provide both protection and detection.
In April, the Clinton administration, cleaning up business left over from the Bush administration, introduced a cryptography initiative that gives government the ability to conduct electronic surveillance. The first fruit of this initiative is Clipper, a National Security Agency (NSA)-designed, tamper-resistant VLSI chip. The stated purpose of this chip is to secure telecommunications.
Clipper uses a classified encryption algorithm.
Security problems have become almost as commonplace as desktop computers. A disgruntled city employee, trying to get back at the boss, digs into the mayor's personal files and sends damaging information to the press. A woman asks her computer-expert husband to recover an accidentally deleted budget file; he recovers not only that file, but purposely deleted letters to an illicit lover. Or a major corporation loses critical financial data to an industrial spy who dialed in to a company file server.
Convincing people to back up their hard disks is a universal struggle. Most people make backups irregularly, if at all. And whether or not the backups are labeled or even if they can be used to restore data in the event of a disk crash is usually the responsibility of the individual user.
As companies downsize their computing centers, more critical applications are moving from mainframe computers to networked microcomputers.
System 7 and the Mac were designed for ease of use, not security. Networked Macs suffer from many security risks that stand-alone machines don't, and, unlike mainframe systems, there is no central computing machine from which to control access.
AppleTalk is a dynamic "plug-and-play" system - any Mac can plug into an existing network and immediately become part of it. AppleTalk also is a peer-to-peer system - any Mac can access resources on, send files to and exchange messages with any other machine.
Macs sitting alone on desert islands don't catch viruses. Even Macs whose users frequently trade disks with each other can be protected easily. With Macs on large networks, however, virus prevention can be a lot more complicated.
"If you have a published volume on your hard disk, someone can drop a virus on your machine without your knowledge," said Jeffrey Shulman, author of Virus Detective and Virus Blockade and president of Shulman Software Co. of Morgentown, W.Va.
Large buildings are often built with fire walls -- fire-resistant barriers between vital parts. A fire may burn out one section of the building, but the fire wall will stop it from spreading. The same philosophy can protect Macintosh networks from unauthorized access and network faults.
A network fire wall usually is nothing more than a router configured to prevent certain network packets from traveling between parts of the network.
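The "router configured to prevent certain packets" idea can be modeled as an ordered rule list that each packet is checked against. This is a toy sketch of that idea only, not any real router's configuration; the rule fields and addresses are made up for illustration.

```python
# Toy model of packet filtering: the router forwards or drops each packet
# according to the first matching rule. Addresses and ports are illustrative.
RULES = [
    {"action": "drop",  "src_net": "192.168.2.", "dst_port": 515},   # block one segment's printing traffic
    {"action": "allow", "src_net": "192.168.",   "dst_port": None},  # internal traffic passes
    {"action": "drop",  "src_net": "",           "dst_port": None},  # default deny for everything else
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first rule the packet matches."""
    for rule in RULES:
        if src_ip.startswith(rule["src_net"]) and \
           rule["dst_port"] in (None, dst_port):
            return rule["action"]
    return "drop"
```

The ordering matters: a packet is judged by the first rule it matches, so specific prohibitions go before general permissions, with a catch-all deny at the end.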
Whether you're protecting a nuclear missile or your new recipe for burger sauce, polynomial encryption can prevent people from stealing your secrets.
Let's say you've invented a new, extra-gooey, extra-sweet creme filling; or a burger sauce that is even more tasteless than before. This stuff is important; you have to keep the recipe secret. You can tell only your most trusted employees the exact mixture of ingredients, but what if one of them defects to the competition? Before long, every grease palace on the block would be making burgers as tasteless as yours.
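The polynomial idea the essay describes is a threshold scheme: encode the secret as the constant term of a random polynomial, hand out points on the polynomial as shares, and interpolate to recover it. This is a minimal Shamir-style sketch under assumed parameters (a deliberately small prime field, illustrative threshold and share count), not a production implementation.

```python
# Minimal sketch of polynomial (threshold) secret sharing over a prime field.
# P is a small prime chosen for illustration only; real systems use large primes.
import random

P = 2**13 - 1  # 8191

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them suffice to recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 yields the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With a threshold of k, any k-1 shares reveal nothing about the secret, which is exactly the property you want when no single employee should be able to walk off with the recipe.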
MacWEEK Special Report: Emerging Technologies
Back when computers stood alone on desks, unconnected to the rest of the world, computer security was simply a matter of locking an office door, putting a lock on the power supply or installing a security software package. Today, the rules of computer security are changing, and in years to come, it's going to be a whole new ball game.
What used to be the concern solely of the military is required by more and more companies. "Between LANs, file servers and dial-up connections, it's hard to regulate who has access to what," said Steven Bass, principal software engineer at Codex Corp., a division of Motorola Inc. in Canton, Mass.