Entries Tagged “network security”


Software Monoculture

In 2003, a group of security experts — myself included — published a paper saying that 1) software monocultures are dangerous and 2) Microsoft, being the largest creator of monocultures out there, is the most dangerous. Marcus Ranum responded with an essay that basically said we were full of it. Now, eight years later, Marcus and I thought it would be interesting to revisit the debate.

The basic problem with a monoculture is that it’s all vulnerable to the same attack. The Irish Potato Famine of 1845–9 is perhaps the most famous monoculture-related disaster. The Irish planted only one variety of potato, and the genetically identical potatoes succumbed to a rot caused by Phytophthora infestans. Compare that with the diversity of potatoes traditionally grown in South America, each one adapted to the particular soil and climate of its home, and you can see the security value in heterogeneity.

Similar risks exist in networked computer systems. If everyone is using the same operating system or the same applications software or the same networking protocol, and a security vulnerability is discovered in that OS or software or protocol, a single exploit can affect everyone. This is the problem of large-scale Internet worms: many have affected millions of computers on the Internet.

If our networking environment weren’t homogeneous, a single worm couldn’t do so much damage. We’d be more like South America’s potato crop than Ireland’s. Conclusion: monoculture is bad; embrace diversity or die along with everyone else.

This analysis makes sense as far as it goes, but suffers from three basic flaws. The first is the assumption that our IT monoculture is as simple as the potato’s. When the particularly virulent Storm worm hit, it only affected from 1–10 million of its billion-plus possible victims. Why? Because some computers were running updated antivirus software, or were within locked-down networks, or whatever. Two computers might be running the same OS or applications software, but they’ll be inside different networks with different firewalls and IDSs and router policies, they’ll have different antivirus programs and different patch levels and different configurations, and they’ll be in different parts of the Internet connected to different servers running different services. As Marcus pointed out back in 2003, they’ll be a little bit different themselves. That’s one of the reasons large-scale Internet worms don’t infect everyone — as well as the network’s ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.

The second flaw in the monoculture analysis is that it downplays the cost of diversity. Sure, it would be great if a corporate IT department ran half Windows and half Linux, or half Apache and half Microsoft IIS, but doing so would require more expertise and cost more money. It wouldn’t cost twice the expertise and money — there is some overlap — but there are significant economies of scale that result from everyone using the same software and configuration. A single operating system locked down by experts is far more secure than two operating systems configured by sysadmins who aren’t so expert. Sometimes, as Mark Twain said: “Put all your eggs in one basket, and then guard that basket!”

The third flaw is that you can only get a limited amount of diversity by using two operating systems, or routers from three vendors. South American potato diversity comes from hundreds of different varieties. Genetic diversity comes from millions of different genomes. In monoculture terms, two is little better than one. Even worse, since a network’s security is primarily the minimum of the security of its components, a diverse network is less secure because it is vulnerable to attacks against any of its heterogeneous components.

Some monoculture is necessary in computer networks. As long as we have to talk to each other, we’re all going to have to use TCP/IP, HTML, PDF, and all sorts of other standards and protocols that guarantee interoperability. Yes, there will be different implementations of the same protocol — and this is a good thing — but that won’t protect you completely. You can’t be too different from everyone else on the Internet, because if you were, you couldn’t be on the Internet.

Species basically have two options for propagating their genes: the lobster strategy and the avian strategy. Lobsters lay 5,000 to 40,000 eggs at a time, and essentially ignore them. Only a minuscule percentage of the hatchlings live to be four weeks old, but that’s sufficient to ensure gene propagation; from every 50,000 eggs, an average of two lobsters is expected to survive to legal size. Conversely, birds produce only a few eggs at a time, then spend a lot of effort ensuring that most of the hatchlings survive. In ecology, this is known as r/K selection theory. In either case, each of those offspring varies slightly genetically, so if a new threat arises, some of them will be more likely to survive. But even so, extinctions happen regularly on our planet; neither strategy is foolproof.

Our IT infrastructure is a lot more like a bird than a lobster. Yes, monoculture is dangerous and diversity is important. But investing time and effort in ensuring our current infrastructure’s survival is even more important.

This essay was originally published in Information Security, and is the first half of a point/counterpoint with Marcus Ranum. You can read his response there as well.

EDITED TO ADD (12/13): Commentary.

Posted on December 1, 2010 at 5:55 AM

WPA Cracking in the Cloud

It’s a service:

The mechanism used involves captured network traffic, which is uploaded to the WPA Cracker service and subjected to an intensive brute force cracking effort. As advertised on the site, what would be a five-day task on a dual-core PC is reduced to a job of about twenty minutes on average. For the more “premium” price of $35, you can get the job done in about half the time. Because it is a dictionary attack using a predefined 135-million-word list, there is no guarantee that you will crack the WPA key, but such an extensive dictionary attack should be sufficient for any but the most specialized penetration testing purposes.

[…]

It gets even better. If you try the standard 135-million-word dictionary and do not crack the WPA encryption on your target network, there is an extended dictionary that contains an additional 284 million words. In short, serious brute force wireless network encryption cracking has become a retail commodity.

FAQ here.
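What makes each guess expensive enough to be worth outsourcing is WPA's key-derivation step: the 256-bit Pairwise Master Key is computed from the passphrase and the network's SSID with 4,096 rounds of PBKDF2-HMAC-SHA1. A minimal sketch in Python of the dictionary attack the service runs (the SSID, passphrase, and wordlist here are invented for illustration; a real attack verifies each guess against the MIC in a captured four-way handshake rather than against the PMK directly):

```python
import hashlib
import hmac

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-PSK derives the 256-bit Pairwise Master Key from the
    # passphrase using PBKDF2-HMAC-SHA1, salted with the SSID, over
    # 4096 iterations -- this is the deliberately slow step.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(candidates, ssid: str, target_pmk: bytes):
    # Try each candidate passphrase in turn. The per-guess cost is
    # dominated by the ~4096 HMAC-SHA1 iterations inside PBKDF2,
    # which is exactly what a cloud cluster parallelizes.
    for word in candidates:
        if hmac.compare_digest(wpa_pmk(word, ssid), target_pmk):
            return word
    return None

# Toy demonstration with a made-up network:
ssid = "linksys"
secret = wpa_pmk("correcthorse", ssid)
print(dictionary_attack(["password", "12345678", "correcthorse"], ssid, secret))
```

Because the salt is the SSID, precomputed tables only help for common network names; everything else has to be ground out per-network, which is the work being sold here.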

In related news, there might be a man-in-the-middle attack possible against the WPA2 protocol. Man-in-the-middle attacks are potentially serious, but it depends on the details — and they’re not available yet.

EDITED TO ADD (8/8): Details about the MITM attack.

Posted on July 27, 2010 at 6:43 AM

The NSA’s Perfect Citizen

In what creepy back room do they come up with these names?

The federal government is launching an expansive program dubbed “Perfect Citizen” to detect cyber assaults on private companies and government agencies running such critical infrastructure as the electricity grid and nuclear-power plants, according to people familiar with the program.

The surveillance by the National Security Agency, the government’s chief eavesdropping agency, would rely on a set of sensors deployed in computer networks for critical infrastructure that would be triggered by unusual activity suggesting an impending cyber attack, though it wouldn’t persistently monitor the whole system, these people said.

No reason to be alarmed, though. The NSA claims that this is just research.

Posted on July 16, 2010 at 5:19 AM

Data at Rest vs. Data in Motion

For a while now, I’ve pointed out that cryptography is singularly ill-suited to solve the major network security problems of today: denial-of-service attacks, website defacement, theft of credit card numbers, identity theft, viruses and worms, DNS attacks, network penetration, and so on.

Cryptography was invented to protect communications: data in motion. This is how cryptography was used throughout most of history, and this is how the militaries of the world developed the science. Alice was the sender, Bob the receiver, and Eve the eavesdropper. Even when cryptography was used to protect stored data — data at rest — it was viewed as a form of communication. In “Applied Cryptography,” I described encrypting stored data in this way: “a stored message is a way for someone to communicate with himself through time.” Data storage was just a subset of data communication.

In modern networks, the difference is much more profound. Communications are immediate and instantaneous. Encryption keys can be ephemeral, and systems like the STU-III telephone can be designed such that encryption keys are created at the beginning of a call and destroyed as soon as the call is completed. Data storage, on the other hand, occurs over time. Any encryption keys must exist as long as the encrypted data exists. And storing those keys becomes as important as storing the unencrypted data was. In a way, encryption doesn’t reduce the number of secrets that must be stored securely; it just makes them much smaller.

Historically, the reason key management worked for stored data was that the key could be stored in a secure location: the human brain. People would remember keys and, barring physical and emotional attacks on the people themselves, would not divulge them. In a sense, the keys were stored in a “computer” that was not attached to any network. And there they were safe.

This whole model falls apart on the Internet. Much of the data stored on the Internet is only peripherally intended for use by people; it’s primarily intended for use by other computers. And therein lies the problem. Keys can no longer be stored in people’s brains. They need to be stored on the same computer, or at least the network, that the data resides on. And that is much riskier.

Let’s take a concrete example: credit card databases associated with websites. Those databases are not encrypted because it doesn’t make any sense. The whole point of storing credit card numbers on a website is so it’s accessible — so each time I buy something, I don’t have to type it in again. The website needs to dynamically query the database and retrieve the numbers, millions of times a day. If the database were encrypted, the website would need the key. But if the key were on the same network as the data, what would be the point of encrypting it? Access to the website equals access to the database in either case. Security is achieved by good access control on the website and database, not by encrypting the data.

The same reasoning holds true elsewhere on the Internet as well. Much of the Internet’s infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography.

Cryptography has inherent mathematical properties that greatly favor the defender. Adding a single bit to the length of a key adds only a slight amount of work for the defender, but doubles the amount of work the attacker has to do. Doubling the key length doubles the amount of work the defender has to do (if that — I’m being approximate here), but increases the attacker’s workload exponentially. For many years, we have exploited that mathematical imbalance.
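The imbalance is easy to see with a toy brute-force model (the numbers are illustrative; real attacks are rarely pure key search):

```python
# Toy model of the defender/attacker asymmetry in brute-force key search.
def attacker_work(key_bits: int) -> int:
    # An attacker trying keys at random tests half the keyspace on average.
    return 2 ** (key_bits - 1)

# One extra bit roughly doubles the attacker's expected work...
assert attacker_work(129) == 2 * attacker_work(128)

# ...while doubling the key length squares the size of the keyspace,
# even though the defender's per-message cost grows only modestly.
assert 2 ** 256 == (2 ** 128) ** 2

print(attacker_work(128))  # about 1.7e38 trial decryptions
```

That is the mathematical head start the defender enjoys with cryptography, and exactly what ordinary computer security lacks.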

Computer security is much more balanced. There’ll be a new attack, and a new defense, and a new attack, and a new defense. It’s an arms race between attacker and defender. And it’s a very fast arms race. New vulnerabilities are discovered all the time. The balance can tip from defender to attacker overnight, and back again the night after. Computer security defenses are inherently very fragile.

Unfortunately, this is the model we’re stuck with. No matter how good the cryptography is, there is some other way to break into the system. Recall how the FBI read the PGP-encrypted email of a suspected Mafia boss several years ago. They didn’t try to break PGP; they simply installed a keyboard sniffer on the target’s computer. Notice that SSL- and TLS-encrypted web communications are increasingly irrelevant in protecting credit card numbers; criminals prefer to steal them by the hundreds of thousands from back-end databases.

On the Internet, communications security is much less important than the security of the endpoints. And increasingly, we can’t rely on cryptography to solve our security problems.

This essay originally appeared on DarkReading. I wrote it in 2006, but lost it on my computer for four years. I hate it when that happens.

EDITED TO ADD (7/14): As several readers pointed out, I overstated my case when I said that encrypting credit card databases, or any database in constant use, is useless. In fact, there is value in encrypting those databases, especially if the encryption appliance is separate from the database server. In this case, the attacker has to steal both the encryption key and the database. That’s a harder hacking problem, and this is why credit-card database encryption is mandated within the PCI security standard. Given how good encryption performance is these days, it’s a smart idea. But while encryption makes it harder to steal the data, it is only harder in a computer-security sense and not in a cryptography sense.

Posted on June 30, 2010 at 12:53 PM

DARPA Research into Clean-Slate Network Security Redesign

This looks like a good research direction:

Is it possible that given a clean slate and likely millions of dollars, engineers could come up with the ultimate in secure network technology? The scientists at the Defense Advanced Research Projects Agency (DARPA) think so and this week announced the Clean Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program that looks to lean heavily on human biology to develop super-smart, highly adaptive, supremely secure networks.

For example, the CRASH program looks to translate human immune system strategies into computational terms. In the human immune system multiple independent mechanisms constantly monitor the body for pathogens. Even at the cellular level, multiple redundant mechanisms monitor and repair the structure of the DNA. These mechanisms consume tons of resources, but let the body continue functioning and repair the damage caused by malfunctions and infectious agents, DARPA stated.

Posted on June 9, 2010 at 12:59 PM

Lt. Gen. Alexander and the U.S. Cyber Command

Lt. Gen. Keith Alexander, the current Director of NSA, has been nominated to head the US Cyber Command. Last week Alexander appeared before the Senate Armed Services Committee to answer questions.

The Chairman of the Armed Services Committee, Senator Carl Levin (D-Michigan), began by posing three scenarios to Lieutenant General Alexander:

Scenario 1. A traditional operation against an adversary, country “C”. What rules of engagement would prevail to counter cyberattacks emanating from that country?

Answer: Under Title 10, an “execute” order approved by the President and the Joint Chiefs would presumably grant the theater commander full leeway to defend US military networks and to counter attack.

Title 10 is the legal framework under which the US military operates.

Scenario 2. Same as before but the cyberattacks emanate from a neutral third country.

Answer: Additional authority would have to be granted.

Scenario 3. “Assume you’re in a peacetime setting now. All of a sudden we’re hit with a major attack against the computers that manage the distribution of electric power in the United States. Now, the attacks appear to be coming from computers outside the United States, but they are being routed through computers that are owned by U.S. persons located in the United States, so the routers are in here, in the United States.

Now, how would CYBERCOM respond to that situation and under what authorities?”

Answer: That would be the responsibility of the Department of Homeland Security (DHS) and the FBI.

Alexander was repeatedly asked about privacy and civil liberties impact of his new role, and gave answers that were, well, full of platitudes but essentially uninformative.

He also played up the threat, saying that U.S. military networks are seeing “hundreds of thousands of probes a day,” whatever that means.

Prior to the hearing, Alexander answered written questions from the committee. Particularly interesting are his answers to questions 24 and 27.

24. Explaining Cybersecurity Plans to the American People

The majority of the funding for the multi-billion dollar Comprehensive National Cybersecurity Initiative (CNCI) is contained in the classified National Intelligence Program budget, which is reviewed and approved by the congressional intelligence committees. Almost all important aspects of the CNCI remain highly classified, including the implementation plan for the Einstein 3 intrusion detection and prevention system. It is widely perceived that the Department of Homeland Security is actually likely to simply extend the cyber security system that the NSA developed for DOD into the civilian and even the private sector for defense of critical infrastructure. DOD is creating a sub-unified Cyber Command with the Director of NSA as its Commander.

24a) In your view, are we risking creating the perception, at home and abroad, that the U.S. government’s dominant interests and objectives in cyberspace are intelligence- and military-related, and if so, is this a perception that we want to exist?

(U) No, I don’t believe we are risking creating this perception as long as we communicate clearly to the American people—and the world—regarding our interests and objectives.

24b) Based on your experience, are the American people likely to accept deployment of classified methods of monitoring electronic communications to defend the government and critical infrastructure without explaining basic aspects of how this monitoring will be conducted and how it may affect them?

(U) I believe the government and the American people expect both NSA and U.S. Cyber Command to support the cyber defense of our nation. Our support does not in any way suggest that we would be monitoring Americans.

(U) I don’t believe we should ask the public to accept blindly some unclear “classified” method. We need to be transparent and communicate to the American people about our objectives to address the national security threat to our nation—the nature of the threat, our overall approach, and the roles and responsibilities of each department and agency involved—including NSA and the Department of Defense. I am personally committed to this transparency, and I know that the Department of Defense, the Intelligence Community, and the rest of the Administration are as well. What needs to remain classified, and I believe that the American people will accept this as reasonable, are the specific foreign threats that we are looking for and how we identify them, and what actions we take when they are identified. For these areas, the American people have you, their elected representatives, to provide the appropriate oversight on their behalf.

(U) Remainder of answer provided in the classified supplement.

24c) What are your views as to the necessity and desirability of maintaining the current level of classification of the CNCI?

(U) In recent months, we have seen an increasing amount of information being shared by the Administration and the departments and agencies on the CNCI and cybersecurity in general, which I believe is consistent with our commitment to transparency. I expect that trend to continue, and personally believe and support this transparency as a foundational element of the dialogue that we need to have with the American people on cybersecurity.

[…]

27. Designing the Internet for Better Security

Cyber security experts emphasize that the Internet was not designed for security.

27a) How could the Internet be designed differently to provide much greater inherent security?

(U) The design of the Internet is—and will continue to evolve—based on technological advancements. These new technologies will enhance mobility and, if properly implemented, security. It is in the best interest of both government and industry to consider security more prominently in this evolving future Internet architecture. If confirmed, I look forward to working with this Committee, as well as industry leaders, academia, the services, and DOD agencies on these important concerns.

27b) Is it practical to consider adopting those modifications?

(U) Answer provided in the classified supplement.

27c) What would the impact be on privacy, both pro and con?

(U) Answer provided in the classified supplement.

The Electronic Privacy Information Center has filed a Freedom of Information Act request for that classified supplement. I doubt we’ll get it, though.

The U.S. Cyber Command was announced by Secretary of Defense Robert Gates in June 2009. It’s supposed to be operational this year.

Posted on April 19, 2010 at 1:26 PM

Electronic Health Record Security Analysis

In British Columbia:

When Auditor-General John Doyle and his staff investigated the security of electronic record-keeping at the Vancouver Coastal Health Authority, they found trouble everywhere they looked.

“In every key area we examined, we found serious weaknesses,” wrote Doyle. “Security controls throughout the network and over the database were so inadequate that there was a high risk of external and internal attackers being able to access or extract information without the authority even being aware of it.”

[…]

“No intrusion prevention and detection systems exist to prevent or detect certain types of [online] attacks. Open network connections in common business areas. Dial-in remote access servers that bypass security. Open accounts existing, allowing health care data to be copied even outside the Vancouver Coastal Health Care authority at any time.”

More than 4,000 users were found to have access to the records in the database, many of them at a far higher level than necessary.

[…]

“Former client records and irrelevant records for current clients are still accessible to system users. Hundreds of former users, both employees and contractors, still have access to resources through active accounts, network accounts, and virtual private network accounts.”

While this report is from Canada, the same issues apply to any electronic patient record system in the U.S. What I find really interesting is that the Canadian government actually conducted a security analysis of the system, rather than just maintaining that everything would be fine. I wish the U.S. would do something similar.

The report, “The PARIS System for Community Care Services: Access and Security,” is here.

Posted on March 23, 2010 at 12:23 PM

Comprehensive National Cybersecurity Initiative

On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:

  • Initiative #1. Manage the Federal Enterprise Network as a single network enterprise with Trusted Internet Connections.
  • Initiative #2. Deploy an intrusion detection system of sensors across the Federal enterprise.
  • Initiative #3. Pursue deployment of intrusion prevention systems across the Federal enterprise.
  • Initiative #4. Coordinate and redirect research and development (R&D) efforts.
  • Initiative #5. Connect current cyber ops centers to enhance situational awareness.
  • Initiative #6. Develop and implement a government-wide cyber counterintelligence (CI) plan.
  • Initiative #7. Increase the security of our classified networks.
  • Initiative #8. Expand cyber education.
  • Initiative #9. Define and develop enduring “leap-ahead” technology, strategies, and programs.
  • Initiative #10. Define and develop enduring deterrence strategies and programs.
  • Initiative #11. Develop a multi-pronged approach for global supply chain risk management.
  • Initiative #12. Define the Federal role for extending cybersecurity into critical infrastructure domains.

While this transparency is good, in this sort of thing the devil is in the details — and we don’t have any details. We also don’t have any information about the legal authority for cybersecurity, and how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.

Posted on March 4, 2010 at 12:55 PM

Reacting to Security Vulnerabilities

Last month, researchers found a security flaw in the SSL protocol, which is used to protect sensitive web data. The protocol is used for online commerce, webmail, and social networking sites. Basically, hackers could hijack an SSL session and execute commands without the knowledge of either the client or the server. The list of affected products is enormous.

If this sounds serious to you, you’re right. It is serious. Given that, what should you do now? Should you not use SSL until it’s fixed, and only pay for internet purchases over the phone? Should you download some kind of protection? Should you take some other remedial action? What?

If you read the IT press regularly, you’ll see this sort of question again and again. The answer for this particular vulnerability, as for pretty much any other vulnerability you read about, is the same: do nothing. That’s right, nothing. Don’t panic. Don’t change your behavior. Ignore the problem, and let the vendors figure it out.

There are several reasons for this. One, it’s hard to figure out which vulnerabilities are serious and which are not. Vulnerabilities such as this happen multiple times a month. They affect different software, different operating systems, and different web protocols. The press either mentions them or not, somewhat randomly; just because it’s in the news doesn’t mean it’s serious.

Two, it’s hard to figure out if there’s anything you can do. Many vulnerabilities affect operating systems or Internet protocols. The only sure fix would be to avoid using your computer. Some vulnerabilities have surprising consequences. The SSL vulnerability mentioned above could be used to hack Twitter. Did you expect that? I sure didn’t.

Three, the odds of a particular vulnerability affecting you are small. There are a lot of fish in the Internet, and you’re just one of billions.

Four, often you can’t do anything. These vulnerabilities affect clients and servers, individuals and corporations. A lot of your data isn’t under your direct control — it’s on your web-based email servers, in some corporate database, or in a cloud computing application. If a vulnerability affects the computers running Facebook, for example, your data is at risk, whether you log in to Facebook or not.

It’s much smarter to have a reasonable set of default security practices and continue doing them. This includes:

1. Install an antivirus program if you run Windows, and configure it to update daily. It doesn’t matter which one you use; they’re all about the same. For Windows, I like the free version of AVG Internet Security. Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.

2. Configure your OS and network router properly. Microsoft’s operating systems come with a lot of security enabled by default; this is good. But have someone who knows what they’re doing check the configuration of your router, too.

3. Turn on automatic software updates. This is the mechanism by which your software patches itself in the background, without you having to do anything. Make sure it’s turned on for your computer, OS, security software, and any applications that have the option. Yes, you have to do it for everything, as they often have separate mechanisms.

4. Show common sense regarding the Internet. This might be the hardest thing, and the most important. Know when an email is real, and when you shouldn’t click on the link. Know when a website is suspicious. Know when something is amiss.

5. Perform regular backups. This is vital. If you’re infected with something, you may have to reinstall your operating system and applications. Good backups ensure you don’t lose your data — documents, photographs, music — if that becomes necessary.

That’s basically it. I could give a longer list of safe computing practices, but this short one is likely to keep you safe. After that, trust the vendors. They spent all last month scrambling to fix the SSL vulnerability, and they’ll spend all this month scrambling to fix whatever new vulnerabilities are discovered. Let that be their problem.

Posted on December 10, 2009 at 1:13 PM

North Korean Cyberattacks

To hear the media tell it, the United States suffered a major cyberattack last week. Stories were everywhere. “Cyber Blitz hits U.S., Korea” was the headline in Thursday’s Wall Street Journal. North Korea was blamed.

Where were you when North Korea attacked America? Did you feel the fury of North Korea’s armies? Were you fearful for your country? Or did your resolve strengthen, knowing that we would defend our homeland bravely and valiantly?

My guess is that you didn’t even notice, that — if you didn’t open a newspaper or read a news website — you had no idea anything was happening. Sure, a few government websites were knocked out, but that’s not alarming or even uncommon. Other government websites were attacked but defended themselves, the sort of thing that happens all the time. If this is what an international cyberattack looks like, it hardly seems worth worrying about at all.

Politically motivated cyber attacks are nothing new. We’ve seen UK vs. Ireland. Israel vs. the Arab states. Russia vs. several former Soviet Republics. India vs. Pakistan, especially after the nuclear bomb tests in 1998. China vs. the United States, especially in 2001 when a U.S. spy plane collided with a Chinese fighter jet. And so on and so on.

The big one happened in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial. The networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and — in many cases — shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar, but after two years there is still no evidence that the Russian government was involved. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were angry over the statue incident.

Poke at any of these international incidents, and what you find are kids playing politics. Last Wednesday, South Korea’s National Intelligence Service admitted that it didn’t actually know that North Korea was behind the attacks: “North Korea or North Korean sympathizers in the South” was what it said. Once again, it’ll be kids playing politics.

This isn’t to say that cyberattacks by governments aren’t an issue, or that cyberwar is something to be ignored. The constant attacks by Chinese nationals against U.S. networks may not be government-sponsored, but it’s pretty clear that they’re tacitly government-approved. Criminals, from lone hackers to organized crime syndicates, attack networks all the time. And war expands to fill every possible theater: land, sea, air, space, and now cyberspace. But cyberterrorism is nothing more than a media invention designed to scare people. And for there to be a cyberwar, there first needs to be a war.

Israel is currently considering attacking Iran in cyberspace, for example. If it tries, it’ll discover that attacking computer networks is an inconvenience to the nuclear facilities it’s targeting, but doesn’t begin to substitute for bombing them.

In May, President Obama gave a major speech on cybersecurity. He was right when he said that cybersecurity is a national security issue, and that the government needs to step up and do more to prevent cyberattacks. But he couldn’t resist hyping the threat with scare stories: "In one of the most serious cyber incidents to date against our military networks, several thousand computers were infected last year by malicious software — malware," he said. What he didn’t add was that those infections occurred because the Air Force couldn’t be bothered to keep its patches up to date.

This is the face of cyberwar: easily preventable attacks that, even when they succeed, only a few people notice. Even this current incident is turning out to be a sloppily modified five-year-old worm that no modern network should still be vulnerable to.

Securing our networks doesn’t require some secret advanced NSA technology. It’s the boring network security administration stuff we already know how to do: keep your patches up to date, install good anti-malware software, correctly configure your firewalls and intrusion-detection systems, monitor your networks. And while some government and corporate networks do a pretty good job at this, others fail again and again.

Enough of the hype and the bluster. The news isn’t the attacks, but that some networks had security lousy enough to be vulnerable to them.

This essay originally appeared on the Minnesota Public Radio website.

Posted on July 13, 2009 at 11:45 AM
