Entries Tagged "network security"

Page 7 of 11

The Hidden Benefits of Network Attack

An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:

This Note argues that computer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack—one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.

Posted on September 26, 2006 at 6:42 AM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; getting a university-wide mandate for any security policy can be a major undertaking. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
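To make the idea concrete, here is a minimal sketch, in Python, of how such a tiered trust policy might be written down and reasoned about. The segment names and trust levels are hypothetical, not taken from the essay or from any particular university.

```python
# Hypothetical sketch: modelling the segmentation policy described above.
# Segment names and trust tiers are illustrative examples only.
from enum import IntEnum

class Trust(IntEnum):
    UNTRUSTED = 0   # e.g., student dorms, opt-out research networks
    MANAGED = 1     # departmental networks that follow central policy
    CORE = 2        # centrally run servers holding sensitive data

SEGMENTS = {
    "core-datacenter": Trust.CORE,
    "registrar-lan": Trust.MANAGED,
    "physics-lab": Trust.MANAGED,
    "student-dorms": Trust.UNTRUSTED,
    "open-research": Trust.UNTRUSTED,
}

def allowed(src: str, dst: str) -> bool:
    """Allow connections into the core only from segments that follow policy.

    Untrusted segments may never initiate connections into the core; managed
    segments may, because they have agreed to the core's security policies
    (patching, change control, and so on).
    """
    src_trust, dst_trust = SEGMENTS[src], SEGMENTS[dst]
    if dst_trust == Trust.CORE:
        return src_trust >= Trust.MANAGED
    return True  # traffic toward less-sensitive segments is not restricted here

if __name__ == "__main__":
    for src in SEGMENTS:
        verdict = "allow" if allowed(src, "core-datacenter") else "deny"
        print(f"{src:16s} -> core-datacenter: {verdict}")
```

In practice the same policy would be enforced with firewall rules and VLAN configuration rather than application code; the point is only that once the trust tiers are written down explicitly, they are easy to audit and to hold departments to.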

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October 2006 issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

Organized Cybercrime

Cybercrime is getting organized:

Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.

Christopher Painter, deputy chief of the computer crimes and intellectual property section at the Department of Justice, said there had been a distinct shift in recent years in the type of cybercriminals that online detectives now encounter.

“There has been a change in the people who attack computer networks, away from the ‘bragging hacker’ toward those driven by monetary motives,” Painter told Reuters in an interview this week.

Although media reports often focus on stories about teenage hackers tracked down in their bedroom, the greater danger lies in the more anonymous virtual interlopers.

“There are still instances of these ‘lone-gunman’ hackers but more and more we are seeing organized criminal groups, groups that are often organized online targeting victims via the internet,” said Painter, in London for a cybercrime conference.

I’ve been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don’t think this article is fear and hype; it’s a real problem.

Posted on September 19, 2006 at 7:16 AM

Securing Wireless Networks with Stickers

Does anyone think this California almost-law (it’s awaiting the governor’s signature) will do any good at all?

From 1 October 2007, manufacturers must place warning labels on all equipment capable of receiving Wi-Fi signals, according to the new state law. These can take the form of box stickers, special notification in setup software, notification during the router setup, or through automatic securing of the connection. One warning sticker must be positioned so that it must be removed by a consumer before the product can be used.

Posted on September 5, 2006 at 1:56 PM

Printer Security

At Black Hat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computer.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network—a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter—which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet approximately 0% of the printer’s users installed that patch? And what about printers whose code can’t be patched?
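For administrators who want to check their own fleet, a simple probe of the management services mentioned above is a reasonable first step. The sketch below is a minimal example, not a full audit tool: the target address is a placeholder, the port list is illustrative, and SNMP (which runs over UDP port 161) would need a separate probe.

```python
# Hypothetical sketch: a quick TCP probe of a printer's management services.
# The address and port list are examples; run it only against devices you own.
import socket

SERVICES = {
    23: "telnet",                  # remote administration, often on by default
    80: "embedded web interface",
    443: "embedded web interface (TLS)",
    9100: "raw print (JetDirect)",
}
# Note: SNMP (UDP 161) will not show up in a plain TCP connect test like this
# one; it needs a UDP probe or an SNMP library.

def probe(host: str, timeout: float = 2.0) -> None:
    """Report which of the listed TCP services accept connections on host."""
    for port, name in SERVICES.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} open            ({name})")
        except OSError:
            print(f"{host}:{port} closed/filtered ({name})")

if __name__ == "__main__":
    probe("192.0.2.10")  # replace with your printer's address
```

Anything that answers on these ports deserves the same patching and hardening attention as a server exposing the same services.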

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM

Security Certifications

I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM

U.S. Navy Patents Firewall

At least, that’s what it sounds like to me:

In a communication system having a plurality of networks, a method of achieving network separation between first and second networks is described. First and second networks with respective first and second degrees of trust are defined, the first degree of trust being higher than the second degree of trust. Communication between the first and second networks is enabled via a network interface system having a protocol stack, the protocol stack implemented by the network interface system in an application layer. Data communication from the second network to the first network is enabled while data communication from the first network to the second network is minimized.
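The abstract reads like a description of a one-way, application-layer gateway between networks of different trust levels. As a rough illustration only—this is not the Navy’s implementation, and the addresses and ports below are made up—here is a toy relay that accepts connections from the lower-trust network and copies data toward the higher-trust one while refusing to relay anything back:

```python
# Hypothetical sketch of a one-way application-layer relay, in the spirit of
# the patent abstract above. It listens on the lower-trust ("second") network
# and forwards bytes to a host on the higher-trust ("first") network only;
# nothing received from the higher-trust side is ever sent back.
import socket
import threading

LOW_TRUST_LISTEN = ("0.0.0.0", 9000)  # example: interface facing the lower-trust network
HIGH_TRUST_DEST = ("10.0.0.5", 9000)  # example: host on the higher-trust network

def relay_one_way(client: socket.socket) -> None:
    """Copy bytes low -> high; discard anything flowing high -> low."""
    with client, socket.create_connection(HIGH_TRUST_DEST) as upstream:
        upstream.shutdown(socket.SHUT_RD)  # refuse to read (and thus relay) any reply
        while True:
            data = client.recv(4096)
            if not data:
                break
            upstream.sendall(data)

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LOW_TRUST_LISTEN)
        srv.listen()
        while True:
            client, _addr = srv.accept()
            threading.Thread(target=relay_one_way, args=(client,), daemon=True).start()

if __name__ == "__main__":
    main()
```

Whether that kind of asymmetric gateway is genuinely novel, or just a firewall by another name, is exactly the question the patent raises.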

Posted on July 7, 2006 at 7:06 AM

