Entries Tagged "computer security"


Fearmongering About Bot Networks

Bot networks are a serious security problem, but this is ridiculous. From the Independent:

The PC in your home could be part of a complex international terrorist network. Without you realising it, your computer could be helping to launder millions of pounds, attacking companies’ websites or cracking confidential government codes.

This is not the stuff of science fiction or a conspiracy theory from a paranoid mind, but a warning from one of the world’s most-respected experts on computer crime. Dr Peter Tippett is chief technology officer at Cybertrust, a US computer security company, and a senior adviser on the issue to President George Bush. His warning is stark: criminals and terrorists are hijacking home PCs over the internet, creating “bot” computers to carry out illegal activities.

Yes, bot networks are bad. They’re used to send spam (both commercial and phishing), launch denial-of-service attacks (sometimes involving extortion), and stage attacks on other systems. Most bot networks are controlled by kids, but more and more criminals are getting into the act.

But your computer, part of an international terrorist network? Get real.

Once a criminal has gathered together what is known as a “herd” of bots, the combined computing power can be dangerous. “If you want to break the nuclear launch code then set a million computers to work on it. There is now a danger of nation state attacks,” says Dr Tippett. “The vast majority of terrorist organisations will use bots.”

I keep reading that last sentence, and wonder if “bots” is just a typo for “bombs.” And the line about bot networks being used to crack nuclear launch codes is nothing more than fearmongering.

Clearly I need to write an essay on bot networks.

Posted on May 17, 2005 at 3:33 PM

The Potential for an SSH Worm

SSH, or secure shell, is the standard protocol for remotely accessing UNIX systems. It’s used everywhere: universities, laboratories, and corporations (particularly in data-intensive back-office services). Thanks to SSH, administrators can stack hundreds of computers close together in air-conditioned rooms and administer them from the comfort of their desks.

When a user’s SSH client first establishes a connection to a remote server, it stores the name of the server and its public key in a known_hosts database. This database of names and keys allows the client to more easily identify the server in the future.

There are risks to this database, though. If an attacker compromises the user’s account, the database can be used as a hit-list of follow-on targets. And if the attacker knows the username, password, and key credentials of the user, these follow-on targets are likely to accept them as well.
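To make the risk concrete, here is a minimal Python sketch of how a plaintext known_hosts file doubles as a target list. The hostnames, addresses, and key material below are hypothetical, and the parsing is simplified.

    # Minimal sketch: a plaintext known_hosts file read as a target list.
    # Hostnames, addresses, and key material here are hypothetical.
    sample_lines = [
        "server1.example.edu,10.0.0.5 ssh-rsa AAAAB3NzaC1yc2EAAAAB...",
        "backup.example.edu ssh-dss AAAAB3NzaC1kc3MAAACB...",
    ]

    targets = []
    for line in sample_lines:                  # in practice, the lines of ~/.ssh/known_hosts
        if not line.strip() or line.startswith("#"):
            continue
        host_field = line.split()[0]           # first field holds the names and addresses
        targets.extend(host_field.split(","))  # one host may be listed under several aliases

    print(targets)   # every name is a plausible follow-on target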

A new paper from MIT explores the potential for a worm to use this infection mechanism to propagate across the Internet. Already attackers are exploiting this database after cracking passwords. The paper also warns that a worm that spreads via SSH is likely to evade detection by the bulk of techniques currently coming out of the worm detection community.

While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It’s only a matter of time before someone writes a worm like this.

One of the countermeasures proposed in the paper is to store hashes of host names in the database, rather than the names themselves. This is similar to the way hashes of passwords are stored in password databases, so that security need not rely entirely on the secrecy of the database.
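As a rough illustration of that countermeasure, here is a small Python sketch modeled on the salted-HMAC scheme OpenSSH uses for hashed entries; the hostname is hypothetical, and the real file format has additional fields.

    # Sketch: hash the host field of a known_hosts entry with an HMAC keyed
    # by a per-entry random salt. The file no longer lists hostnames in the
    # clear, but the client can still check the host it is connecting to.
    import base64, hashlib, hmac, os

    def hash_hostname(hostname, salt=None):
        if salt is None:
            salt = os.urandom(20)                      # fresh random salt per entry
        digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
        return "|1|%s|%s|" % (base64.b64encode(salt).decode(),
                              base64.b64encode(digest).decode())

    def matches(hashed_field, candidate):
        _, _, salt_b64, digest_b64, _ = hashed_field.split("|")
        salt = base64.b64decode(salt_b64)
        digest = hmac.new(salt, candidate.encode(), hashlib.sha1).digest()
        return hmac.compare_digest(digest, base64.b64decode(digest_b64))

    entry = hash_hostname("server1.example.edu")       # hypothetical host
    print(entry)                                       # |1|<salt>|<hash>|
    print(matches(entry, "server1.example.edu"))       # True
    print(matches(entry, "badguy.example.com"))        # False

The parallel to password hashing is direct: the client hashes the name it is about to connect to and looks for a matching entry, while an attacker who reads the file learns nothing about which hosts are listed without guessing names one at a time.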

The authors of the paper have worked with the open source community, and version 4.0 of OpenSSH has the option of hashing the known_hosts database. There is also a patch for OpenSSH 3.9 that does the same thing.

The authors are also looking for more data to judge the extent of the problem. Details about the research, the patch, data collection, and whatever else they have going on can be found here.

Posted on May 10, 2005 at 9:06 AM

The PITAC Report on CyberSecurity

I finally got around to reading the President’s Information Technology Advisory Committee (PITAC) report entitled “Cyber Security: A Crisis of Prioritization” (dated February 2005). The report looks at the current state of federal involvement in cybersecurity research, and makes recommendations for the future. It’s a good report, and one which the administration would do well to listen to.

The report’s recommendations are based on two observations: 1) cybersecurity research is primarily focused on current threats rather than long-term threats, and 2) there simply aren’t enough cybersecurity researchers, and no good mechanism for producing them. The federal government isn’t doing enough to foster cybersecurity research, and the effects of this shortfall will be felt more in the long term than in the short term.

To remedy this problem, the report makes four specific recommendations (in much more detail than I summarize here). One, the government needs to increase funding for basic cybersecurity research. Two, the government needs to increase the number of researchers working in cybersecurity. Three, the government needs to better foster the transfer of technology from research to product development. And four, the government needs to improve its own cybersecurity coordination and oversight. Four good recommendations.

More specifically, the report lists ten technologies that need more research. They are (not in any priority order):

Authentication Technologies
Secure Fundamental Protocols
Secure Software Engineering and Software Assurance
Holistic System Security
Monitoring and Detection
Mitigation and Recovery Methodologies
Cyber Forensics
Modeling and Testbeds for New Technologies
Metrics, Benchmarks, and Best Practices
Non-Technology Issues that Can Compromise Cyber Security

It’s a good list, and I am especially pleased to see the tenth item—one that is usually forgotten. I would add something on the order of “Dynamic Cyber Security Systems”—I think we need serious basic research in how systems should react to new threats and how to update the security of already-fielded systems—but that’s all I would change.

The report itself is a bit repetitive, but it’s definitely worth skimming.

Posted on April 27, 2005 at 8:52 AM

Security Trade-Offs

An essay by an anonymous CSO. This is how it begins:

On any given day, we CSOs come to work facing a multitude of security risks. They range from a sophisticated hacker breaching the network to a common thug picking a lock on the loading dock and making off with company property. Each of these scenarios has a probability of occurring and a payout (in this case, a cost to the company) should it actually occur. To guard against these risks, we have a finite budget of resources in the way of time, personnel, money and equipment—poker chips, if you will.

If we’re good gamblers, we put those chips where there is the highest probability of winning a high payout. In other words, we guard against risks that are most likely to occur and that, if they do occur, will cost the company the most money. We could always be better, but as CSOs, I think we’re getting pretty good at this process. So lately I’ve been wondering—as I watch spending on national security continue to skyrocket, with diminishing marginal returns—why we as a nation can’t apply this same logic to national security spending. If we did this, the war on terrorism would look a lot different. In fact, it might even be over.

The whole thing is worth reading.

Posted on April 22, 2005 at 12:32 PM

The Price of Restricting Vulnerability Information

Interesting law article:

There are calls from some quarters to restrict the publication of information about security vulnerabilities in an effort to limit the number of people with the knowledge and ability to attack computer systems. Scientists in other fields have considered similar proposals and rejected them, or adopted only narrow, voluntary restrictions. As in other fields of science, there is a real danger that publication restrictions will inhibit the advancement of the state of the art in computer security. Proponents of disclosure restrictions argue that computer security information is different from other scientific research because it is often expressed in the form of functioning software code. Code has a dual nature, as both speech and tool. While researchers readily understand the information expressed in code, code enables many more people to do harm more readily than with the non-functional information typical of most research publications. Yet, there are strong reasons to reject the argument that code is different, and that restrictions are therefore good policy. Code’s functionality may help security as much as it hurts it and the open distribution of functional code has valuable effects for consumers, including the ability to pressure vendors for more secure products and to counteract monopolistic practices.

Posted on April 4, 2005 at 7:25 AM

Sybase Practices Dumb Security

From Computerworld:

A threat by Sybase Inc. to sue a U.K.-based security research firm if it publicly discloses the details of eight holes it found in Sybase’s database software last year is evoking sharp criticism from some IT managers but sympathetic comments from others.

I can see why Sybase would prefer it if people didn’t know about vulnerabilities in their software—it’s bad for business—but disclosure is the reason companies are fixing them. If researchers are prohibited from publishing, then software developers are free to ignore security problems.

Posted on April 1, 2005 at 1:24 PM

Tracking Bot Networks

This is a fascinating piece of research on bot networks: networks of compromised computers that can be remotely controlled by an attacker. The paper details how bots and bot networks work, who uses them, how they are used, and how to track them.

From the conclusion:

In this paper we have attempted to demonstrate how honeynets can help us understand how botnets work, the threat they pose, and how attackers control them. Our research shows that some attackers are highly skilled and organized, potentially belonging to well organized crime structures. Leveraging the power of several thousand bots, it is viable to take down almost any website or network instantly. Even in unskilled hands, it should be obvious that botnets are a loaded and powerful weapon. Since botnets pose such a powerful threat, we need a variety of mechanisms to counter it.

Decentralized providers like Akamai can offer some redundancy here, but very large botnets can also pose a severe threat even against this redundancy. Taking down of Akamai would impact very large organizations and companies, a presumably high value target for certain organizations or individuals. We are currently not aware of any botnet usage to harm military or government institutions, but time will tell if this persists.

In the future, we hope to develop more advanced honeypots that help us to gather information about threats such as botnets. Examples include Client honeypots that actively participate in networks (e.g. by crawling the web, idling in IRC channels, or using P2P-networks) or modify honeypots so that they capture malware and send it to anti-virus vendors for further analysis. As threats continue to adapt and change, so must the security community.

Posted on March 14, 2005 at 10:46 AM

Fixing Unicode

The Unicode community is working on fixing the security vulnerabilities I talked about here and here. They have a draft technical report that they’re looking for comments on. A solution to these security problems will take some concerted effort, since there are many different kinds of issues involved. (In some ways, the “paypal.com” hack is one of the simpler cases.)
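As a quick refresher on that simpler case, a homograph attack substitutes a visually identical character from another script. A minimal Python illustration (the specific domain and code points are just an example):

    # Homograph spoofing in miniature: a Cyrillic "a" (U+0430) stands in for
    # the Latin "a" (U+0061), giving two distinct but identical-looking names.
    import unicodedata

    latin = "paypal.com"
    spoof = "p\u0430ypal.com"

    print(latin == spoof)              # False: they differ by one code point
    for ch in "a\u0430":
        print("U+%04X" % ord(ch), unicodedata.name(ch))

    # The ASCII-compatible encoding used for international domain names
    # exposes the difference, one place where software can flag the spoof.
    print(spoof.encode("idna"))        # an xn--... form, unlike plain paypal.com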

Posted on March 13, 2005 at 9:31 AM

Bank Sued for Unauthorized Transaction

This story is interesting:

A Miami businessman is suing Bank of America over $90,000 he says was stolen from his online banking account in a case that highlights the thorny question of who is responsible when a customer’s computer is hacked into.

The typical press coverage of this story is along the lines of “Bank of America sued because customer’s PC was hacked.” But that’s not it. Bank of America is being sued because they allowed an unauthorized transaction to occur, and they’re not making good on that mistake. The transaction happened to occur because the customer’s PC was hacked.

I know nothing about the actual suit and its merits, but this is a problem that is not going away. And while I think that banks should not be held responsible for what’s on their customers’ machines, they should be held responsible for allowing unauthorized transactions to occur. The bank’s internal systems, however set up, for whatever reason, permitted the fraudulent transaction.

There is a simple economic incentive problem here. As long as the banks are not responsible for financial losses from fraudulent transactions over the Internet, banks have no incentive to improve security. But if banks are held responsible for these transactions, you can bet that they won’t allow such shoddy security.

Posted on February 9, 2005 at 8:00 AM
