Entries Tagged "computer security"


U.S. Immigration Database Security

In September, the Inspector General of the Department of Homeland Security published a report on the security of the USCIS (United States Citizenship and Immigration Services) databases. It’s called “Security Weaknesses Increase Risks to Critical United States Citizenship and Immigration Services Database,” and a redacted version (.pdf) is on the DHS website.

This is from the Executive Summary:

Although USCIS has not established adequate or effective database security controls for the Central Index System, it has implemented many essential security controls such as procedures for controlling temporary or emergency system access, a configuration management plan, and procedures for implementing routine and emergency changes. Further, we did not identify any significant configuration weaknesses during our technical tests of the Central Index System. However, additional work remains to implement the access controls, configuration management procedures, and continuity of operations safeguards necessary to protect sensitive Central Index System data effectively. Specifically, USCIS has not: 1) implemented effective user administration procedures; 2) reviewed and retained [REDACTED] effectively, 3) ensured that system changes are properly controlled; 4) developed and tested an adequate Information technology (IT) contingency plan; 5) implemented [REDACTED]; or 6) monitored system security functions sufficiently. These database security exposures increase the risk that unauthorized individuals could gain access to critical USCIS database resources and compromise the confidentiality, integrity, and availability of sensitive Central Index System data. [REDACTED]

Posted on December 8, 2005 at 7:38 AM

OpenDocument Format and the State of Massachusetts

OpenDocument format (ODF) is an alternative to Microsoft’s document, spreadsheet, and other Office file formats. (Here’s the homepage for the ODF standard; it’ll put you to sleep, I promise you.)

So far, nothing here is relevant to this blog. Except that Microsoft, with its proprietary Office document format, is spreading rumors that ODF is somehow less secure.

This, from the company that allows Office documents to embed arbitrary Visual Basic programs?

Yes, there is a way to embed scripts in ODF; this seems to be what Microsoft is pointing to. But at least ODF has a clean and open XML format, which allows layered security and the ability to remove scripts as needed. This is much more difficult in the binary Microsoft formats that effectively hide embedded programs.
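Because an ODF file is just a ZIP archive of XML, removing scripts really can be done with ordinary tools. Here’s a minimal sketch in Python; the element and namespace names follow the OASIS ODF schema, but treat this as an illustration of the layered-security point, not a complete sanitizer:

```python
import zipfile
import xml.etree.ElementTree as ET

OFFICE_NS = "urn:oasis:names:tc:opendocument:xmlns:office:1.0"

def strip_scripts(src, dst):
    """Copy an ODF package, removing <office:scripts> from content.xml
    and dropping embedded macro library directories entirely."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename.startswith(("Scripts/", "Basic/")):
                continue  # drop embedded macro libraries wholesale
            data = zin.read(item.filename)
            if item.filename == "content.xml":
                root = ET.fromstring(data)
                for scripts in root.findall(f"{{{OFFICE_NS}}}scripts"):
                    root.remove(scripts)  # strip the script container
                data = ET.tostring(root, encoding="utf-8")
            zout.writestr(item, data)
```

Doing the equivalent to a binary .doc file means parsing an undocumented proprietary format; that asymmetry is the point.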

Microsoft’s claim that the open ODF is inherently less secure than the proprietary Office format is essentially an argument for security through obscurity. ODF is no less secure than the current .doc and other proprietary formats, and may be—marginally, at least—more secure.

This document from the ODF people says it nicely:

There is no greater security risk, no greater ability to “manipulate code” or gain access to content using ODF than alternative document formats. Security should be addressed through policy decisions on information sharing, regardless of document format. Security exposures caused by programmatic extensions such as the visual basic macros that can be imbedded in Microsoft Office documents are well known and notorious, but there is nothing distinct about ODF that makes it any more or less vulnerable to security risks than any other format specification. The many engineers working to enhance the ODF specification are working to develop techniques to mitigate any exposure that may exist through these extensions.

This whole thing has heated up because Massachusetts recently required public records be held in OpenDocument format, which has put Microsoft into a bit of a tizzy. (Here are two commentaries on the security of that move.) I don’t know if it’s why Microsoft is submitting its Office Document Formats to ECMA for “open standardization,” but I’m sure it’s part of the reason.

Posted on December 7, 2005 at 2:21 PM

Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But, a 200-million-node network, with the same proportion of honeypots, should see just 0.001% of machines get infected.
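The scaling claim is easy to explore with a toy model. This sketch is my own simplified version, not the model from the article: every infected node probes one random node per step, and the first probe to hit a honeypot triggers an immediate network-wide immune response.

```python
import random

def simulate(nodes, honeypot_frac, seed=0):
    """Toy worm-vs-honeypot model. Infected nodes probe random targets;
    the first probe that hits a honeypot halts the outbreak. Returns the
    fraction of nodes infected when the worm is stopped."""
    rng = random.Random(seed)
    honeypots = set(rng.sample(range(nodes), int(nodes * honeypot_frac)))
    assert honeypots, "need at least one honeypot"
    # Patient zero: a random non-honeypot node.
    infected = {rng.choice([n for n in range(nodes) if n not in honeypots])}
    while True:
        for _attacker in list(infected):
            target = rng.randrange(nodes)
            if target in honeypots:
                return len(infected) / nodes  # immune response kicks in
            infected.add(target)

simulate(50_000, 0.004)  # roughly the article's 50,000-node scenario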

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once worms start spreading it’s hard to predict exactly what they will do. Some viruses were written to propagate harmlessly, but did damage—ranging from crashed machines to clogged networks—because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PM

Cybercrime Pays

This sentence jumped out at me in an otherwise pedestrian article on criminal fraud:

“Fraud is fundamentally fuelling the growth of organised crime in the UK, earning more from fraud than they do from drugs,” Chris Hill, head of fraud at the Norwich Union, told BBC News.

I’ll bet that most of that involves the Internet to some degree.

And then there’s this:

Global cybercrime turned over more money than drug trafficking last year, according to a US Treasury advisor. Valerie McNiven, an advisor to the US government on cybercrime, claimed that corporate espionage, child pornography, stock manipulation, phishing fraud and copyright offences cause more financial harm than the trade in illegal narcotics such as heroin and cocaine.

This doesn’t bode well for computer security in general.

Posted on November 30, 2005 at 6:05 AM

A Science-Fiction Movie-Plot Threat

This has got to be the most bizarre movie-plot threat to date: alien viruses downloaded via the SETI project:

In his [Richard Carrigan, a particle physicist at the US Fermi National Accelerator Laboratory in Illinois] report, entitled “Do potential Seti signals need to be decontaminated?”, he suggests the Seti scientists may be too blase about finding a signal. “In science fiction, all the aliens are bad, but in the world of science, they are all good and simply want to get in touch.” His main concern is that, intentionally or otherwise, an extra-terrestrial signal picked up by the Seti team could cause widespread damage to computers if released on to the internet without being checked.

Here’s his website.

Although you have to admit, it could make a cool movie.

EDITED TO ADD (12/16): Here’s a good rebuttal.

Posted on November 29, 2005 at 7:16 AM

Vote Someone Else's Shares

Do you own shares of a Janus mutual fund? Can you vote your shares through a website called vote.proxy-direct.com? If so, you can vote the shares of others.

If you have a valid proxy number, you can add 1300 to the number to get another valid proxy number. Once entered, you get another person’s name, address, and account number at Janus! You could then vote their shares too.

It’s easy.

Probably illegal.

Definitely a great resource for identity thieves.

Certainly pathetic.
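The underlying flaw is a classic one: a sequential identifier used as the only credential. The standard fix is to issue unguessable random tokens instead, so that knowing your own number tells you nothing about anyone else’s. A minimal sketch (the helper names are mine, not anything the proxy site actually uses):

```python
import secrets

# Map token -> account; in a real deployment this lives in a database.
_tokens = {}

def issue_proxy_token(account_id):
    """Issue an unguessable voting token for one account. 128 bits of
    randomness: unlike sequential proxy numbers, adding 1300 (or any
    other offset) to a valid token yields nothing useful."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = account_id
    return token

def lookup(token):
    # Unknown or fabricated tokens simply resolve to None.
    return _tokens.get(token)
```

That the fix is this cheap is what makes the vulnerability pathetic rather than merely unfortunate.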

Posted on November 24, 2005 at 10:41 AM

Possible Net Objects Fusion 9 Vulnerability

I regularly get anonymous e-mail from people exposing software vulnerabilities. This one looks interesting.

Beta testers have discovered a serious security flaw in sites created using Net Objects Fusion 9 (NOF9), one that has the potential to expose an entire site to hacking, including passwords and login info for that site. The vulnerability exists for any website published using versioning (that is, all sites using nPower).

The vulnerability is easy to exploit. In your browser enter:
http://domain.com/_versioning_repository_/rollbacklog.xml

Now enter:
http://domain.com/_versioning_repository_/n.zip, where n is the number you got from rollbacklog.xml.

Then, open Fusion and create a new site from the d/l’ed template. Edit and republish.

This means that anyone can edit a NOF9 site and obtain any usernames and passwords used in it. Every site published with versioning in NOF9 is exposed.
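If the report is accurate, a site owner could check their own exposure with a simple probe against the path it describes. A sketch using only Python’s standard library (the repository path comes from the report above; I haven’t verified it myself):

```python
import urllib.request
import urllib.error

def repository_url(base_url):
    """The well-known repository path from the report; it should never
    be publicly readable on a production server."""
    return base_url.rstrip("/") + "/_versioning_repository_/rollbacklog.xml"

def repository_exposed(base_url, opener=urllib.request.urlopen):
    """Return True if the NOF9 versioning repository appears readable.
    Only probe sites you own or have permission to test."""
    try:
        with opener(repository_url(base_url), timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

An HTTP 200 on that URL means anyone on the Internet can download the site’s versioning history, which is exactly the exposure described.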

Website Pros has refused to fix the hole. The only concession that they have made is to put a warning in the publishing dialog box telling the user to “Please make sure your profiles repository are [sic] stored in a secure area of your remote server.”

I don’t use NOF9, and I haven’t tested this vulnerability. Can someone do so and get back to me? And if it is a real problem, spread the word. I don’t know yet if Website Pros prefers to pay lawyers to suppress information rather than pay developers to fix software vulnerabilities.

Posted on November 21, 2005 at 12:31 PM

Cold War Software Bugs

Here’s a report that the CIA slipped software bugs to the Soviets in the 1980s:

In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline, according to a new memoir by a Reagan White House official.

A CIA article from 1996 also describes this.

EDITED TO ADD (11/14): Marcus Ranum wrote about this.

Posted on November 14, 2005 at 8:04 AM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making vendors liable for defects in their products, unlike every other industry:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM

Sony Secretly Installs Rootkit on Computers

Mark Russinovich discovered a rootkit on his system. After much analysis, he discovered that the rootkit was installed as a part of the DRM software linked with a CD he bought. The package cannot be uninstalled. Even worse, the package actively cloaks itself from process listings and the file system.

At that point I knew conclusively that the rootkit and its associated files were related to the First 4 Internet DRM software Sony ships on its CDs. Not happy having underhanded and sloppily written software on my system I looked for a way to uninstall it. However, I didn’t find any reference to it in the Control Panel’s Add or Remove Programs list, nor did I find any uninstall utility or directions on the CD or on First 4 Internet’s site. I checked the EULA and saw no mention of the fact that I was agreeing to have software put on my system that I couldn’t uninstall. Now I was mad.

Removing the rootkit kills Windows.

Could Sony have violated the Computer Misuse Act in the UK? If this isn’t clearly in the EULA, they have exceeded their privilege on the customer’s system by installing a rootkit to hide their software.

Certainly Mark has a reasonable lawsuit against Sony in the U.S.

EDITED TO ADD: The Washington Post is covering this story.

Sony lies about their rootkit:

November 2, 2005 – This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers.

Their update does not remove the rootkit, it just gets rid of the $sys$ cloaking.
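The cloaking Russinovich described works by intercepting system calls and filtering out of directory listings any name that begins with the magic prefix. The effect, though obviously not the kernel-level hooking, can be illustrated in a few lines:

```python
def cloaked_listing(entries, prefix="$sys$"):
    """Mimic the rootkit's filter: drop every entry whose name starts
    with the magic prefix before the listing reaches the user."""
    return [name for name in entries if not name.startswith(prefix)]

cloaked_listing(["aries.sys", "$sys$aries.sys", "readme.txt"])
# → ["aries.sys", "readme.txt"]
```

The filter is indiscriminate: it hides anything with that prefix, whoever put it there—which is why other software can hide behind it, as the World of Warcraft item below shows.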

Ed Felten has a great post on the issue:

The update is more than 3.5 megabytes in size, and it appears to contain new versions of almost all the files included in the initial installation of the entire DRM system, as well as creating some new files. In short, they’re not just taking away the rootkit-like function—they’re almost certainly adding things to the system as well. And once again, they’re not disclosing what they’re doing.

No doubt they’ll ask us to just trust them. I wouldn’t. The companies still assert—falsely—that the original rootkit-like software “does not compromise security” and “[t]here should be no concern” about it. So I wouldn’t put much faith in any claim that the new update is harmless. And the companies claim to have developed “new ways of cloaking files on a hard drive”. So I wouldn’t derive much comfort from carefully worded assertions that they have removed “the … component .. that has been discussed”.

And you can use the rootkit to avoid World of Warcraft spyware.

World of Warcraft hackers have confirmed that the hiding capabilities of Sony BMG’s content protection software can make tools made for cheating in the online world impossible to detect.


EDITED TO ADD: F-Secure makes a good point:

A member of our IT security team pointed out a quite chilling thought about what might happen if record companies continue adding rootkit-based copy protection to their CDs.

In order to hide from the system, a rootkit must interface with the OS at a very low level, and in those areas there’s no room for error.

It is hard enough to program something at that level without having to worry about other programs trying to do something with the same parts of the OS.

Thus, if there were two DRM rootkits on the same system trying to hook the same APIs, the results would be highly unpredictable. Or actually, a system crash is quite a predictable result in such a situation.

EDITED TO ADD: Declan McCullagh has a good essay on the topic. There will be lawsuits.

EDITED TO ADD: The Italian police are getting involved.

EDITED TO ADD: Here’s a Trojan that uses Sony’s rootkit to hide.

EDITED TO ADD: Sony temporarily halts production of CDs protected with this technology.

Posted on November 1, 2005 at 10:17 AM

