Entries Tagged "operating systems"


Forged Memory

A scary development in rootkits:

Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).

For example, rootkits may hide files, registry entries, processes, etc., from detection software. Since rootkits typically modify memory, anti-rootkit tools inspect memory areas to identify such suspicious modifications and alert users.

This particular rootkit also modifies a memory location (installs a hook) to prevent proper disk access by detection software. Let us say that location is X. It is noteworthy that this location X is well known for being modified by other rootkit families, and is not unique to this particular rootkit.

Now since the content at location X is known to be altered by rootkits in general, most anti-rootkit tools will inspect the content at memory location X to see if it has been modified.

[…]

In the case of this particular rootkit, the original (expected) content at location X is moved by the rootkit to a different location, Y. When an anti-rootkit tool tries to read the contents at location X, it is served the contents from location Y. So the anti-rootkit tool, thinking everything is as it should be, does not warn the user of suspicious activity.
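
To make the trick concrete, here is a toy Python sketch, not real kernel code; the addresses "X" and "Y" and the byte values are hypothetical stand-ins for the mechanism described above.

```python
# Toy model of the read-redirection trick: the rootkit saves the
# original bytes from X at Y, installs its hook at X, and answers any
# inspection of X with the clean copy from Y. All values are made up.
memory = {"X": b"\x8b\xff\x55"}        # pretend original bytes at X
saved = {}                             # the rootkit's stash (location Y)

def infect():
    saved["Y"] = memory["X"]           # move the original content to Y
    memory["X"] = b"\xe9\x10\x00"      # install the hook at X

def scanner_read(addr):
    # The rootkit intercepts reads: a scanner looking at X is served
    # the expected contents from Y and sees nothing suspicious.
    if addr == "X" and "Y" in saved:
        return saved["Y"]
    return memory[addr]

infect()
print(scanner_read("X") == b"\x8b\xff\x55")  # True: the scanner is fooled
print(memory["X"])                           # but execution hits the hook
```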

Posted on May 6, 2011 at 12:32 PM

Whitelisting vs. Blacklisting

The whitelist/blacklist debate is far older than computers, and it’s instructive to recall what works where. Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier—although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t—but because it’s a security system that can be implemented automatically, without people.

To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino’s black book or the more general Griffin book. Some retail stores have the same model—a Google search on “banned from Wal-Mart” results in 1.5 million hits, including Megan Fox—although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?

National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.

Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist—the software can do it largely for free.
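
As a minimal sketch of the two defaults (names hypothetical): a whitelist denies by default and admits only listed principals; a blacklist allows by default and blocks only listed bad items.

```python
WHITELIST = {"alice", "bob"}              # who may open the door
BLACKLIST = {"known_worm.exe"}            # what the scanner must block

def door_allows(person: str) -> bool:
    return person in WHITELIST            # default deny

def scanner_allows(filename: str) -> bool:
    return filename not in BLACKLIST      # default allow

print(door_allows("mallory"))             # False: unknown people stay out
print(scanner_allows("novel_worm.exe"))   # True: unknown threats get in
```

That last line is the blacklist's structural weakness: anything not yet on the list sails through.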

Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn’t make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you’re often limited to an Internet browser and a few common business applications.

Lately, we’re seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it’s being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you’re using?

Turns out that many people do. Apple’s control over its apps hasn’t seemed to hurt iPhone sales, and Facebook’s control over its apps hasn’t seemed to affect Facebook’s user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.

For these two reasons, I think the whitelist model will continue to make inroads into our general purpose computers. And those of us who want control over our own environments will fight back—perhaps with a whitelist we maintain personally, but more probably with a blacklist.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. You can read Marcus’s half there as well.

Posted on January 28, 2011 at 5:02 AM

Indian OS

India is writing its own operating system so it doesn’t have to rely on Western technology:

India’s Defence Research and Development Organisation (DRDO) wants to build an OS, primarily so India can own the source code and architecture. That will mean the country won’t have to rely on Western operating systems that it thinks aren’t up to the job of thwarting cyber attacks. The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen.

On the one hand, this is great. We could use more competition in the OS market—as more and more applications move into the cloud and are accessed only via a browser, OS compatibility matters less and less—and an OS that brands itself as “more secure” can only help. But this security-by-obscurity thinking just isn’t true:

“The only way to protect it is to have a home-grown system, the complete architecture … source code is with you and then nobody knows what’s that,” he added.

The only way to protect it is to design and implement it securely. Keeping control of your source code didn’t magically make Windows secure, and it won’t make this Indian OS secure.

Posted on October 15, 2010 at 3:12 AM

Protecting OSs from Rootkits

Interesting research: “Countering Kernel Rootkits with Lightweight Hook Protection,” by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.

Abstract: Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the so-called protection granularity gap: kernel hook protection requires byte-level granularity but commodity hardware only provides page-level protection.

To address the above challenges, in this paper, we present HookSafe, a hypervisor-based lightweight system that can protect thousands of kernel hooks in a guest OS from being hijacked. One key observation behind our approach is that a kernel hook, once initialized, may be frequently “read”-accessed, but rarely “write”-accessed. As such, we can relocate those kernel hooks to a dedicated page-aligned memory space and then regulate accesses to them with hardware-based page-level protection. We have developed a prototype of HookSafe and used it to protect more than 5,900 kernel hooks in a Linux guest. Our experiments with nine real-world rootkits show that HookSafe can effectively defeat their attempts to hijack kernel hooks. We also show that HookSafe achieves such a large-scale protection with a small overhead (e.g., around 6% slowdown in performance benchmarks).
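
The key observation, that hooks are read often but written rarely, suggests relocating them to where page protection can cover them. Here is a loose userspace analogy in Python (Unix-flavored; the hook values are made up): the relocated hooks live in a read-only mapping, so reads stay cheap while any write attempt is refused. The real system mediates writes in a hypervisor rather than rejecting them outright.

```python
import mmap, tempfile

# Loose userspace analogy to HookSafe's relocation idea: copy the hook
# values into one dedicated region, then map that region read-only.
hooks = [0xDEADBEEF, 0xCAFEBABE]        # hypothetical hook values
backing = tempfile.TemporaryFile()
backing.write(b"".join(h.to_bytes(8, "little") for h in hooks))
backing.flush()

shadow = mmap.mmap(backing.fileno(), 0, access=mmap.ACCESS_READ)

print(hex(int.from_bytes(shadow[0:8], "little")))    # reads stay cheap
try:
    shadow[0:8] = (0xBADC0DE).to_bytes(8, "little")  # a hijack attempt
except TypeError as err:            # read-only mapping refuses writes
    print("write refused:", err)
```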

The research will be presented at the 16th ACM Conference on Computer and Communications Security this week. Here’s an article on the research.

Posted on November 10, 2009 at 1:26 PM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s blaming security for it, is a bit of a be-careful-what-you-wish-for story. Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer-overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, locking Vista down out of the box made it less palatable to users.

Now, security obstacles weren’t the only ills Vista suffered. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features all made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. The problem is that they were almost always false alarms, and there were no adverse effects from ignoring them. So users ignored them, which meant the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Proving a Computer Program's Correctness

This is interesting:

Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel—the code at the heart of any computer or microprocessor—was 100 per cent bug-free and therefore immune to crashes and failures.

Don’t expect this to be practical any time soon:

Verifying the kernel—known as the seL4 microkernel—involved mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years.

That’s 250 lines of code verified per man-year. Both Linux and Windows have something like 50 million lines of code; verifying that would take 200,000 man-years, assuming no additional complexity from the enormous increase in size. Clearly some efficiency improvements are required.
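
The back-of-the-envelope arithmetic, spelled out:

```python
lines_verified = 7_500
man_years = 6 * 5                    # six people, more than five years
rate = lines_verified / man_years    # 250 lines per man-year
os_size = 50_000_000                 # rough size of Linux or Windows
print(rate, os_size / rate)          # 250.0 and 200000.0 man-years
```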

Posted on October 2, 2009 at 7:01 AM

Texas Instruments Signing Keys Broken

Texas Instruments’ calculators use RSA digital signatures to authenticate updates to their operating systems. Unfortunately, their signing keys are too short: 512 bits. Earlier this month, a collaborative effort factored the moduli and published the private keys. Texas Instruments responded with DMCA threats against websites that published the keys, but it’s too late.

So far, we have the operating-system signing keys for the TI-92+, TI-73, TI-89, TI-83+/TI-83+ Silver Edition, Voyage 200, TI-89 Titanium, and the TI-84+/TI-84 Silver Edition, and the date-stamp signing key for the TI-73, Explorer, TI-83 Plus, TI-83 Silver Edition, TI-84 Plus, TI-84 Silver Edition, TI-89, TI-89 Titanium, TI-92 Plus, and the Voyage 200.

Moral: Don’t assume that obscurity, or the absence of an obvious financial incentive for attackers, will protect your cryptography if you use too-short keys.
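
To see why key length is the whole game, here is a textbook-RSA sketch with deliberately tiny, hypothetical numbers: once the modulus is factored, the private exponent, and with it the ability to sign arbitrary OS images, falls out of simple arithmetic. (Factoring real 512-bit moduli takes the general number field sieve; only the key-recovery step is shown here.)

```python
# Textbook RSA with toy numbers (n = 53 * 61): factoring n yields the
# private key, which is exactly what happened to TI's 512-bit moduli.
n, e = 3233, 17

p = next(d for d in range(2, n) if n % d == 0)   # trivial at this size
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # modular inverse (Python 3.8+)

msg = 42                      # stand-in for an OS-update hash
sig = pow(msg, d, n)          # forge a signature with the recovered key
assert pow(sig, e, n) == msg  # the calculator would accept it
print(f"p={p} q={q} d={d} sig={sig}")
```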

Posted on September 25, 2009 at 6:17 AM

Making an Operating System Virus Free

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible—not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible—to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis, in which he proved that it is impossible to create a perfect virus-checking program. That is, for any virus-checking program, it is always possible to write a virus that it will not detect.
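
Cohen’s argument is a diagonalization, the same move as in the halting problem: given any claimed-perfect detector, build a program that asks the detector about itself and does the opposite. A sketch in Python, with hypothetical names:

```python
def replicate():
    print("...would spread here...")  # stand-in for viral behavior

def perfect_detector(program) -> bool:
    """Hypothetical oracle: True iff `program` would act as a virus."""
    return False                      # any fixed answer loses below

def contrary():
    if perfect_detector(contrary):
        return                        # called a virus? behave harmlessly
    replicate()                       # called clean? spread anyway

contrary()  # whatever the detector says about contrary, it is wrong
```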

This reaction to my comment is accurate:

That seems to us like he’s picking on the semantics of Google’s statement just a bit. Google says that users “won’t have to deal with viruses,” and Schneier is noting that it’s simply not possible to create an OS that can’t be taken down by malware. While that may be the case, it’s likely that Chrome OS is going to be arguably more secure than the other consumer operating systems currently in use today. In fact, we didn’t take Google’s statement to mean that Chrome OS couldn’t get a virus EVER; we just figured they meant it was a lot harder to get one on their new OS – didn’t you?

When I said that, I had not seen Google’s statement. I was responding to what the reporter was telling me on the phone. So yes, I jumped on the reporter’s claim about Google’s claim. I did try to temper my comment:

Redesigning an operating system from scratch, “[taking] security into account all the way up and down,” could make for a more secure OS than ones that have been developed so far, Schneier said. But that’s different from Google’s promise that users won’t have to deal with viruses or malware, he added.

To summarize: there is a lot that can be done in an OS to reduce the threat of viruses and other malware. If the Chrome team started from scratch and took security seriously all through the design and development process, they have the potential to develop something really secure. But I don’t know whether they did.

Posted on July 10, 2009 at 9:44 AM

Malware Steals ATM Data

One of the risks of using a commercial OS for embedded systems like ATMs: it’s easier to write malware against it:

The report does not detail how the ATMs are infected, but it seems likely that the malware is encoded on a card that can be inserted in an ATM card reader to mount a buffer overflow attack. The machine is compromised by replacing the isadmin.exe file to infect the system.

The malicious isadmin.exe program then uses the Windows API to install the functional attack code by replacing a system file called lsass.exe in the C:\WINDOWS directory.

Once the malicious lsass.exe program is installed, it collects users’ account numbers and PIN codes and waits for a human controller to insert a specially crafted control card to take over the ATM.

After the ATM is put under the control of a human attacker, they can perform various functions, including harvesting the purloined data or even ejecting the cash box.

EDITED TO ADD (6/14): Seems like the story I quoted was jumping to conclusions. The actual report says “the malware is installed and activated through a dropper file (a file that an attacker can use to deploy tools onto a compromised system) by the name of isadmin.exe,” which doesn’t really sound like it’s referring to a buffer overflow attack carried out through a card emulator. Also, The Register says “[the] malicious programs can be installed only by people with physical access to the machines, making some level of insider cooperation necessary.”
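
Whatever the initial infection vector turned out to be, the persistence step is plain file replacement, which is exactly what offline integrity checking catches. A sketch of that check follows; the path and baseline digest are placeholders, not real values.

```python
import hashlib
from pathlib import Path

# Placeholder baseline: in practice, digests come from clean install media.
KNOWN_GOOD = {
    r"C:\WINDOWS\lsass.exe": "0123abcd...",  # hypothetical digest
}

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

for path, expected in KNOWN_GOOD.items():
    try:
        status = "OK" if sha256_of(path) == expected else "MODIFIED"
    except FileNotFoundError:
        status = "MISSING"
    print(path, status)
```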

Posted on June 10, 2009 at 1:51 PM

Update on Computer Science Student's Computer Seizure

In April, I blogged about the Boston police seizing a student’s computer for, among other things, running Linux. (Anyone who runs Linux instead of Windows is obviously a scary bad hacker.)

Last week, the Massachusetts Supreme Judicial Court threw out the search warrant:

Massachusetts Supreme Judicial Court Associate Justice Margot Botsford on Thursday said that Boston College and Massachusetts State Police had insufficient evidence to search the dorm room of BC senior Riccardo Calixte. During the search, police confiscated a variety of electronic devices, including three laptop computers, two iPod music players, and two cellphones.

Police obtained a warrant to search Calixte’s dorm after a roommate accused him of breaking into the school’s computer network to change other students’ grades, and of spreading a rumor via e-mail that the roommate is gay.

Botsford said the search warrant affidavit presented considerable evidence that the e-mail came from Calixte’s laptop computer. But even if it did, she said, spreading such rumors is probably not illegal. Botsford also said that while breaking into BC’s computer network would be criminal activity, the affidavit supporting the warrant presented little evidence that such a break-in had taken place.

Posted on June 2, 2009 at 12:01 PM
