Entries Tagged "Windows"


Indian OS

India is writing its own operating system so it doesn’t have to rely on Western technology:

India’s Defence Research and Development Organisation (DRDO) wants to build an OS, primarily so India can own the source code and architecture. That will mean the country won’t have to rely on Western operating systems that it thinks aren’t up to the job of thwarting cyber attacks. The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen.

On the one hand, this is great. We could use more competition in the OS market—as more and more applications move into the cloud and are accessed only via an Internet browser, OS compatibility matters less and less—and an OS that brands itself as “more secure” can only help. But this kind of security-by-obscurity thinking just isn’t true:

“The only way to protect it is to have a home-grown system, the complete architecture … source code is with you and then nobody knows what’s that,” he added.

The only way to protect it is to design and implement it securely. Keeping control of your source code didn’t magically make Windows secure, and it won’t make this Indian OS secure.

Posted on October 15, 2010 at 3:12 AM

Stuxnet

Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it’s obvious. In the case of Stuxnet, there’s a great story.

As the story goes, the Stuxnet worm was designed and released by a government—the U.S. and Israel are the most common suspects—specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that’s a pariah to much of the world. The only problem with the story is that it’s almost entirely speculation.

Here’s what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four “zero-day exploits”: vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)

Stuxnet doesn’t actually do anything on those infected Windows computers, because they’re not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines—and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.

If it doesn’t find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It’s impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location—and that Stuxnet’s authors knew exactly what they were targeting.

It’s already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don’t know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India’s INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.

All the anti-virus programs detect and remove Stuxnet from Windows systems.

Stuxnet was first discovered in late June, although there’s speculation that it was released a year earlier. As worms go, it’s very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own drivers into Windows. Drivers have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.

Over time the attackers swapped out modules that didn’t work and replaced them with new ones—perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.

Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.

We don’t know who wrote Stuxnet. We don’t know why. We don’t know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities; a criminal group would be smarter to create different worm variants and use one zero-day in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There’s also the lab setup—surely any organization that goes to all this trouble would test the thing before releasing it—and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They’re hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.

None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory “highly speculative,” and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates—India, Indonesia, and Pakistan—are countries where the same Russian contractor that works on Bushehr is also active. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.

Once a theory takes hold, though, it’s easy to find more evidence. The word “myrtus” appears in the worm: an artifact that the compiler left, possibly by accident. That’s the myrtle plant. Of course, that doesn’t mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. “Hadassah” means “myrtle” in Hebrew.

Stuxnet also sets a registry value of “19790509” to alert new copies of Stuxnet that the computer has already been infected. It’s rather obviously a date, but instead of looking at the gazillion things—large and small—that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.

Sure, these markers could point to Israel as the author. On the other hand, Stuxnet’s authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it’s impossible to know when to stop.

Another number found in Stuxnet is 0xDEADF007. Perhaps that means “Dead Fool” or “Dead Foot,” a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to carry out specific sabotage seems to be the most likely explanation.

If that’s the case, why is Stuxnet so sloppily targeted? Why doesn’t Stuxnet erase itself when it realizes it’s not in the targeted network? When it infects a network via USB stick, it’s supposed to spread to only three additional computers and to erase itself after 21 days—but it doesn’t do that. A mistake in programming, or a feature in the code not enabled? Maybe we’re not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors caused collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet’s authors didn’t care.

My guess is that Stuxnet’s authors, and its target, will forever remain a mystery.

This essay originally appeared on Forbes.com.

My alternate explanations for Stuxnet were cut from the essay. Here they are:

  • A research project that got out of control. Researchers have accidentally released worms before. But given the press, and the fact that any researcher working on something like this would be talking to friends, colleagues, and his advisor, I would expect someone to have outed him by now, especially if it was done by a team.
  • A criminal worm designed to demonstrate a capability. Sure, that’s possible. Stuxnet could be a prelude to extortion. But I think a cheaper demonstration would be just as effective. Then again, maybe not.
  • A message. It’s hard to speculate any further, because we don’t know who the message is for, or its context. Presumably the intended recipient would know. Maybe it’s a “look what we can do” message. Or an “if you don’t listen to us, we’ll do worse next time” message. Again, it’s a very expensive message, but maybe one of the pieces of the message is “we have so many resources that we can burn four or five man-years of effort and four zero-day vulnerabilities just for the fun of it.” If that message were for me, I’d be impressed.
  • A worm released by the U.S. military to scare the government into giving it more budget and power over cybersecurity. Nah, that sort of conspiracy is much more common in fiction than in real life.

Note that some of these alternate explanations overlap.

EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn’t a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.

Posted on October 7, 2010 at 9:56 AM

Hacking ATMs

Hacking ATMs to spit out money, demonstrated at the Black Hat conference:

The two systems he hacked on stage were made by Triton and Tranax. The Tranax hack was conducted using an authentication bypass vulnerability that Jack found in the system’s remote monitoring feature, which can be accessed over the Internet or dial-up, depending on how the owner configured the machine.

Tranax’s remote monitoring system is turned on by default, but Jack said the company has since begun advising customers to protect themselves from the attack by disabling the remote system.

To conduct the remote hack, an attacker would need to know an ATM’s Internet IP address or phone number. Jack said he believes about 95 percent of retail ATMs are on dial-up; a hacker could war dial for ATMs connected to telephone modems, and identify them by the cash machine’s proprietary protocol.

The Triton attack was made possible by a security flaw that allowed unauthorized programs to execute on the system. The company distributed a patch last November so that only digitally signed code can run on them.

Both the Triton and Tranax ATMs run on Windows CE.

Using a remote attack tool, dubbed Dillinger, Jack was able to exploit the authentication bypass vulnerability in Tranax’s remote monitoring feature and upload software or overwrite the entire firmware on the system. With that capability, he installed a malicious program he wrote, called Scrooge.

EDITED TO ADD (7/30): Another two articles.

Posted on July 30, 2010 at 8:55 AM

New Windows Attack

It’s still only in the lab, but nothing detects it right now:

The attack is a clever “bait-and-switch” style move. Harmless code is passed to the security software for scanning, but as soon as it’s given the green light, it’s swapped for the malicious code. The attack works even more reliably on multi-core systems because one thread doesn’t keep an eye on other threads that are running simultaneously, making the switch easier.

The attack, called KHOBE (Kernel HOok Bypassing Engine), leverages a Windows module called the System Service Descriptor Table, or SSDT, which is hooked up to the Windows kernel. Unfortunately, SSDT is utilized by antivirus software.

Posted on May 14, 2010 at 11:50 AM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s blaming it on security, is a bit of “be careful what you wish for.” Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative that advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box was less palatable to users.

Security obstacles weren’t the only ills that Vista suffered, though. A huge memory footprint, incompatible graphics requirements, slow responsiveness and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs, and it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. They were almost always false alarms, and there were no adverse effects from ignoring them. So users did, which means the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Microsoft Bans Memcpy()

This seems smart:

Microsoft plans to formally banish the popular programming function that’s been responsible for an untold number of security vulnerabilities over the years, not just in Windows but in countless other applications based on the C language. Effective later this year, Microsoft will add memcpy(), CopyMemory(), and RtlCopyMemory() to its list of function calls banned under its secure development lifecycle.

Here’s the list of banned function calls. This doesn’t help secure legacy code, of course, but you have to start somewhere.

Posted on May 20, 2009 at 6:17 AM

Secure Version of Windows Created for the U.S. Air Force

I have long argued that the government should use its massive purchasing power to pressure software vendors to improve security. Seems like the U.S. Air Force has done just that:

The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn’t have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.

Surprisingly, Microsoft quickly agreed to the plan, and Ballmer got personally involved in the project.

“He has half-a-dozen clients that he personally gets involved with, and he saw that this just made a lot of sense,” Gilligan said. “They had already done preliminary work themselves trying to identify what would be a more secure configuration. So we fine-tuned and added to that.”

The NSA got together with the National Institute of Standards and Technology, the Defense Information Systems Agency and the Center for Internet Security to decide what to lock down in the Air Force special edition.

Many of the changes were complex and technical, but Gilligan says one of the most important and simplest was an obvious fix to how Windows XP handled passwords. The Air Force insisted the system be configured so administrative passwords were unique, and different from general user passwords, preventing an average user from obtaining administrative privileges. Specifications were added to increase the length and complexity of passwords and expire them every 60 days.

It then took two years for the Air Force to catalog and test all the software applications on its networks against the new configuration to uncover conflicts. In some cases, where internally designed software interacted with Windows XP in an insecure way, they had to change the in-house software.

Now I want Microsoft to offer this configuration to everyone.

EDITED TO ADD (5/6): Microsoft responds:

Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn’t a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn’t deliver special settings that only the Air Force can access. The Air Force asked us to help them create hardened GPOs and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.

The work from the AF ended up morphing into the Federal Desktop Core Configuration (FDCC) recommendations maintained by NIST. There are differences, but they are essentially the same thing. NIST initially used even more secure settings in the hardening process (many of which have since been relaxed because of operational issues, bringing it even closer to what the AF created).

Anyone can download the FDCC settings, documentation, and even complete images. I worked on the FDCC project for a little over a year, and Aaron Margosis has been involved for many years, and continues to be involved. He offers all sorts of public knowledge and useful tools. Here, Aaron has written a couple of tools that anyone can use to apply FDCC settings to local group policy. It includes the source code, if anyone wants to customize them.

In the initial article, a lot of the other improvements, such as patching, came from the use of better tools (SCCM, etc.), and were not necessarily solely due to the changes in the base image (although that certainly didn’t hurt). So, it seems the author mixed up some of the different technology pushes and wrapped them up into a single story. He also seems to imply that this is something special and secret, but the truth is there is more openness with the FDCC program and the surrounding security outcomes than anything we’ve ever done before. Even better, there are huge agencies that have already gone first in trying to use these hardened settings, and essentially been beta testers for the rest of the world. The FDCC settings may not be the best fit for every company, but it is a good model to compare against.

Let me know if you have any questions.

Roger A. Grimes, Security Architect, ACE Team, Microsoft

EDITED TO ADD (5/12): FDCC policy specs.

Posted on May 6, 2009 at 6:43 AM

Security Fears Drive Iran to Linux

According to The Age in Australia:

“We would have to pay a lot of money,” said Sephery-Rad, noting that most of the government’s estimated one million PCs and the country’s total of six to eight million computers were being run almost exclusively on the Windows platform.

“Secondly, Microsoft software has a lot of backdoors and security weaknesses that are always being patched, so it is not secure. We are also under US sanctions. All this makes us think we need an alternative operating system.”

[…]

“Microsoft is a national security concern. Security in an operating system is an important issue, and when it is on a computer in the government it is of even greater importance,” said the official.

Posted on March 27, 2009 at 5:52 AM

Interview with an Adware Developer

Fascinating:

I should probably first speak about how adware works. Most adware targets Internet Explorer (IE) users because obviously they’re the biggest share of the market. In addition, they tend to be the less-savvy chunk of the market. If you’re using IE, then either you don’t care or you don’t know about all the vulnerabilities that IE has.

IE has a mechanism called a Browser Helper Object (BHO) which is basically a gob of executable code that gets informed of web requests as they’re going. It runs in the actual browser process, which means it can do anything the browser can do—which means basically anything. We would have a Browser Helper Object that actually served the ads, and then we made it so that you had to kill all the instances of the browser to be able to delete the thing. That’s a little bit of persistence right there.

If you also have an installer, a little executable, you can make a Registry entry and every time this thing reboots, the installer will check to make sure the BHO is there. If it is, great. If it isn’t, then it will install it. That’s fine until somebody goes and deletes the executable.

The next thing that Direct Revenue did—actually I should say what I did, because I was pretty heavily involved in this—was make a poller which continuously polls about every 10 seconds or so to see if the BHO was there and alive. If it was, great. If it wasn’t, [the poller would] install it. To make sure the poller was less likely to be detected, we developed this algorithm (a really trivial one) for making a random-looking filename that was consistent per machine but was not easy to guess. I think it was the first 6 or 8 characters of the DES-encoded MAC address. You take the MAC address, encode it with DES, take the first six characters and that was it. That was pretty good, except the file itself would be the same binary. If you md5-summed the file it would always be the same everywhere, and it was always in the same location.

Next we made a function shuffler, which would go into an executable, take the functions and randomly shuffle them. Once you do that, then of course the signature’s all messed up. [We also shuffled] a lot of the pointers within each actual function. It completely changed the shape of the executable.

We then made a bootstrapper, which was a tiny tiny piece of code written in Assembler which would decrypt the executable in memory, and then just run it. At the same time, we also made a virtual process executable. I’ve never heard of anybody else doing this before. Windows has this thing called Create Remote Thread. Basically, the semantics of Create Remote Thread are: You’re a process, I’m a different process. I call you and say “Hey! I have this bit of code. I’d really like it if you’d run this.” You’d say, “Sure,” because you’re a Windows process—you’re all hippie-like and free love. Windows processes, by the way, are insanely promiscuous. So! We would call a bunch of processes, hand them all a gob of code, and they would all run it. Each process would all know about two of the other ones. This allowed them to set up a ring…mutual support, right?

So we’ve progressed now from having just a Registry key entry, to having an executable, to having a randomly-named executable, to having an executable which is shuffled around a little bit on each machine, to one that’s encrypted—really more just obfuscated—to an executable that doesn’t even run as an executable. It runs merely as a series of threads. Now, those threads can communicate with one another, they would check to make sure that the BHO was there and up, and that the whatever other software we had was also up.

There was one further step that we were going to take but didn’t end up doing, and that is we were going to get rid of threads entirely, and just use interrupt handlers. It turns out that in Windows, you can get access to the interrupt handler pretty easily. In fact, you can register with the OS a chunk of code to handle a given interrupt. Then all you have to do is arrange for an interrupt to happen, and every time that interrupt happens, you wake up, do your stuff and go away. We never got to actually do that, but it was something we were thinking we’d do.

EDITED TO ADD (1/30): Good commentary on the interview, showing how it whitewashes history.

EDITED TO ADD (2/13): Some more commentary.

Posted on January 30, 2009 at 6:19 AM

