Entries Tagged "vulnerabilities"

Page 42 of 49

WiFi Driver Attack

In this attack, you can seize control of someone’s computer using his WiFi interface, even if he’s not connected to a network.

The two researchers used an open-source 802.11 hacking tool called LORCON (Loss of Radio Connectivity) to throw an extremely large number of wireless packets at different wireless cards. Hackers use this technique, called fuzzing, to see if they can cause programs to fail, or perhaps even run unauthorized software when they are bombarded with unexpected data.
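The fuzzing technique itself is generic: generate a flood of malformed protocol frames and watch the target for crashes or misbehavior. Here's a minimal sketch of the idea in Python. This is not LORCON (a C library for 802.11 injection); the frame layout is a simplified stand-in, and no frames are actually transmitted:

```python
import os
import random
import struct

def random_80211_frame(max_body=512):
    """Build a simplified, deliberately malformed 802.11-style management
    frame: fixed-size header fields followed by a random-length, random-content
    body. A real driver fuzzer injects frames like this over a wireless
    interface and watches the target machine for hangs or crashes."""
    frame_control = struct.pack("<H", random.getrandbits(16))  # random type/subtype/flags
    duration = struct.pack("<H", random.getrandbits(16))
    addrs = os.urandom(18)                          # addr1/addr2/addr3, 6 bytes each
    seq_ctl = struct.pack("<H", random.getrandbits(16))
    body = os.urandom(random.randint(0, max_body))  # garbage "information elements"
    return frame_control + duration + addrs + seq_ctl + body

def fuzz(n):
    """Generate n candidate frames; in a real test each would be injected
    and the target driver monitored for failure."""
    return [random_80211_frame() for _ in range(n)]

frames = fuzz(1000)
print(min(len(f) for f in frames), max(len(f) for f in frames))
```

The interesting bugs live in the driver code that parses those variable-length bodies; oversized or malformed elements are exactly what trip buffer-handling flaws.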

Using tools like LORCON, Maynor and Ellch were able to discover many examples of wireless device driver flaws, including one that allowed them to take over a laptop by exploiting a bug in an 802.11 wireless driver. They also examined other networking technologies including Bluetooth, Ev-Do (EVolution-Data Only), and HSDPA (High Speed Downlink Packet Access).

The two researchers declined to disclose the specific details of their attack before the August 2 presentation, but they described it in dramatic terms.

“This would be the digital equivalent of a drive-by shooting,” said Maynor. An attacker could exploit this flaw by simply sitting in a public space and waiting for the right type of machine to come into range.

The victim would not even need to connect to a network for the attack to work.

No details yet. The researchers are presenting their results at Black Hat on August 2.

Posted on July 6, 2006 at 1:52 PM

Brennan Center Report on Security of Voting Systems

I have been participating in the Brennan Center’s Task Force on Voting Security. Last week we released a report on the security of voting systems.

From the Executive Summary:

In 2005, the Brennan Center convened a Task Force of internationally renowned government, academic, and private-sector scientists, voting machine experts and security professionals to conduct the nation’s first systematic analysis of security vulnerabilities in the three most commonly purchased electronic voting systems. The Task Force spent more than a year conducting its analysis and drafting this report. During this time, the methodology, analysis, and text were extensively peer reviewed by the National Institute of Standards and Technology (“NIST”).

[…]

The Task Force examined security threats to the technologies used in Direct Recording Electronic voting systems (“DREs”), DREs with a voter verified auditable paper trail (“DREs w/ VVPT”) and Precinct Count Optical Scan (“PCOS”) systems. The analysis assumes that appropriate physical security and accounting procedures are all in place.

[…]

Three fundamental points emerge from the threat analysis in the Security Report:

  • All three voting systems have significant security and reliability vulnerabilities, which pose a real danger to the integrity of national, state, and local elections.
  • The most troubling vulnerabilities of each system can be substantially remedied if proper countermeasures are implemented at the state and local level.
  • Few jurisdictions have implemented any of the key countermeasures that could make the least difficult attacks against voting systems much more difficult to execute successfully.

[…]

There are a number of steps that jurisdictions can take to address the vulnerabilities identified in the Security Report and make their voting systems significantly more secure. We recommend adoption of the following security measures:

  1. Conduct automatic routine audits comparing voter verified paper records to the electronic record following every election. A voter verified paper record accompanied by a solid automatic routine audit of those records can go a long way toward making the least difficult attacks much more difficult.
  2. Perform “parallel testing” (selection of voting machines at random and testing them as realistically as possible on Election Day). For paperless DREs, in particular, parallel testing will help jurisdictions detect software-based attacks, as well as subtle software bugs that may not be discovered during inspection and other testing.
  3. Ban use of voting machines with wireless components. All three voting systems are more vulnerable to attack if they have wireless components.
  4. Use a transparent and random selection process for all auditing procedures. For any auditing to be effective (and to ensure that the public is confident in such procedures), jurisdictions must develop and implement transparent and random selection procedures.

  5. Ensure decentralized programming and voting system administration. Where a single entity, such as a vendor or state or national consultant, performs key tasks for multiple jurisdictions, attacks against statewide elections become easier.
  6. Institute clear and effective procedures for addressing evidence of fraud or error. Both automatic routine audits and parallel testing are of questionable security value without effective procedures for action where evidence of machine malfunction and/or fraud is discovered. Detection of fraud without an appropriate response will not prevent attacks from succeeding.
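Recommendations 1 and 4 fit together naturally: the audit only inspires confidence if the sample is chosen by a process anyone can verify. A hedged sketch of that combination, where the RNG seed, sample rate, and tally format are illustrative assumptions rather than anything the Task Force specifies:

```python
import hashlib
import random

def select_precincts(precinct_ids, sample_rate, public_seed):
    """Transparent random selection: seed the RNG with a publicly verifiable
    value (e.g. dice rolled in public), so anyone can re-run the selection
    and confirm that the audited precincts weren't cherry-picked."""
    rng = random.Random(hashlib.sha256(public_seed.encode()).hexdigest())
    k = max(1, round(len(precinct_ids) * sample_rate))
    return sorted(rng.sample(sorted(precinct_ids), k))

def audit(paper_tallies, electronic_tallies, selected):
    """Compare voter-verified paper tallies against electronic tallies for
    the selected precincts; return the precincts that disagree."""
    return [p for p in selected if paper_tallies[p] != electronic_tallies[p]]

paper = {"P1": {"A": 100, "B": 80}, "P2": {"A": 50, "B": 60}, "P3": {"A": 70, "B": 70}}
electronic = {"P1": {"A": 100, "B": 80}, "P2": {"A": 55, "B": 55}, "P3": {"A": 70, "B": 70}}
chosen = select_precincts(paper.keys(), 1.0, "2-4-6-1-3")  # audit everything in this toy example
print(audit(paper, electronic, chosen))  # the P2 mismatch should surface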

The report is long, but I think it’s worth reading. If you’re short on time, though, at least read the Executive Summary.

The report has generated some press. Unfortunately, the news articles recycle some of the lame points that Diebold continues to make in the face of this kind of analysis:

Voting machine vendors have dismissed many of the concerns, saying they are theoretical and do not reflect the real-life experience of running elections, such as how machines are kept in a secure environment.

“It just isn’t the piece of equipment,” said David Bear, a spokesman for Diebold Election Systems, one of the country’s largest vendors. “It’s all the elements of an election environment that make for a secure election.”

“This report is based on speculation rather than an examination of the record. To date, voting systems have not been successfully attacked in a live election,” said Bob Cohen, a spokesman for the Election Technology Council, a voting machine vendors’ trade group. “The purported vulnerabilities presented in this study, while interesting in theory, would be extremely difficult to exploit.”

I wish The Washington Post had found someone to point out that there have been many, many irregularities with electronic voting machines over the years, and that the lack of convincing evidence of fraud is exactly the problem with these no-audit-possible systems. Or that the “it’s all theoretical” argument is the same one software vendors once used to discredit security vulnerabilities before the full-disclosure movement forced them to admit that their software had problems.

Posted on July 5, 2006 at 6:12 AM

Getting a Personal Unlock Code for Your O2 Cell Phone

O2 is a UK cell phone network. The company gives you the option of setting up a PIN on your phone. The idea is that if someone steals your phone, they can’t make calls. If they type the PIN incorrectly three times, the phone is blocked. To deal with the problems of phone owners mistyping their PIN—or forgetting it—they can contact O2 and get a Personal Unlock Code (PUK). Presumably, the operator goes through some authentication steps to ensure that the person calling is actually the legitimate owner of the phone.

So far, so good.

But O2 has decided to automate the PUK process. Now anyone on the Internet can visit this website, type in a valid mobile telephone number, and get a valid PUK to reset the PIN—without any authentication whatsoever.

Oops.
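A toy model makes the failure obvious: once anyone who knows the phone number can fetch the PUK, the three-strikes lockout protects nothing. The class names and flow here are my illustration, not O2's actual implementation:

```python
class Sim:
    """Toy model of a SIM card: a PIN, a three-strike lockout, a PUK reset."""
    def __init__(self, pin, puk):
        self._pin, self._puk = pin, puk
        self.failed = 0
        self.blocked = False

    def try_pin(self, guess):
        if self.blocked:
            return False
        if guess == self._pin:
            self.failed = 0
            return True
        self.failed += 1
        if self.failed >= 3:
            self.blocked = True  # phone now demands the PUK
        return False

    def reset_with_puk(self, puk, new_pin):
        if puk == self._puk:
            self._pin, self.failed, self.blocked = new_pin, 0, False
            return True
        return False

def puk_lookup(phone_number, directory):
    """The automated service as described above: hand back the PUK given
    only the phone number, with no authentication of the requester."""
    return directory[phone_number]

directory = {"+447700900123": "76543210"}   # carrier's PUK records (illustrative)
sim = Sim(pin="1234", puk="76543210")
for guess in ("0000", "1111", "2222"):      # a thief burns the three attempts
    sim.try_pin(guess)
assert sim.blocked
# The "protection" evaporates: look up the PUK by number and pick a new PIN.
sim.reset_with_puk(puk_lookup("+447700900123", directory), "9999")
print(sim.try_pin("9999"))  # prints True: thief is in
```

The lockout is only as strong as the authentication in front of the PUK, which in this case is none at all.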

EDITED TO ADD (7/4): A representative from O2 sent me the following:

“Yes, it does seem there is a security risk by O2 supplying such a service, but in fact we believe this risk is very small. The risk is when a customer’s phone is lost or stolen. There are two scenarios in that event:

“Scenario 1 – The phone is powered off. A PIN number would be required at next power on. Although the PUK code will indeed allow you to reset the PIN, you need to know the telephone number of the SIM in order to get it – there is no way to determine the telephone number from the SIM or handset itself. Should the telephone number be known the risk is then same as scenario 2.

“Scenario 2 – The phone remains powered on: Here, the thief can use the phone in any case without having to acquire PUK.

“In both scenarios we have taken the view that the principle security measure is for the customer to report the loss/theft as quickly as possible, so that we can remotely disable both the SIM and also the handset (so that it cannot be used with any other SIM).”

Posted on July 3, 2006 at 2:26 PM

Load ActiveX Controls on Vista Without Administrator Privileges

This seems like a bad idea to me:

Microsoft is adding a brand-new feature to Windows Vista to allow businesses to load ActiveX controls on systems running without admin privileges.

The new feature, called ActiveX Installer Service, will be fitted into the next public release of Vista to provide a way for enterprises to cope with the UAC (User Account Control) security mechanism.

UAC, formerly known as LUA (Limited User Account), is enabled by default in Vista to separate Standard User privileges from those that require admin rights to harden the operating system against malware and malicious hacker attacks.

However, because UAC will block the installation of ActiveX controls on Standard User systems, enterprise applications that use the technology will encounter breakages. ActiveX controls are objects used to enhance a user’s interaction with an application.

Posted on July 3, 2006 at 8:31 AM

Hacking Computers Over USB

I’ve previously written about the risks of small portable computing devices; how more and more data can be stored on them, and then lost or stolen. But there’s another risk: if an attacker can convince you to plug his USB device into your computer, he can take it over.

Plug an iPod or USB stick into a PC running Windows and the device can literally take over the machine and search for confidential documents, copy them back to the iPod or USB’s internal storage, and hide them as “deleted” files. Alternatively, the device can simply plant spyware, or even compromise the operating system. Two features that make this possible are the Windows AutoRun facility and the ability of peripherals to use something called direct memory access (DMA). The first attack vector you can and should plug; the second vector is the result of a design flaw that’s likely to be with us for many years to come.

The article has the details, but basically you can configure a file on your USB device to automatically run when it’s plugged into a computer. That file can, of course, do anything you want it to.
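The mechanism is an autorun.inf file at the root of the volume. A minimal example of the kind of file an attacker would plant (the payload and icon names are hypothetical); on Windows configurations where AutoRun is enabled for the drive type, the listed program runs as soon as the volume is mounted:

```ini
[autorun]
; launched automatically on insertion when AutoRun is enabled
open=payload.exe
; also hijacks the drive's double-click "Open" action in Explorer
shell\open\command=payload.exe
; innocent-looking icon and label to invite a click
icon=innocent.ico
label=Free Music
```

Even with automatic execution disabled, the shell\open\command line means a user who double-clicks the drive icon can still launch the payload.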

Recently I’ve been seeing more and more written about this attack. The Spring 2006 issue of 2600 Magazine, for example, contains a short article called “iPod Sneakiness” (unfortunately, not online). The author suggests that you can innocently ask someone at an Internet cafe if you can plug your iPod into his computer to power it up—and then steal his passwords and critical files.

And here’s an article about someone who used this trick in a penetration test:

We figured we would try something different by baiting the same employees that were on high alert. We gathered all the worthless vendor giveaway thumb drives collected over the years and imprinted them with our own special piece of software. I had one of my guys write a Trojan that, when run, would collect passwords, logins and machine-specific information from the user’s computer, and then email the findings back to us.

The next hurdle we had was getting the USB drives in the hands of the credit union’s internal users. I made my way to the credit union at about 6 a.m. to make sure no employees saw us. I then proceeded to scatter the drives in the parking lot, smoking areas, and other areas employees frequented.

Once I seeded the USB drives, I decided to grab some coffee and watch the employees show up for work. Surveillance of the facility was worth the time involved. It was really amusing to watch the reaction of the employees who found a USB drive. You know they plugged them into their computers the minute they got to their desks.

I immediately called my guy that wrote the Trojan and asked if anything was received at his end. Slowly but surely info was being mailed back to him. I would have loved to be on the inside of the building watching as people started plugging the USB drives in, scouring through the planted image files, then unknowingly running our piece of software.

There is a defense. From the first article:

AutoRun is just a bad idea. People putting CD-ROMs or USB drives into their computers usually want to see what’s on the media, not have programs automatically run. Fortunately you can turn AutoRun off. A simple manual approach is to hold down the “Shift” key when a disk or USB storage device is inserted into the computer. A better way is to disable the feature entirely by editing the Windows Registry. There are many instructions for doing this online (just search for “disable autorun”) or you can download and use Microsoft’s TweakUI program, which is part of the Windows XP PowerToys download. With Windows XP you can also disable AutoRun for CDs by right-clicking on the CD drive icon in the Windows explorer, choosing the AutoPlay tab, and then selecting “Take no action” for each kind of disk that’s listed. Unfortunately, disabling AutoPlay for CDs won’t always disable AutoPlay for USB devices, so the registry hack is the safest course of action.
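For reference, the registry change the article alludes to is the NoDriveTypeAutoRun policy value; the common advice of the era was to set it to 0xFF, which disables AutoRun for every drive type. A .reg fragment to that effect (apply at your own risk; the HKLM hive covers all users, while the same path under HKEY_CURRENT_USER affects only the current one):

```ini
Windows Registry Editor Version 5.00

; 0xFF disables AutoRun for all drive types
; (removable, fixed, network, CD-ROM, RAM disk, and unknown)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```

Each bit in the value masks AutoRun for one drive type, which is why 0xFF (all bits set) is the blanket off switch.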

In the 1990s, the Macintosh operating system had this feature, which was removed after a virus made use of it in 1998. Microsoft needs to remove this feature as well.

EDITED TO ADD (6/12): In the penetration test, they didn’t use AutoRun.

Posted on June 8, 2006 at 1:34 PM

Dangers of Reporting a Computer Vulnerability

This essay makes the case that there is no way to safely report a computer vulnerability.

The first reason is that whenever you do something “unnecessary,” such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes it very uninteresting to report any problems.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.

Interesting essay, and interesting comments. And here’s an article on the essay.

Remember, full disclosure is the best tool we have to improve security. It’s an old argument, and I wrote about it way back in 2001. If people can’t report security vulnerabilities, then vendors won’t fix them.

EDITED TO ADD (5/26): Robert Lemos on “Ethics and the Eric McCarty Case.”

Posted on May 26, 2006 at 7:35 AM

Diebold Doesn't Get It

This quote sums up nicely why Diebold should not be trusted to secure election machines:

David Bear, a spokesman for Diebold Election Systems, said the potential risk existed because the company’s technicians had intentionally built the machines in such a way that election officials would be able to update their systems in years ahead.

“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” he said. “I don’t believe these evil elections people exist.”

If you can’t get the threat model right, you can’t hope to secure the system.

Posted on May 22, 2006 at 3:22 PM

Major Vulnerability Found in Diebold Election Machines

This is a big deal:

Elections officials in several states are scrambling to understand and limit the risk from a “dangerous” security hole found in Diebold Election Systems Inc.’s ATM-like touch-screen voting machines.

The hole is considered more worrisome than most security problems discovered on modern voting machines, such as weak encryption, easily pickable locks and use of the same, weak password nationwide.

Armed with a little basic knowledge of Diebold voting systems and a standard component available at any computer store, someone with a minute or two of access to a Diebold touch screen could load virtually any software into the machine and disable it, redistribute votes or alter its performance in myriad ways.

“This one is worse than any of the others I’ve seen. It’s more fundamental,” said Douglas Jones, a University of Iowa computer scientist and veteran voting-system examiner for the state of Iowa.

“In the other ones, we’ve been arguing about the security of the locks on the front door,” Jones said. “Now we find that there’s no back door. This is the kind of thing where if the states don’t get out in front of the hackers, there’s a real threat.”

This newspaper is withholding some details of the vulnerability at the request of several elections officials and scientists, partly because exploiting it is so simple and the tools for doing so are widely available.

[…]

Scientists said Diebold appeared to have opened the hole by making it as easy as possible to upgrade the software inside its machines. The result, said Iowa’s Jones, is a violation of federal voting system rules.

“All of us who have heard the technical details of this are really shocked. It defies reason that anyone who works with security would tolerate this design,” he said.

The immediate solution to this problem isn’t a patch. What that article refers to is election officials ensuring that they are running the “trusted” build of the software, built at the federal labs and stored at the NSRL, just in case someone installed something bad in the meantime.

This article compares the security of electronic voting machines with the security of electronic slot machines. (My essay on the security of elections and voting machines.)

EDITED TO ADD (5/11): The redacted report is available.

Posted on May 11, 2006 at 1:08 PM

