Entries Tagged "vulnerabilities"

Hacking Security Cameras

Clever:

If you’ve seen a Hollywood caper movie in the last 20 years you know the old video-camera-spoofing trick. That’s where the criminal mastermind taps into a surveillance camera system and substitutes his own video stream, leaving hapless security guards watching an endless loop of absolutely-nothing-happening while the bank robber empties the vault.

Now white-hat hackers have demonstrated a technique that neatly replicates that old standby.

Amir Azam and Adrian Pastor, researchers at London-based security firm ProCheckUp, discovered that they could redirect which video file is played back by an AXIS 2100 surveillance camera, a common industrial security camera whose web interface allows guards to monitor a building from anywhere in the world.

Posted on October 8, 2007 at 6:39 AM

Microsoft Updates Both XP and Vista Without User Permission or Notification

The details are still fuzzy, but if this is true, it’s a huge deal.

Not that Microsoft can do this; that’s just stupid company stuff. But what’s to stop anyone else from using Microsoft’s stealth remote install capability to put anything onto anyone’s computer? How long before some smart hacker exploits this, and then writes a program that will allow all the dumb hackers to do it?

When you build a capability like this into your system, you decrease your overall security.

Posted on September 17, 2007 at 6:12 AM

"Cyber Crime Toolkits" Hit the News

On the BBC website:

“They are starting to pop up left and right,” said Tim Eades from security company Sana, of the sites offering downloadable hacking tools. “It’s the classic verticalisation of a market as it starts to mature.”

Malicious hackers had evolved over the last few years, he said, and were now selling the tools they used to use to the growing numbers of fledgling cyber thieves.

Mr Eades said some hacking groups offer boutique virus writing services that produce malicious programs that security software will not spot. Individual malicious programs cost up to £17 (25 euros), he said.

At the top end of the scale, said Mr Eades, were tools like the notorious MPack which costs up to £500.

The regular updates for the software ensure it uses the latest vulnerabilities to help criminals hijack PCs via booby-trapped webpages. It also includes a statistical package that lets owners know how successful their attack has been and where victims are based.

In one sense, there’s nothing new here. There have been rootkits and virus construction kits available on the Internet for years. The very definition of a “script kiddie” is someone who uses these tools without really understanding them. What is new is the market: these new tools aren’t for wannabe hackers, they’re for criminals. And with the new market comes a for-profit business model.

Posted on September 5, 2007 at 7:10 AM

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Florida E-Voting Study

Florida just recently released another study of the Diebold voting machines. The study, conducted by real security researchers (as with the California study) and not posers, examined v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature-verification step has a problem. It computes H = signature**3 mod N, and then compares only 160 bits of H with the SHA-1 hash of the message. This is a natural way to implement RSA signatures if you just read a security textbook, but it is insecure: the report demonstrates a roughly 250-line Java program that forges RSA signatures over (basically) arbitrary messages of the attacker’s choosing. (A sketch of the flawed check follows this list.)
  • Section 3.10.3. The original Hopkins report noted the lack of cryptography protecting network (or dialup) communications between a TSx voting machine and the back-end GEMS server. Diebold apparently tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch its SSL implementation. This new report looks at the patched version and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights (a sketch of the problem follows this list):

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
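
Here is a minimal sketch of the broken check described in Section 3.5 (my illustration, not the report’s code; which 160 bits get compared is an assumption, low-order bits here, and the flaw is the same either way):

```python
# Flawed RSA (e=3) signature check: cube the signature mod N, then compare
# only 160 bits of the result against the SHA-1 digest of the message.
# The remaining bits of H are never checked, which is the freedom a forger
# exploits to construct valid-looking signatures.
import hashlib

def flawed_verify(message: bytes, signature: int, n: int) -> bool:
    h = pow(signature, 3, n)                    # H = signature**3 mod N
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big")
    return h & ((1 << 160) - 1) == digest       # BUG: checks 160 bits, not all of H
```

A correct implementation verifies that the entire padded value matches, not a 160-bit slice of it.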

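Rubin’s point about serial-number-derived keys is structural, and a hypothetical sketch makes it concrete (the post doesn’t reproduce Diebold’s actual derivation): any key computed deterministically from a serial number printed on the machine is effectively public.

```python
# Hypothetical derivation of a per-machine "secret" key from its serial
# number (the real scheme isn't shown in this post). Since serials are
# visible on the hardware, anyone can recompute every key.
import hashlib

def derive_machine_key(serial: str) -> bytes:
    return hashlib.sha1(("machine-key:" + serial).encode()).digest()

print(derive_machine_key("TSx-0042").hex())   # "TSx-0042" is a made-up serial
```
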
Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

More on the California Voting Machine Review

This is a follow-on to this post. What’s new is that the source code reviews are now available.

I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:

We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.

I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:

The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.

[…]

The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.

Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:

Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.

It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule—one month simply isn’t reasonable—and egregious foot-dragging by vendors in providing needed materials.

Next time, we might not be so lucky. If one vendor sees he can avoid embarrassment by stalling delivery of his most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived vigorous testing and therefore is secure. Given that refusing cooperation incurred no penalty in this series of tests, we can expect vendors to work that angle more energetically in the future.

The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.

We need real security evaluations, not feel-good fake tests. I wish this were more the former than the latter.

EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.

She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)

To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.

Another article.

EDITED TO ADD (8/4): The Diebold software is pretty bad.

EDITED TO ADD (8/5): Ed Felten comments:

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
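
Felten’s ECB point is easy to demonstrate. Here is a toy sketch (mine, not from the reports; it assumes the third-party Python cryptography package): identical plaintext blocks encrypt to identical ciphertext blocks, so patterns in the data survive encryption.

```python
# ECB encrypts each 16-byte block independently, so equal plaintext blocks
# yield equal ciphertext blocks; an observer learns which records match
# without ever recovering the key.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
block_a = b"CANDIDATE_A_0001"                 # exactly one AES block (16 bytes)
block_b = b"CANDIDATE_B_0001"
plaintext = block_a + block_a + block_b + block_a

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(blocks[0] == blocks[1] == blocks[3])    # True: the repetition leaks
print(blocks[0] == blocks[2])                 # False
```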

And Avi Rubin comments:

As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.

He’s right, of course.

Posted on August 3, 2007 at 12:55 PM

California Voting Machine Audit Results

The state of California conducted a security review of their electronic voting machines earlier this year. This was a serious review, with real security researchers getting access to the source code. The report was issued last week, and the researchers were able to compromise all three machines—by Diebold Election Systems, Hart Intercivic, and Sequoia Voting Systems—multiple ways. (They said they could probably find more ways, if they had more time.)

Final report and details about the audit here. Good blog entries here and here. We don’t know what California will do now.

This is no surprise, really. The notion that electronic voting machines were somehow more secure than every other computer system ever built was ridiculous from the start. And the claims by machine manufacturers that releasing their source code would hurt the security of the machines were—like all these sorts of claims—really attempts to prevent embarrassment to the company.

Not everyone gets this, unfortunately, including some of the people involved in running elections:

Letting the hackers have the source codes, operating manuals and unlimited access to the voting machines “is like giving a burglar the keys to your house,” said Steve Weir, clerk-recorder of Contra Costa County and head of the state Association of Clerks and Election Officials.

No. It’s like giving burglars the schematics, installation manuals, and unlimited access to your front door lock. If your lock is good, it will survive the burglar having that information. If your lock isn’t good, the burglar will get in.

I have two essays on this, from 2004: “Why Election Technology is Hard,” and “Electronic Voting Machines.” This essay—”Voting and Technology“—was written in 2000.

EDITED TO ADD (7/31): Another article.

EDITED TO ADD (8/2): Good commentary.

Posted on July 31, 2007 at 10:57 AM

Vulnerabilities in the DHS Networks

Wired.com has the story:

Congress asked Homeland Security’s chief information officer, Scott Charbo, who has a master’s degree in plant science, to account for more than 800 self-reported vulnerabilities over the last two years and for recently uncovered systemic security problems in US-VISIT, the massive computer network intended to screen and collect the fingerprints and photos of visitors to the United States.

Charbo’s main tactic before the House Homeland Security subcommittee Wednesday was to downplay the seriousness of the threats and to characterize the security investigation of US-VISIT as simultaneously old news and news so new he hasn’t had time to meet with the investigators.

“Key systems operated by Customs and Border Patrol were riddled by control weaknesses,” Gregory Wilshusen, the Government Accountability Office’s director of information security issues, told the committee. Poor security practices and the lack of an authoritative internal map of how various systems interconnect increase the risk that contractors, employees, or would-be hackers have penetrated, or could penetrate and disrupt, key DHS computer systems, Wilshusen and Keith Rhodes, director of the GAO’s Center for Technology and Engineering, told the committee.

Posted on June 22, 2007 at 10:37 AM
