Entries Tagged "keys"


Kaspersky Labs Trying to Crack 1024-bit RSA

I can’t figure this story out. Kaspersky Lab is launching an international distributed effort to crack a 1024-bit RSA key used by the Gpcode Virus. From their website:

We estimate it would take around 15 million modern computers, running for about a year, to crack such a key.

What are they smoking at Kaspersky? We’ve never factored a 1024-bit number—at least, not outside any secret government agency—and it’s likely to require a lot more than 15 million computer years of work. The current factoring record is a 1023-bit number, but it was a special number that’s easier to factor than a product-of-two-primes number used in RSA. Breaking that Gpcode key will take a lot more mathematical prowess than you can reasonably expect to find by asking nicely on the Internet. You’ve got to understand the current best mathematical and computational optimizations of the Number Field Sieve, and cleverly distribute the parts that can be distributed. You can’t just post the products and hope for the best.
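For a sense of scale, here is a back-of-the-envelope sketch (my own illustration, not Kaspersky's math) comparing the asymptotic cost of running the general number field sieve on a 1024-bit modulus against RSA-200, the 663-bit general-number factoring record set in 2005:

```python
import math

# Back-of-the-envelope GNFS cost in L-notation (asymptotic; ignores the large
# constant factors and the memory demands of sieving and linear algebra):
#   L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
def gnfs_ops(bits):
    ln_n = bits * math.log(2)          # natural log of an n-bit number
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

# RSA-200 (663 bits) is the general-number factoring record as of this post.
ratio = gnfs_ops(1024) / gnfs_ops(663)
print(f"1024-bit vs 663-bit GNFS cost: roughly 2^{math.log2(ratio):.0f}x harder")
```

Even by this crude estimate, the 1024-bit job is on the order of tens of thousands of times harder than a record that itself took a serious distributed effort by experts.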

Is this just a way for Kaspersky to generate itself some nice press, or are they confused in Moscow?

EDITED TO ADD (6/15): Kaspersky now says:

The company clarified, however, that it’s more interested in getting help in finding flaws in the encryption implementation.

“We are not trying to crack the key,” Roel Schouwenberg, senior antivirus researcher with Kaspersky Lab, told SecurityFocus. “We want to see collectively whether there are implementation errors, so we can do what we did with previous versions and find a mistake to help us find the key.”

Schouwenberg agrees that, if no implementation flaw is found, searching for the decryption key using brute-force computing power is unlikely to work.

“Clarified” is overly kind. There was nothing confusing about Kaspersky’s post that needed clarification, and what they’re saying now completely contradicts what they did post. Seems to me like they’re trying to pretend it never happened.

EDITED TO ADD (6/30): A Kaspersky virus analyst comments on this entry.

Posted on June 12, 2008 at 12:30 PM

KeeLoq Still Broken

That’s the keyless entry system used by Chrysler, Daewoo, Fiat, General Motors, Honda, Toyota, Lexus, Volvo, Volkswagen, Jaguar, and probably others. It’s broken:

The KeeLoq encryption algorithm is widely used for security relevant applications, e.g., in the form of passive Radio Frequency Identification (RFID) transponders for car immobilizers and in various access control and Remote Keyless Entry (RKE) systems, e.g., for opening car doors and garage doors.

We present the first successful DPA (Differential Power Analysis) attacks on numerous commercially available products employing KeeLoq. These so-called side-channel attacks are based on measuring and evaluating the power consumption of a KeeLoq device during its operation. Using our techniques, an attacker can reveal not only the secret key of remote controls in less than one hour, but also the manufacturer key of the corresponding receivers in less than one day. Knowing the manufacturer key allows for creating an arbitrary number of valid new keys and generating new remote controls.

We further propose a new eavesdropping attack for which monitoring of two ciphertexts, sent from a remote control employing KeeLoq code hopping (car key, garage door opener, etc.), is sufficient to recover the device key of the remote control. Hence, using the methods described by us, an attacker can clone a remote control from a distance and gain access to a target that is protected by the claimed to be “highly secure” KeeLoq algorithm.

We consider our attacks to be of serious practical interest, as commercial KeeLoq access control systems can be overcome with modest effort.
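For readers unfamiliar with DPA, here is a toy correlation-power-analysis sketch: a generic illustration of the technique, not the KeeLoq attack itself. It assumes a hypothetical device that leaks the Hamming weight of an S-box output, plus measurement noise:

```python
import random

# Toy correlation power analysis (CPA), a DPA variant. Assumed leakage model:
# the device's power draw reveals the Hamming weight of sbox(plaintext ^ key)
# plus Gaussian noise. This is illustrative only, not the KeeLoq attack.
random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)                      # some fixed, publicly known S-box

def hw(x):                                # Hamming weight of a byte
    return bin(x).count("1")

SECRET_KEY = 0x5A                         # what the attacker wants to recover
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 1.0) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# For every key guess, predict the leak and correlate it with the measured
# traces; the correct guess produces by far the strongest correlation.
best = max(range(256),
           key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                     traces)))
print(hex(best))
```

The real attacks need oscilloscope traces of actual power consumption, careful trace alignment, and a model of KeeLoq's internals, but the statistical core is this simple.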

I’ve written about this before, but the above link has much better data.

EDITED TO ADD (4/4): A good article.

Posted on April 4, 2008 at 6:03 AM

Animal Rights Activists Forced to Hand Over Encryption Keys

In the UK:

In early November about 30 animal rights activists are understood to have received letters from the Crown Prosecution Service in Hampshire inviting them to provide passwords that will decrypt material held on seized computers.

The letter is the first stage of a process set out under RIPA which governs how the authorities handle requests to examine encrypted material.

Once a request has been issued the authorities can then issue what is known as a Section 49 notice demanding that a person turn the data into an “intelligible” form or, under Section 51 hand over keys.

Although much of RIPA came into force many years ago, the part governing the handing over of keys only passed into law on 1 October 2007. This is why the CPS is only now asking for access to files on the seized machines.

Alongside a S49 notice, the authorities can also issue a Section 54 notice that prevents a person revealing that they are subject to this part of RIPA.

We don’t actually know whether the activists have handed the police their encryption keys yet. More about the law here.

If you remember, this was sold to the public as essential for fighting terrorism. It’s already being misused.

Posted on November 28, 2007 at 12:12 PM

British Nuclear Security Kind of Slipshod

No two-person control or complicated safety features: until 1998, you could arm British nukes with a bicycle lock key.

To arm the weapons you just open a panel held by two captive screws—like a battery cover on a radio—using a thumbnail or a coin.

Inside are the arming switch and a series of dials which you can turn with an Allen key to select high yield or low yield, air burst or groundburst and other parameters.

The Bomb is actually armed by inserting a bicycle lock key into the arming switch and turning it through 90 degrees. There is no code which needs to be entered or dual key system to prevent a rogue individual from arming the Bomb.

Certainly most of the security was procedural. But still….

Posted on November 21, 2007 at 12:50 PM

UK Police Can Now Demand Encryption Keys

Under a new law that went into effect this month, it is now a crime to refuse to turn a decryption key over to the police.

I’m not sure of the point of this law. Certainly it will have the effect of spooking businesses, who now have to worry about the police demanding their encryption keys and exposing their entire operations.

Cambridge University security expert Richard Clayton said in May of 2006 that such laws would only encourage businesses to house their cryptography operations out of the reach of UK investigators, potentially harming the country’s economy. “The controversy here [lies in] seizing keys, not in forcing people to decrypt. The power to seize encryption keys is spooking big business,” Clayton said.

“The notion that international bankers would be wary of bringing master keys into UK if they could be seized as part of legitimate police operations, or by a corrupt chief constable, has quite a lot of traction,” he added. “With the appropriate paperwork, keys can be seized. If you’re an international banker you’ll plonk your headquarters in Zurich.”

But if you’re guilty of something that can only be proved by the decrypted data, you might be better off refusing to divulge the key (and facing the maximum five-year penalty the statute provides) instead of being convicted of whatever more serious charge you’re actually guilty of.

I think this is just another skirmish in the “war on encryption” that has been going on for the past fifteen years. (Anyone remember the Clipper chip?) The police have long maintained that encryption is an insurmountable obstacle to law and order:

The Home Office has steadfastly proclaimed that the law is aimed at catching terrorists, pedophiles, and hardened criminals—all parties that the UK government contends are rather adept at using encryption to cover up their activities.

We heard the same thing from FBI Director Louis Freeh in 1993. I called these threats “The Four Horsemen of the Information Apocalypse”—terrorists, drug dealers, kidnappers, and child pornographers—and they have been used to justify all sorts of new police powers.

Posted on October 11, 2007 at 6:40 AM

Florida E-Voting Study

Florida just recently released another study of the Diebold voting machines. They—real security researchers, as in the California study, not posers—studied v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature verification step has a problem: it computes H = signature**3 mod N, and then compares _only 160 bits of H_ with the SHA1 hash of the message. This is a natural way to implement RSA signatures if you have just read a security textbook, but it is insecure—the report demonstrates a 250-line Java program that forges RSA signatures over (basically) arbitrary messages of the attacker’s choosing.
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSX voting machine and the back-end GEMs server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch their SSL implementation. This new report looks at the patched version, and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights.

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
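The Section 3.5 flaw can be sketched in a few lines. Assuming, as the report describes, a verifier that checks only the low-order 160 bits of signature**3 mod N, an attacker can take a cube root of the hash modulo 2^160; the root is so small that cubing it never wraps around N, so the truncated check passes without any private key:

```python
import hashlib

# Hypothetical sketch of the truncated-comparison forgery. Strategy: find s
# with s**3 = hash (mod 2**160) by bitwise Hensel lifting. Since s < 2**160,
# s**3 < 2**480 is far below a 1024-bit N, so "mod N" never reduces anything
# and the low 160 bits of s**3 equal the hash exactly.

def cube_root_mod_2k(h, k):
    """Cube root of an odd h modulo 2**k. Cubing is a bijection on odd
    residues mod 2**k, so the root exists and can be lifted bit by bit."""
    s = 1
    for bit in range(1, k):
        if pow(s, 3, 1 << (bit + 1)) != h % (1 << (bit + 1)):
            s |= 1 << bit
    return s

message = b"forged ballot record"      # hypothetical message of our choosing
h = int.from_bytes(hashlib.sha1(message).digest(), "big")
h |= 1   # the lifting needs an odd hash; in practice, vary the message instead

N = (1 << 1023) | 1   # stand-in 1024-bit modulus; its factorization is irrelevant

s = cube_root_mod_2k(h, 160)
assert pow(s, 3, N) % (1 << 160) == h  # the broken 160-bit comparison passes
```

Comparing the full padded value of H, as PKCS#1 requires, closes this hole; this is exactly the kind of bug a textbook implementation misses.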

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

Federal Agents Using Spyware

U.S. drug enforcement agents use key loggers to bypass both PGP and Hushmail encryption:

An agent with the Drug Enforcement Administration persuaded a federal judge to authorize him to sneak into an Escondido, Calif., office believed to be a front for manufacturing the drug MDMA, or Ecstasy. The DEA received permission to copy the hard drives’ contents and inject a keystroke logger into the computers.

That was necessary, according to DEA Agent Greg Coffey, because the suspects were using PGP and the encrypted Web e-mail service Hushmail.com. Coffey asserted that the DEA needed “real-time and meaningful access” to “monitor the keystrokes” for PGP and Hushmail passphrases.

And the FBI used spyware to monitor someone suspected of making bomb threats:

In an affidavit seeking a search warrant to use the software, filed last month in U.S. District Court in the Western District of Washington, FBI agent Norman Sanders describes the software as a “computer and internet protocol address verifier,” or CIPAV.

The full capabilities of the FBI’s “computer and internet protocol address verifier” are closely guarded secrets, but here’s some of the data the malware collects from a computer immediately after infiltrating it, according to a bureau affidavit acquired by Wired News.

  • IP address
  • MAC address of ethernet cards
  • A list of open TCP and UDP ports
  • A list of running programs
  • The operating system type, version and serial number
  • The default internet browser and version
  • The registered user of the operating system, and registered company name, if any
  • The current logged-in user name
  • The last visited URL

Once that data is gathered, the CIPAV begins secretly monitoring the computer’s internet use, logging every IP address to which the machine connects.

All that information is sent over the internet to an FBI computer in Virginia, likely located at the FBI’s technical laboratory in Quantico.

Sanders wrote that the spyware program gathers a wide range of information, including the computer’s IP address; MAC address; open ports; a list of running programs; the operating system type, version and serial number; preferred internet browser and version; the computer’s registered owner and registered company name; the current logged-in user name and the last-visited URL.

The CIPAV then settles into a silent “pen register” mode, in which it lurks on the target computer and monitors its internet use, logging the IP address of every computer to which the machine connects for up to 60 days.

Another article.

I’ve been saying this for a while: the easiest way to get at someone’s communications is not to intercept them in transit, but to access them on the sender’s or recipient’s computers.

EDITED TO ADD (7/20): I should add that the police got a warrant in both cases. This is not a story about abuse of police power or surveillance without a warrant. This is a story about how the police conduct electronic surveillance, and how they bypass security technologies.

Posted on July 20, 2007 at 6:52 AM

Triggering Bombs by Remote Key Entry Devices

I regularly read articles about terrorists using cell phones to trigger bombs. The Thai government seems to be particularly worried about this; two years ago I blogged about a particularly bizarre movie-plot threat along these lines. And last year I blogged about the cell phone network being restricted after the Mumbai terrorist bombings.

Efforts to restrict cell phone usage because of this threat are ridiculous. It’s a perfect example of a “movie-plot threat”: by focusing on the specifics of a particular tactic rather than the broad threat, we simply force the bad guys to modify their tactics. Lots of money spent: no security gained.

And that’s exactly what happened in Thailand:

Authorities said yesterday that police are looking for 40 Daihatsu keyless remote entry devices, some of which they believe were used to set off recent explosions in the deep South.

Militants who have in the past used mobile phones to set off bombs are being forced to change their detonation methods as security forces continue to block mobile phone signals while carrying out security missions, preventing them from carrying out their attacks.

[…]

Police found one of the Daihatsu keys near a blast site in Yala on April 13. It is thought the bomber dropped it while fleeing the scene. The key had been modified so its signal covered a longer distance, police said.

Posted on April 26, 2007 at 1:28 PM

Dept of Homeland Security Wants DNSSEC Keys

This is a big deal:

The shortcomings of the present DNS have been known for years but difficulties in devising a system that offers backward compatibility while scaling to millions of nodes on the net have slowed down the implementation of its successor, Domain Name System Security Extensions (DNSSEC). DNSSEC ensures that domain name requests are digitally signed and authenticated, a defence against forged DNS data, a product of attacks such as DNS cache poisoning used to trick surfers into visiting bogus websites that pose as the real thing.

Obtaining the master key for the DNS root zone would give US authorities the ability to track DNS Security Extensions (DNSSec) “all the way back to the servers that represent the name system’s root zone on the internet”.

Access to the “key-signing key” would give US authorities a supervisory role over DNS lookups, vital for functions ranging from email delivery to surfing the net. At a recent ICANN meeting in Lisbon, Bernard Turcotte, president of the Canadian Internet Registration Authority, said managers of country registries were concerned about the proposal to allow the US to control the master keys, giving it privileged control of internet resources, Heise reports.
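To see why the root key is so powerful, here is a toy chain-of-trust sketch. It is my own illustration: keyed hashes stand in for real public-key signatures, and none of DNSSEC's actual record formats or names are used:

```python
import hashlib

# Toy DNSSEC-style chain of trust. A keyed hash stands in for a public-key
# signature; ds_* play the role of signed DS records and rrsig of an RRSIG.
def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sign(key: bytes, data: bytes) -> str:   # toy "signature", not real crypto
    return h(key + b"|" + data)

root_key = b"root-ksk"            # trust anchor, configured in every resolver
org_key = b"org-zone-key"
example_key = b"example-org-zone-key"

# Each parent vouches for its child's key (the role of a signed DS record).
ds_org = sign(root_key, org_key)
ds_example = sign(org_key, example_key)

# The zone signs its own records (the role of an RRSIG).
record = b"example.org. A 192.0.2.1"
rrsig = sign(example_key, record)

def validate() -> bool:
    """Walk the chain from the configured trust anchor down to the record."""
    assert ds_org == sign(root_key, org_key)          # root vouches for .org
    assert ds_example == sign(org_key, example_key)   # .org vouches for child
    assert rrsig == sign(example_key, record)         # child signs the record
    return True

print(validate())
```

In this toy model, whoever holds the root key can vouch for a substitute key for any zone beneath it and the whole chain still validates, which is exactly why control of the real key-signing key is contentious.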

Another news report.

Posted on April 9, 2007 at 9:45 AM

Breaking WEP in Under a Minute

WEP (Wired Equivalent Privacy) was the protocol used to secure wireless networks. It’s known to be insecure and has been replaced by Wi-Fi Protected Access, but it’s still in use.

This paper, “Breaking 104 bit WEP in less than 60 seconds,” is the best attack against WEP to date:

Abstract:

We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using less than 40,000 frames with a success probability of 50%. In order to succeed in 95% of all cases, 85,000 packets are needed. The IV of these packets can be randomly chosen. This is an improvement in the number of required frames by more than an order of magnitude over the best known key-recovery attacks for WEP. On an IEEE 802.11g network, the number of frames required can be obtained by re-injection in less than a minute. The required computational effort is approximately 2^20 RC4 key setups, which on current desktop and laptop CPUs is negligible.
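The "2^20 RC4 key setups" in the abstract are cheap because one key setup is just a 256-step loop. A minimal sketch of RC4 itself, the stream cipher inside WEP (this is the cipher, not the attack):

```python
# Minimal RC4, the stream cipher inside WEP. The attack's unit of work,
# "an RC4 key setup," is just the 256-iteration KSA loop below, which is
# why about a million of them are negligible on a desktop CPU.
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA): one "key setup"
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit n keystream bytes
    out = []
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Classic published test vector: key "Key" yields keystream eb 9f 77 81 ...
print(rc4_keystream(b"Key", 9).hex())  # eb9f7781b734ca72a7
```

WEP's mistake is not RC4 itself but how it is keyed: the per-packet key is the public 24-bit IV prepended to the shared secret, and statistical biases in the KSA leak key bytes across many observed packets.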

Posted on April 4, 2007 at 12:46 PM
