Comments
Chris Jefferson • October 29, 2012 7:07 AM
Some more info at http://www.ps3devwiki.com/wiki/Boot_Order , and a better article (I would say) at http://www.eurogamer.net/articles/2011-01-08-deep-insecurity-article .
This article is actually more positive than it should be. As far as I can tell, this is the end of the road. The PS3 has a number of levels of encryption, and this cracks the last one which can be updated remotely.
I feel it’s worth giving Sony some respect here. While various flaws have been found in the encryption, plenty of people would have liked to see this cracked much earlier in the PS3’s life-cycle. Unlike the Wii and Xbox 360, which both fell to hackers very quickly, the PS3 remained secure until fairly late in its life-cycle, and while there have been hacks that allowed pirated games, requiring new firmware for new games has severely limited their usefulness.
foetus • October 29, 2012 7:16 AM
The key was hacked about a year ago; it’s just being publicly released now to keep Chinese outfits from selling PS3 piracy products based on it. The keyword here is sell.
Chris W • October 29, 2012 7:59 AM
@ Gweihir
Sony used public-key crypto, but with a flawed implementation that allowed someone, once capable of decrypting the signature block, to derive the private key. Basically, updates are signed and also encrypted. Once the hackers managed to decrypt an update (with the key available in the bootloader), they could attack the flawed signature scheme.
From what I understand, this process has been like peeling an onion, where each hacked layer could be counteracted by an update to a lower-level firmware.
If not for the flawed signature implementation, the PS3 would’ve remained ‘secure’ for its entire lifecycle, which is rather impressive.
I do wonder whether it now becomes possible to push malicious updates to all PS3s on the internet. It would probably require some combination of fake certificates and DNS poisoning, or replacing the on-disc update shipped with a major title. That could be scary.
moo • October 29, 2012 8:27 AM
Anybody surprised by this obviously doesn’t remember what happened back in 2010:
Some hackers who were pissed at Sony for removing their OtherOS functionality with a firmware update decided to try and break the PS3’s security, and it turned out that Sony had used the same “random” number in the signing process for signatures that should each have used a fresh one. A bit of algebra later, the hackers had Sony’s private keys…
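[Ed. note: the “bit of algebra” is simple enough to sketch. The following uses a textbook toy curve from cryptography courses (y² = x³ + 2x + 2 over F₁₇, generator order 19) — these are illustrative parameters, not Sony’s actual curve — to show how two ECDSA signatures made with the same nonce k leak both the nonce and the private key.]

```python
# Toy ECDSA nonce-reuse demo on a textbook curve (NOT secure, NOT Sony's parameters).
p, a, b = 17, 2, 2     # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)             # generator point
n = 19                 # order of G (prime)

def inv(x, m):
    return pow(x, -1, m)  # modular inverse (Python 3.8+)

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None  # point at infinity
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * inv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def scalar_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(z, d, k):
    r = scalar_mul(k, G)[0] % n
    s = inv(k, n) * (z + r * d) % n
    return r, s

d = 7           # "private key"
k = 5           # the SAME nonce, reused for both signatures (the bug)
z1, z2 = 10, 3  # hashes of two different messages
r, s1 = sign(z1, d, k)
_, s2 = sign(z2, d, k)

# s1 - s2 = k^-1 (z1 - z2) mod n, so the nonce falls out of one division:
k_rec = (z1 - z2) * inv(s1 - s2, n) % n
# Then s1 = k^-1 (z1 + r*d) gives the private key:
d_rec = (s1 * k_rec - z1) * inv(r, n) % n
print(k_rec, d_rec)  # recovers both the nonce and the private key
```

With a fresh random k per signature, s1 − s2 no longer isolates k and the attack collapses; with a constant k, two signatures anywhere in the wild are enough.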
Gweihir • October 29, 2012 9:18 AM
Thanks for the info!
Seems this is (once again) people without a basic clue about cryptography using crypto primitives and getting it badly wrong. What astounds me is that for something this important they do not have anybody who really understands cryptography, and apparently they are unable even to read and/or understand the preconditions under which the algorithm they used is secure.
Maybe somebody thought “random” can also mean the same fixed number every time; in that case, these people do not even understand basic statistics. If the precondition was “nonce”, however (which I suspect), then this is beyond stupid, as they apparently did not even look up the definition.
And no, I am not surprised. I am a bit surprised that they managed to stay unbroken for so long, but one of the referenced articles makes the good point that initially you could run Linux on the box and that decreased hacker interest considerably.
curtmack • October 29, 2012 9:55 AM
Sony’s updates were signed with ECDSA, but they used the same random number for each signature. In a way this is probably less obviously stupid than Nintendo’s strncmp bug, but it’s also much worse because the strncmp bug was fixable. Leaking the master key isn’t.
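[Ed. note: the strncmp bug curtmack mentions is the Wii’s signature check, which compared two raw SHA-1 hashes with C’s strncmp. Since strncmp stops at the first NUL byte, any forged content whose hash begins with 0x00 “matches” an expected hash that also begins with 0x00 — roughly a 1-in-256 brute force. A sketch of the flaw in Python (payload contents are made up for illustration):]

```python
import hashlib

def strncmp_like(a: bytes, b: bytes, n: int) -> int:
    # Mimics C strncmp: comparison stops at the first NUL byte, which is
    # fatal when the inputs are raw binary hashes rather than C strings.
    for i in range(n):
        if a[i] != b[i]:
            return a[i] - b[i]
        if a[i] == 0:  # NUL terminates the comparison early
            return 0
    return 0

# Expected hash recovered from a legitimate signature; here a stand-in
# value whose first byte happens to be zero.
expected = bytes.fromhex("00" + "aa" * 19)

# Attacker tweaks filler bytes until the forged content's SHA-1 also
# begins with 0x00 (about 1 success per 256 attempts).
for filler in range(100000):
    payload = b"forged content " + str(filler).encode()
    h = hashlib.sha1(payload).digest()
    if h[0] == 0:
        break

print(strncmp_like(h, expected, 20) == 0)  # True: the forgery "verifies"
```

The two hashes agree on exactly one byte, yet the comparison reports a match — which is why that bug, unlike a leaked signing key, could be patched by swapping in a constant-time memcmp.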
Nick P • October 29, 2012 11:11 AM
What makes the random number funnier is that Cell has an onboard TRNG.
http://www.ibm.com/developerworks/power/library/pa-cellsecurity/
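[Ed. note: given a hardware TRNG (or any OS CSPRNG), the fix for the reuse bug is simply drawing a fresh, uniformly random nonce in [1, n−1] for every signature — or deriving it deterministically per message, as RFC 6979 later standardized. A minimal sketch, using the P-256 group order as an example modulus:]

```python
import secrets

# Group order of NIST P-256, used here only as an example modulus.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def fresh_nonce(order: int) -> int:
    # A NEW uniformly random nonce in [1, order-1] for every signature;
    # reusing (or even slightly biasing) k leaks the private key.
    return secrets.randbelow(order - 1) + 1

k1, k2 = fresh_nonce(n), fresh_nonce(n)
# With a ~2^256 order, two nonces colliding is negligible by design.
```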
With that security architecture, there are numerous possible optimizations to a firmware load process that improve security. This includes both things that provably beat certain attack vectors & forms of obfuscation. For non-firmware trusted software functions, Green Hills’ port of their INTEGRITY platform is a viable option.
http://www.ghs.com/news/20120207_IBM_cell.html
I think their best option is to contract RobertT’s company to let him spend an hour or two on the PS4’s firmware protection. This will probably accomplish more than their designers have in the past few years. 😉
moo • October 29, 2012 12:18 PM
@Gweihir:
The main reason the PS3’s security stayed unbroken for the first few years while the 360, Wii, etc. were hacked was that Sony supported the OtherOS feature, which let anybody who wanted to run, e.g., Linux on their PS3. The real hacker types doing homebrew-style work just used that, and they didn’t have much incentive to try and break the system’s security. The people who just wanted to run pirated games had incentive, but not the necessary skills.
Then geohot started working on parts of it and Sony got nervous: http://en.wikipedia.org/wiki/George_Hotz#Hacking_the_PlayStation_3
They released a new model that didn’t have OtherOS support, and even went so far as to remove the OtherOS from older consoles in a firmware update. So players who had bought the console with the promise of having both (play the latest games, AND run linux if you want to) were now forced to choose either one or the other (update your firmware so the latest games will run, but no OtherOS — or keep the old firmware and run OtherOS, but you can’t run any of the newer crop of games). The hackers (the ones with real skills, not the kiddies) were now angry at Sony and had some real incentive to try and break the system, and it didn’t take long for them to crack it wide open. You may recall there were some high-profile breaches of the PlayStation Network in this time period, too — customer information was stolen, and Sony had to shut PlayStation Network down for weeks while they reviewed their security and tried to plug the holes that enabled the breach.
The moral of the story is: Don’t try to prevent people from using their own hardware however they want to. It only takes one angry hacker to break your stupid DRM and release the jailbreak to the masses, and your huge investment in the DRM scheme is nullified.
curtmack • October 29, 2012 1:14 PM
@moo
Or, as Extra Credits put it:
“Sony, a word to the wise: Do not piss off the kinds of people who install Linux on their PlayStations. You are wasting your time.”
RobertT • October 29, 2012 6:03 PM
@Nick P
“I think their best option is to contract RobertT’s company to let him spend an hour or two on the PS4’s firmware protection. This will probably accomplish more than their designers have in the past few years. ”
Thanks for the plug, but I’m not sure I’d be interested. The once-mighty Sony is now a shell of its former self, and the whole PlayStation concept seems somewhat outmoded to me. I’m not a gamer, so there is probably value at Sony that I can’t see, but it seems to me that this value is quickly diminishing.
To be honest, I have not read anything about this attack, but from what I can gather their updates require a mutual authentication step before proceeding. Fundamentally, this requires you to store a key on the PlayStation hardware, and unfortunately, if you put the key on the chip, then anyone with the right skills can recover it. There are ways to protect the authentication exchange (zero-knowledge protocols, etc.), but you are still left with the hard problem of storing information on the chip in such a way as to resist even the most skilled of hardware hackers. Doing this properly really comes down to a battle of wits between the security hardware designer and his nemesis, the skilled hardware hacker.
As we have discussed before, there are ways to protect the on-chip keys by storing fragments of the total key in various different places and requiring the system to run normally for the reconstruction to complete. I’m sure that Sony already does some basic hardware key obscuring; IBM certainly understands how to do this.
So I’d need to understand what failed in the current implementation before I could even think about improvements. Unfortunately, what really failed is often a very closely guarded secret (usually so that nobody is embarrassed by their own stupidity). Usually you need to pry the door open by defeating one layer of protection at a time, so knowing what failed first is always a good starting point.
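[Ed. note: the key-fragment idea RobertT describes can be illustrated with simple XOR shares — all but one fragment is random, and the key only exists transiently when every storage location contributes. This is a hypothetical sketch of the splitting arithmetic, not a description of any actual console’s scheme:]

```python
import secrets

def split_key(key: bytes, shares: int) -> list:
    # All shares but the last are uniformly random; the last is the key
    # XORed with all of them, so any subset short of the full set reveals
    # nothing about the key.
    parts = [secrets.token_bytes(len(key)) for _ in range(shares - 1)]
    last = key
    for part in parts:
        last = bytes(a ^ b for a, b in zip(last, part))
    return parts + [last]

def recombine(parts: list) -> bytes:
    # XOR of all fragments; the random shares cancel, leaving the key.
    out = bytes(len(parts[0]))
    for part in parts:
        out = bytes(a ^ b for a, b in zip(out, part))
    return out

key = secrets.token_bytes(16)
parts = split_key(key, 4)  # e.g. eFuses, boot ROM, logic, test block
assert recombine(parts) == key
```

In hardware, the “recombine” step would be distributed across the normal boot flow, which is what forces an attacker to defeat every layer at once rather than reading one storage block.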
Steve • October 31, 2012 11:12 AM
Has it ever occurred to anyone that by using easily broken keys in these things, companies get substantially more market exposure and advertising near the end of their products’ lifecycle? As in, perhaps it’s weak by design?
Their goal is not security. It’s profit. Very different.
Nick P • November 2, 2012 2:26 PM
@ Steve
It’s an interesting idea. They don’t need weak keys to do that. First option is to leak the key & pretend it was hacked. That way, they can secure the key to the point that nobody breaks it too early. Second option is to tweak their business model to do something different and market-enhancing with the console toward the end.
I think it’s highly unlikely it was made insecure on purpose for some end of life scheme. That’s stretching it.
Gweihir • October 29, 2012 6:53 AM
Does anybody have more information about the details? The article indicates the key was extracted from the console, which means it cannot be public-key crypto.