The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software vulnerability, but the problem was entirely human.

Software has vulnerabilities because it's written by people, and people make mistakes -- thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it's amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google's security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it's announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you're not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don't know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo -- a red bleeding heart -- that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn't surprising: There was a lot of uncertainty about the risk, and it wasn't immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I'm sure the U.S. National Security Agency had advance warning.

By now, it's largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can't be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future -- are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes -- many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or -- if we're lucky -- alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed -- usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.
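As an illustration of that multi-stage process, here is a rough Python sketch of one check administrators ran afterwards: whether a site's certificate was re-issued after the disclosure date, a sign that it re-keyed and not merely patched. The helper names and the cutoff constant are mine, not part of any official remediation procedure.

```python
# A rough post-Heartbleed check (my own sketch, not an official tool):
# was a site's certificate re-issued after the April 7, 2014 disclosure?
# If not, the site may have patched the software without re-keying.
import ssl
import socket
from datetime import datetime, timezone

HEARTBLEED_DISCLOSED = datetime(2014, 4, 7, tzinfo=timezone.utc)

def cert_issued_after_disclosure(not_before: str) -> bool:
    """not_before is the 'notBefore' string ssl.getpeercert() reports,
    e.g. 'Apr  9 00:00:00 2014 GMT'."""
    issued = datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")
    return issued.replace(tzinfo=timezone.utc) > HEARTBLEED_DISCLOSED

def site_rekeyed(host: str) -> bool:
    # Fetches the live certificate; requires network access.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return cert_issued_after_disclosure(tls.getpeercert()["notBefore"])
```

A date check like this only catches the re-keying step; it says nothing about whether passwords were changed afterwards.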

Yes, it'll happen again. But most of the time, it'll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM • 35 Comments


William Connolley • June 4, 2014 6:46 AM

What I find interesting, and an apparently missed opportunity, was the period between first discovery and public disclosure when (as far as I can see) people could have been logging their systems to see exactly who, if anyone, was sending malformed packets. This would have answered the "did NSA do it" type question - if there were a pile of clearly carefully constructed mal requests, you'd at least know that the big boyz knew about it. Are you aware of anyone with interesting logging for sources of attack?
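One hedged sketch of what such logging could have looked for: a Heartbleed probe is a TLS heartbeat record whose claimed payload length cannot fit in the bytes actually sent. Field offsets follow RFC 6520; the function name and sample records are my own, and this assumes raw access to the TLS record bytes.

```python
import struct

TLS_HEARTBEAT = 0x18  # TLS record content type for heartbeat (RFC 6520)

def looks_like_heartbleed_probe(record: bytes) -> bool:
    """Flag heartbeat requests whose claimed payload length can't fit in
    the record actually sent -- the signature a server log could watch for."""
    if len(record) < 8 or record[0] != TLS_HEARTBEAT:
        return False
    (record_len,) = struct.unpack(">H", record[3:5])  # bytes 1-2 are the TLS version
    hb_type = record[5]                               # 1 = heartbeat_request
    (claimed_len,) = struct.unpack(">H", record[6:8])
    # A legitimate record holds 3 header bytes + payload + >=16 bytes padding.
    return hb_type == 1 and claimed_len + 3 + 16 > record_len

# The classic probe: an empty request claiming a 16 KB payload.
probe = bytes.fromhex("1803020003014000")
# A benign request: 4-byte payload, correctly declared, 16 bytes of padding.
benign = bytes.fromhex("1803020017010004") + b"ping" + b"\x00" * 16
```

Anyone who had kept packet captures from before April 2014 could have run a filter like this retroactively, which is exactly the missed opportunity described above.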

Mike the goat (horn equipped) • June 4, 2014 7:20 AM

Excellent essay Bruce; as always your writing is educational and informative to an audience that isn't necessarily computer literate, the latter being a rare skill that is all too often required when conversing with company executives and others in charge who should have a clue but don't.

Regarding disclosure - I am all for full disclosure. So called "responsible" disclosure is an issue in itself as it - by necessity - creates an "in group" of people that have advance knowledge. I can only imagine what benefits the aforementioned advance notice could bring when the bug is big and the ramifications potentially massive - collusion and advantageous buying/selling on the stock market is one possibility.

We should always assume that our adversaries - whether they be opportunistic hackers or perhaps a foreign (or our own) goverbment

Wilson • June 4, 2014 7:23 AM

The thing that puzzles me is that some browsers (including Chrome, with default settings) don't verify certificate revocation, even now.

And the same browsers seem to be very scared of self-signed certificates.

Utter nonsense.

Mike the goat (horn equipped) • June 4, 2014 7:26 AM

(Sorry, hit submit prematurely).. government* - already know about said vulnerability.

We know there is a flourishing market for 0days. There is now an incentive for security researchers not to disclose and instead make some fast bitcoin. The buyers are almost certainly a mix of hacking crews and governments.

Immediate and full disclosure may potentially give an adversary insight into the bug/vuln and provide a "window" while you or your vendor respond with patches, but despite this I believe it is the right thing to do. It motivates everyone all the way up the chain to patch and mitigate *quickly* - something that just doesn't happen now. Sure, it seems to happen - but when you dig deeper you often note that the vendor had lead times of months not days.

I know my view will be controversial, but with immediate and full disclosure at least we are all on a level playing field and a select group of people (who could potentially leak or sell the vuln) aren't privy to potentially explosive information for a period of time.

Mike the goat (horn equipped) • June 4, 2014 7:29 AM

Wilson: . . . perhaps it is because, when enabled, the whole OCSP cert-checking behavior is such that the revocation servers basically get a log of every TLS site you visit. I liked the old static revocation lists much better.

Clive Robinson • June 4, 2014 8:09 AM

One thing to note as we move forward: yes, more vulnerabilities will be found, and a sensible mechanism needs to be in place to handle them. But importantly, it should not be a government agency.

They have a history of being untrustworthy with such things; take for instance the DHS CERT and its behaviour over Industrial Control Systems security. They also take a nationalistic as opposed to an international perspective. Thus they will inform a limited subset of the companies in their country, but neglect to tell others.

Thus whilst I understand "Responsible Disclosure", I think we should also take guidance from what has happened with what major software vendors used to describe as irresponsible disclosure.

I found myself wondering, at the time Heartbleed broke and Neel Mehta's priority became known, just how long it would have remained quiet if not for Codenomicon's announcement.

After all, it's known that in the past researchers effectively sat on serious vulnerabilities for fear that they might be exploited, and major software houses, when told, effectively did little or nothing, or got their legal departments on the case of researchers, all the while millions were vulnerable and in some cases being exploited.

We would be fooling ourselves to think that these practices would not return if it were not for disclosure in what vested interests would consider an irresponsible way hanging over their heads. Likewise I think it quite likely that there are lobbyists out there trying to get a Super-DMCA or similar so that they can go back to their "good old days". And it would be reasonably safe to assume that any political involvement via legislation or agency would fairly quickly be captured by industry, and the "bad old days" from our perspective would return.

z • June 4, 2014 8:10 AM

Heartbleed is the best example of branding and spreading the word about a very technical topic I have ever seen. When my aunt calls me to ask what this Heartbleed thing is and what she should do about it, you know the people behind it did a good job publicity-wise.

Eric • June 4, 2014 8:31 AM

What amazed me was the FUD surrounding this vulnerability. For example, name 10 sites that were completely compromised as a result of it. I can't, and I doubt the average person can either.

Why is this? Well, basically because it was a difficult vulnerability to leverage to compromise a host. You can get small chunks of memory from a server randomly over a period of time; you then need to assemble, order, and make sense of these fragments. If you are lucky you can glean sensitive information from what you piece together and use it to compromise the server.

This really didn't happen on a wide scale; it was just too much trouble to exploit.

I could be wrong, anyone have any information to the contrary?

Heartbleed was bad, but pales in comparison to Code Red/Nimda outbreaks from back in the day.
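The "assemble and sift" step Eric describes can be sketched in a few lines. The fragments and the pattern below are hypothetical stand-ins of my own; the point is only that an attacker ends up grepping random memory chunks for credential-shaped strings, which is tedious and unreliable.

```python
import re

# Hypothetical 64 KB chunks of leaked server memory, as an attacker
# would have collected them, one heartbeat response at a time.
fragments = [
    b"\x17\x03\x02 ...Cookie: session=abc123; lang=en...",
    b"...&username=alice&password=hunter2&submit=Login...",
]

CREDENTIAL_RE = re.compile(rb"(?:password|passwd|session)=([^&;\s]+)")

def sift(chunks):
    """Scan random memory fragments for credential-shaped strings: the
    tedious sifting step that made Heartbleed hard to weaponize at scale."""
    hits = []
    for chunk in chunks:
        hits.extend(m.group(0) for m in CREDENTIAL_RE.finditer(chunk))
    return hits
```

Most chunks would contain nothing useful at all, which supports Eric's point that mass compromise was more trouble than it looked.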

J • June 4, 2014 8:37 AM

Wilson: In addition to being a big privacy leak, revocation checking doesn't actually provide any real protection.

In almost any realistic scenario where a stolen certificate is misused, the same mechanism used to steal traffic from the legitimate certificate holder can also be used to intercept any attempt to check whether the certificate has been revoked. And so revocation checking simply does not work unless you're willing to use the revocation servers as a single point of failure for your entire internet connection.

This is why threat modelling is important when trying to protect against an attack - if you don't know what you're trying to protect against, it's easy to come up with something that looks okay on the surface but is actually worse than doing nothing.

Miss Taken • June 4, 2014 9:27 AM

Mistake mistake mistake mistake. Mistake, mistake mistake mistake mistake. Mistake mistake!
Mistake mistake - mistake mistake mistake.

Intriguing, this perseveration of an unsupported assertion. One gathers that the question of intentional sabotage of security infrastructure is not to be contemplated.

RSaunders • June 4, 2014 10:43 AM

What does Heartbleed tell us about the cost of a vulnerability?

It gets directly at one of Bruce's favorite computer security problems: interest and capability must be aligned.

SSL is a wonderful technology and many websites stand to benefit from the security it provides. But saying "But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release." misses the point. How many of those large companies had the capability to analyze the math of OpenSSL, or would have benefited enough to justify expending that capability? Any capability used to analyze OpenSSL would carry an opportunity cost by virtue of not being available for other applications.

What would it have cost to have better software development review, security testing, and improved software quality in OpenSSL? How does that cost compare to the cost of responding to the disclosure of the vulnerability? OpenSSL couldn't afford to spend more up front because open-source software can't recoup the higher costs, even when thousands of large companies are users. Is this an argument against OSS?

How can the interest of companies which wish to avoid the costs of responding to the next Heartbleed be channeled to the developers with the capability to make security software less buggy? Would a central capability simply become the next government tampering target after Lavabit and TrueCrypt??

Perhaps the response to Heartbleed argues for another concept: cyber-threats are no big deal. Heartbleed was central to large chunks of the Internet's gross product, and yet nothing really that bad happened. Sure, millions of users had to change their passwords, but that's an acceptable cost of doing business. We have the response capability to handle a problem on the scale of Heartbleed. What does that tell us about our ability to mitigate and recover from the next cyber-disaster? Are cyber-threats the next example of Twain's weather? (Everybody talks about it and nobody does anything about it.)

luckyluke • June 4, 2014 11:03 AM

The best thing that happened after Heartbleed was discovered is that the OpenBSD proactive-security guys made a fork of OpenSSL, which is now better known as LibreSSL.

What a disaster of catastrophic programming habits they found and 'neutralized'.

Most important of all, within LibreSSL there is:
*) no more FIPS compliance
*) no more SSL2 support

Further, these got removed:
*) all unnecessary architectures no one has used for more than a decade
*) comments written years ago that no one cared to implement, look at, or delete
*) 'OPENSSL C' - essentially its own dialect
*) old unused stuff, practices, and hardware support no one cares about or uses

Finally, official bug reports for all kinds of flaws, leaks, buffer overruns, errors, ... got applied after years of rotting in a queue.

It seems that LibreSSL is now the preferred place to get your fixes and improvements applied, instead of OpenSSL upstream. No wonder at all.

All the hack and slash, taking an axe and getting rid of so many lines of code, leads to less vulnerable code and better readability, which strengthens LibreSSL's cryptography. The opensslrampage site lists more of the hilarious comments made while fixing OpenSSL. There are also some slides, funny but also very scary, because we still use and rely upon this code.

@ Bruce: I know you're participating in the Linux Foundation 'Core Infrastructure Initiative'; would you mind considering discussing funding LibreSSL as well? I strongly believe that when the portable LibreSSL is done, we all should switch to it immediately and deprecate OpenSSL, as it is beyond repair. The same project is responsible for OpenSSH, which is also available in a portable version. Who is more qualified than OpenBSD (the most secure OS in the world) to take over ownership and responsibility and deliver us the most secure cryptography library ever?

When is libressl finished? (Taken from the official libressl website):
'LibreSSL is primarily developed by the OpenBSD Project,
and its first inclusion into an operating system will be in OpenBSD 5.6.'

Considering OpenBSD's punctual releases, that will be November 1st, 2014.

GnuTLS is, right after OpenSSL, the next security-disaster library; hopefully LibreSSL with its fast-paced development replaces that as well, or at least GnuTLS support can be disabled in favor of LibreSSL.

secret police • June 4, 2014 11:12 AM

From the LibreSSL rewrite:

A few months back there was a big community fuss regarding direct-use of the intel RDRAND instruction. Consensus was RDRAND should probably only be used as an additional source of entropy in a mixer. Guess which library bends over backwards to provide easy access to RDRAND? Yep. Guess which applications are using this support? Not even one... but still, this is being placed as a trap for someone. Send this support straight to the abyss.

Makes you wonder if OpenSSL and other standards are being purposely sabotaged.

Nick P • June 4, 2014 11:36 AM

@ RSaunders

"How can the interest of companies which wish to avoid the costs of responding to the next Heartbleed be channeled to the developers with the capability to make security software less buggy? Would a central capability simply become the next government tampering target after Lavabit and TrueCrypt??"

I've always been for foundations of knowledgeable volunteers who direct money at professionals for various things like this. Companies might contribute anything from a few bucks to a hundred dollars. Small change for a critical capability. The collected money is given to paid developers or reviewers with a track record for quality. Also, languages, tools, or coding guidelines are used that reduce odds of serious problems.

The lack of donations or responsibility by for-profit groups is the reason I think of alternatives to freeware. The better option, I think, is a non-profit organization that develops software like this, ensures the deployment part is easy/safe (eg configuration), sells it at cost, and distributes the source + build instructions for review. The schemes I've worked on for multi-national, high assurance systems build on this foundation.

Inspiration comes from the commercial world of I.P. production and licensing. There's quite a few vendors creating, optionally certifying, and selling crypto. Apparently, it works well enough that they keep investing in it. Although, it could be a loss that they do for another reason. In any case, keeping end result cheap for end users, ensuring there's income for development/review, and allowing donations for extra efforts should have better results than the model of simply giving it away.

Quick edit: My previous versions of this scheme also included a maximum potential price per offering (with inflation modifier), a license to make arbitrary modifications for internal use, and perpetual license of the product/code in event product is taken off market. The latter should be especially nice for big companies concerned with future-proofing and lock-in.

Nick P • June 4, 2014 11:43 AM

My recent favorite excerpt from

"T.61 was proposed in 93. Utf8 later the same year. Utf8 was
recommended from 94. In 2004 OpenSSL caught up with the recommendation,
and decided to go against it to be compatible with Netscape Navigator,
which at that time had a massive 2% of the market. In 2005 The behaviour
of the openssl binaries were "fixed" by changing the config file.
2014 the default in the libraries still hasn't been changed, 20 years after the
original deprecation of T.61 in x509 standards. "

There's a lot more problems in this codebase than Bruce's article would lead you to believe. It's been horrid through and through. I'm amazed it had as few (dozens? hundreds?) security risks as it did.

Jeff • June 4, 2014 12:04 PM

"If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or -- if we're lucky -- alerts the vendor."

Are there any known examples of where a USA intelligence agency has alerted a vendor?

Petter • June 4, 2014 12:05 PM

How come two independent teams discovered the bug within a couple of weeks of each other, when the bug had been there for years?

Why did they focus on this part of the code at this time?
I wonder how many others have actually found it and used/abused it.

Clive Robinson • June 4, 2014 12:20 PM

@ Nick P,

It's been my experience that companies don't do "donations" unless their accountants know the taxation "gift" code/law fairly well.

However, they do do "invoicing", the problem being turning what would otherwise be a gift into what the Taxman will accept as a legitimate business expenditure.

The solution appears to be to offer a time-restricted service, such as some form of "support", that would not otherwise be available.

The usual route appears to be either a support contract or some form of limited licence; in either case there needs to be a provable service that is not otherwise available except through payment.

The question is "what" fulfills the Taxman's requirements as a legitimate service, and since the advent of software licencing and then FOSS support it appears to now be very little, and reducing with time (though this might change dramatically if there arises a suspicion it's a tax-avoidance scheme or associated with crime in some way, such as money laundering or paying for illegal substances or services etc.).

One way is the allowance or non-allowance of using some form of associated IP, thus the right to display a logo or trade mark or some other associated branding. Thus it might be akin to "Intel Inside" etc. Another might be more direct access to the developers via, say, a private mailing list; this might be just the developers indicating what changes are currently being made to the code base, why, and the reason/rationale.

Thus whilst the software itself would remain FOSS, a company can be legitimately invoiced for "extras", which would make the lives of both the company accountants and developers a lot, lot easier, and keep the taxman happy. The trick however is to "give value" without it being a distracting effort from development or other necessary activities.

Ross Reedstrom • June 4, 2014 1:36 PM

Petter - Sometimes coincidence is just that. Happens a lot in science, as well.

Jacob • June 4, 2014 1:53 PM

Re openSSL funding, a lot has changed during the last few weeks:

"The Linux Foundation, a non-profit group promoting open-source software, announced in late April it would step in to help: a Core Infrastructure Initiative (CII) working group was set up to help identify and fund open-source internet projects in need of financial support. Large companies including Amazon Web Services, Facebook, Google, IBM and Microsoft signed up to the programme. OpenSSL, with its single main developer scraping by without a fair salary, was highlighted as a project that needed most attention.

The Linux Foundation on May 29th announced the first $1.7m of CII funding from its $5.1m pot. It will allow two part-time coders, Mr Henson and Andy Polyakov, who handle the day-to-day coding of the OpenSSL security protocol, to work full-time on the project. Mr Henson has called the funding a “marvellous opportunity”; he hopes it will allow him to make major improvements to OpenSSL.

The foundation also released the names of five new CII members: Adobe, Bloomberg, HP, Huawei and In total 17 firms have now pledged to contribute $100,000 annually for a minimum of three years, which will be funnelled to three projects: OpenSSL, OpenSSH, another piece of encryption protection software, and Network Time Protocol, which synchronises computer clocks. "Open source software warrants a level of support on par with the dominant role it plays supporting today's global information infrastructure," says Jim Zemlin, executive director of the Linux Foundation.

The amounts involved will make a big difference for the projects. But $1.7m per year split three ways—the Linux Foundation declined to declare specific per-project funding amounts, but said money was allocated on need—doesn’t seem that much. And the individual pledges to the CII of $100,000 a year are mere rounding errors for big businesses such as Google, which make tens of billions in revenue every year. CII support takes up just 0.00017% of the search behemoth's turnover.

Interestingly, Chinese firms are more generous. As well as participating in the CII, Huawei is also privately sponsoring OpenSSL to the tune of $50,000 annually. And Smartisan Technologies, a smartphone manufacturer, has pledged $160,000 of extra support per year.

At the height of the panic about Heartbleed, your correspondent asked Steve Marquess, the public face of OpenSSL, how much money was needed for the project. “A few million a year would do grandly,” he said. “There should be half a dozen guys working full-time, plus support.”

Yesterday's announcement goes some way to reaching that goal. But some more money from the west's largest IT companies, many of whom have previously used open source tools with minimal payment in return, will surely be welcomed."

Chuck • June 5, 2014 7:38 AM

@Petter This is what (most probably) happened. Codenomicon found the bug, started to investigate, maybe contacted a few people and definitely the national CERT, and started the co-ordination efforts between CERT and vendor(s). Somewhere along the line this information reached someone, who was willing to disclose it immediately for some reason. Somehow I find it really hard to believe that a bug that had stayed under the radar for 3 years is discovered by 2 independent parties almost at the same time.

kruemi • June 5, 2014 9:06 AM

What I'm wondering starts earlier than the buggy code.
When people write code, they make mistakes. Nothing new there.

My big question is: why did they implement a heartbeat function in SSL at all? On TCP connections, this already exists at that level. And for UDP it could be solved within the application (a ping from time to time through the tunnel?).

I've always learned to keep functions that are not really necessary away from functions that are important for security!
So why did some people still think that it would be a good idea to implement this function in SSL? And why did no one speak up against implementing it? Why did no one speak up against implementing it with a payload field that only increases complexity but does not improve functionality?

Was there really no discussion about such things?
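That payload-length field is exactly where the bug lived. A minimal Python simulation of the logic (the function names and the fake "memory" buffer are mine; the real code is C, in OpenSSL's tls1_process_heartbeat):

```python
# Simulated process memory: the 4-byte payload the client really sent
# sits right next to data it should never see.
memory = b"ping" + b"SECRET_KEY_MATERIAL"

def heartbeat_buggy(claimed_len: int) -> bytes:
    # The bug: trust the attacker-supplied length field and copy that
    # many bytes, reading past the real 4-byte payload.
    return memory[:claimed_len]

def heartbeat_fixed(claimed_len: int, actual_len: int = 4) -> bytes:
    # The fix: silently discard any request whose claimed payload length
    # exceeds what actually arrived (per RFC 6520).
    if claimed_len > actual_len:
        return b""
    return memory[:claimed_len]
```

In C the over-read returns whatever happens to sit in the adjacent heap memory; Python slicing just truncates, so treat this purely as a sketch of the missing bounds check.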

Jacob • June 5, 2014 9:24 AM

In today's disclosure of the new OpenSSL vulnerabilities, a major one was the ChangeCipherSpec issue, which allows a MITM attack.

The researcher who discovered it, Masashi Kikuchi, wrote in his blog this notable entry:

"... Next I check whether existing implementations correctly verify these conditions. Most implementations except OpenSSL verify them, more or less. OpenSSL seems not doing at all. Later I confirmed that OpenSSL is actually exploitable."

Another major concern is Prof. Matthew Green's opinion that the LibreSSL dev process would not have identified this vulnerability.

luckyluke • June 5, 2014 10:11 AM

@Jacob: There is no official statement by the Linux Foundation's Core Infrastructure Initiative regarding direct funding of LibreSSL development.
The only funding is coming directly from the OpenBSD Foundation and the donors behind it.

The point of asking Bruce here is that OpenSSL is in no way 'repairable'.
If you look at the source code yourself, and at all the comments and media reporting around it, you have to come to the inevitable conclusion that OpenSSL is completely broken.

In order to restore security by preventing catastrophic bugs like Heartbleed, upstream OpenSSL has to do the same as LibreSSL is doing: strip away the useless old junk code and trim the damned sources into readable, modern, well-standardized code, or suffer even more serious threats yet undisclosed.

I don't see any signs that upstream OpenSSL is doing that, not now, not even with any additional full-time employees. Do you?

LibreSSL is not ready yet, but it will be, and portability is a considered goal too.

It also incorporates brand-new features not found in OpenSSL (like ChaCha20 and Poly1305). That's future-oriented.

All it needs now, is to be taken as a 'Core Infrastructure Initiative' project and get funding as well and a little bit more time to prosper. :)

luckyluke • June 5, 2014 10:14 AM

@Jacob: totally agree. But it doesn't stop there. All the crypto libraries have been mishandling a lot lately.

Brill • June 5, 2014 7:38 PM

"We assume that governments also exploited the vulnerability while they could."

Why assume that? Governments aren't known for speed.

Chris Abbott • June 5, 2014 9:45 PM

Went to opensslrampage and the site with the slides. Indeed funny, but let me get this straight: OpenSSL is filled with garbage code, even stuff from the 90s, as well as things that are simply pointless? That's rather disturbing...

Marc • June 6, 2014 3:29 PM

The problem here is called free riding.

Open Source is open source code and just that. Nothing else.
And with stuff like OpenSSL it's not really a choice.

OpenSSL is pretty much a 2-man show with some volunteers, and the 2-man show is mostly volunteering as well. Henson makes some bucks off of OpenSSL consulting, but that doesn't quite qualify as a full-time job in this context.

The software is used by many big-money irons. None of them
feels obliged or considers it useful to actually give some
financial or staff support.

Hey. It's open source. It magically works(tm). People have
all the time they need. As we all know OSS coders don't need
to eat, sleep or engage in any other activity that's usually
required to survive in the world. Most notably making money.

So code reviews can actually be done by stopping time.

What is Google's and all the other free riders' support?

Mostly thumb-pressing.

They are too busy spending the next dollars on the latest and greatest crap-du-jour.

But spending a ridiculously tiny fraction of that on actual financial support, or on qualified staff that would be *available* if needed, in one of the most critical projects on the entire Internet?

That's way too much of a risk investment.

polishwhale • June 6, 2014 11:19 PM

It is quite trivial to force OpenSSL to use weak cyphers still included in most browsers (due to backwards compatibility with older websites) and initiate a MITM attack, given the right resources are in place. The weak or exploitable cyphers can be disabled in the browser, but then many websites will no longer accept a secure handshake: instead they display a page reporting the handshake attempt as old and insecure and the connection is dropped, or they simply do not negotiate an encrypted connection. The current trade-offs between maintaining a secure connection and functionality are too great for the average user at present, which poses a very large problem.

Most web users have little idea about certificates anyway, so have little awareness of checking the state of a secure connection. Plus the number of unsolicited connections and port scans directed at a client every day is quite large (many of these are statistical collection attempts, but many are botnets and bad peers too). Huge amounts of data are collected on users every day, and only a few organisations publicly publish any details on this data.

Most smartphones also come with built-in (sometimes third-party) monitoring software. We first noticed this some years ago with early HTC smartphones, where we were able to recover all key presses and SMS sent/received by the user, plus logs, using the bundled third-party software installed on the phone via a COM port connection. All the data obtainable by the third-party software would easily be obtainable by the carrier.

There are very few protections under current telecommunications law to address this problem.

Wael • June 7, 2014 2:40 AM

@ polishwhale,

"It is quite trivial to force OpenSSL to use weak cypher still included in most browsers"

I personally disabled weak ciphers on products I worked on.

"but then many websites will no longer accept a secure handshake, instead displaying a page reporting the handshake attempt as old and insecure and connection is dropped, or simply not negotiating an encrypted connection."

I have not seen this as far back as 5 years ago. If you'd like to see the handshake, either use Wireshark, or start an OpenSSL server and monitor your handshake. The handshake will always work as long as there is a common cipher-suite between the client and the server. It will fail if the server is only advertising "weak" or "low" ciphers and the client has them disabled. You can find out by using some of the tools described in the OWASP testing guide entry on weak SSL/TLS ciphers and insufficient transport layer protection (OWASP-EN-002).
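The handshake behavior described above can be probed from Python's ssl module without any external tools. This is a sketch of my own (the function name is made up); it simply asks whether a server will complete a handshake when the client is restricted to one OpenSSL cipher string.

```python
import socket
import ssl

def server_accepts_cipher(host: str, cipher: str, port: int = 443) -> bool:
    """Sketch: attempt a TLS handshake restricted to one OpenSSL cipher
    string. It succeeds only if client and server share a suite from that
    list. Certificate checks are disabled because we only care whether
    the handshake itself completes."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        ctx.set_ciphers(cipher)  # raises if the string selects no ciphers
    except ssl.SSLError:
        return False
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock):
                return True
    except (ssl.SSLError, OSError):
        return False
```

For example, comparing server_accepts_cipher(host, "HIGH") against server_accepts_cipher(host, "LOW") shows whether a site still advertises the weak suites discussed above.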

"Most smartphones also come with built in (sometimes third party) monitoring software."

Yes, they do! But most of the ones I know of don't allow key logging or snooping on users' private data -- they are used for diagnostics and network optimizations. Keyword here is "most".

"HTC smartphones where we were able to recover all key presses and SMS sent/received by the user, plus logs using the bundled third party software installed on the phone via COM port connection. All this data obtainable by the third party software would easily be obtainable via the carrier."

Two points to note:

1) The problem was not about "snooping"; it was about customer deception.
2) The other problem was a weak implementation of the software that would allow an unauthorized entity to gain access to phone information.

The settlement / fine had nothing to do with the carrier "wanting" customer information...

"There are very few protections under current telecommunications law to address this problem."

Seems to be a political problem rather than a technical one, then...



Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient Systems, Inc.