Dan Geer on Heartbleed and Software Monocultures

Good essay:

To repeat, Heartbleed is a common mode failure. We would not know about it were it not open source (Good). That it is open source has been shown to be no talisman against error (Sad). Because errors are statistical while exploitation is not, either errors must be stamped out (which can only result in dampening the rate of innovation and rewarding corporate bigness) or that which is relied upon must be field upgradable (Real Politik). If the device is field upgradable, then it pays to regularly exercise that upgradability both to keep in fighting trim and to make the opponent suffer from the rapidity with which you change his target.

The whole thing is worth reading.

Posted on April 22, 2014 at 7:52 AM

Comments

maxCohen April 22, 2014 8:36 AM

“We would not know about it were it not open source (Good).”

I’m trying to understand this. How is it because it’s open source that we know about this problem? Couldn’t we have discovered it if it were closed source, like so many other closed-source exploits, through trial and error? Seems the advantage of it being open source is that it could be fixed by anyone.

llaen April 22, 2014 8:44 AM

@maxCohen: You’re right. That statement is somewhat flawed, though it does make discovering issues easier in the first place, since more people can do a code audit.

The other obvious advantage is that you can participate/observe a general code cleanup following the discovery of the vulnerability.
Had the source been closed, we would just have to trust the owner of the code to do a thorough audit for other vulnerabilities but would never be able to confirm this.

Right now openssl is under new scrutiny which helps everyone in the end.

maxCohen April 22, 2014 8:47 AM

@llaen Had the source been closed, we would just have to trust the owner of the code to do a thorough audit for other vulnerabilities but would never be able to confirm this.

Very good point. Agree 100%!

Would love an audit of GPG someday.

Jason April 22, 2014 8:57 AM

My completely uninformed speculation is that since Apple’s crypto flaw was discovered, researchers have been paying extra attention to crypto libraries. And yes, while it would be possible to discover a vulnerability in closed source code, open source code enables more and easier methods of analysis.

tim April 22, 2014 9:14 AM

@llaen

There is a matter of trust required regardless of whether it’s closed or open source. At least with most closed-source applications, I understand what the incentive is to make sure your code is secure: money. Which can be a very strong incentive. With open source there isn’t such an incentive.

Matt Drew April 22, 2014 9:15 AM

The only statement of his I really take issue with is this one:

” if you abandon a code base in common use, it will be seized. That requires a kind of escrow we’ve never had in software and digital gizmos, but if we are to recover from the fragility we are building into our “digital life,” it is time.”

He’s wrong about this, because open-source code does this by default; anyone can fork an existing code base and take over maintenance. If proprietary software vendors refuse the escrow – and they will, because they have nothing to gain from it – then the obvious recommendation is to avoid proprietary, closed-source code altogether.

maxCohen April 22, 2014 9:21 AM

@tim I understand your point, but I think a strong incentive for open source is the programmer’s reputation, and for some it can be an ethical one.

Nick P April 22, 2014 9:44 AM

Let’s be real: the problem is the hardware

I agree with him about troubles that come from monocultures, mode failures, complexity, etc. I used to say a lot of the same things as I was a systems and software guy. That said, he couldn’t be more wrong about why most hacks are happening. They aren’t because software is complicated. They are because mainstream machines are easy to exploit due to their architectural design choices. Another smart guy already explained it well:

“The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one-word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!”

“So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!”

-Brian Snow, NSA Technical Director

I’ve previously posted that many systems in the past were designed with different instruction sets or built-in protections. The machines might differentiate between code & data at word level. The memory protection unit might see everything as segments with specific permissions enforcing POLA on every function. Quite a few typed every object in memory with a type check on each operation. There were certain methods for control flow integrity. One bounds checked every array access. At least two handled scheduling, exceptions, I/O, and memory management at the firmware layer to ensure consistency in all above layers. A few had garbage collection. A few required physical work to modify critical system layers such as boot code. Most of these had the OS written in a high level language whose compiler performed many checks.
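
As a rough, purely illustrative software analogy of the kind of check those machines performed in hardware on every access (a “fat pointer” carrying bounds and a type tag; all names here are made up for the example):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: a "fat pointer" that carries base, length, and a type
 * tag, so every access can be checked -- roughly what tagged/segmented
 * hardware did for free on each memory operation. */
typedef enum { TAG_DATA, TAG_CODE } tag_t;

typedef struct {
    uint8_t *base;
    size_t   len;
    tag_t    tag;
} fatptr_t;

/* Checked read: refuses out-of-bounds offsets and refuses to read
 * code-tagged memory as data. */
int checked_read(const fatptr_t *p, size_t off, uint8_t *out)
{
    if (p->tag != TAG_DATA || off >= p->len)
        return -1;            /* trap instead of silently leaking memory */
    *out = p->base[off];
    return 0;
}
```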

Such designs allow the developer to put little effort into their code while the attacker must still work their ass off for code injection. That gets us to the real problem: our machines are backdoor generators by design. We need new machines that perform well while guaranteeing certain protections at the hardware level. Fortunately, DARPA and others are funding projects just like that, which already have working prototypes. There will still be security problems due to a variety of causes, a few of which Geer mentions. However, you won’t open an email and then fight with an 8-year-old kid for control of your PC. 🙂

A few old systems mentioned are linked in this comment
https://www.schneier.com/blog/archives/2014/04/friday_squid_bl_419.html#c5483352

My longer list of promising new and old tech
https://www.schneier.com/blog/archives/2013/12/friday_squid_bl_404.html#c2902272

He’s also wrong about us not being able to design systems that handle random and targeted faults at the same time. Many people, myself included, have designed such systems. I can’t recall if they’ve been taken to the extreme of both fault types. I imagine it could be done by combining a NonStop availability architecture, a tagged processor, layered/componentized system design, and exception handlers in each layer/component. These respectively catch hardware, software, and security failures. Try targeting that with your malicious IP packets and heartbleeds. All you’ll do is generate exception logs with the data needed to find and patch the bugs when the admin finds it convenient. 😉
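
For the logging piece, here is a rough, hypothetical sketch (not any particular NonStop or tagged design): every call crossing a layer boundary goes through a guard that records failures with context and fails closed instead of letting the fault propagate.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical per-layer guard: on failure it writes an "exception log"
 * entry with enough context to diagnose later, and returns a clean error
 * to the caller rather than crashing or continuing in a bad state. */
typedef enum { LAYER_HW, LAYER_OS, LAYER_APP } layer_t;

typedef int (*op_fn)(void *arg);

int guarded_call(layer_t layer, const char *what, op_fn op, void *arg)
{
    int rc = op(arg);
    if (rc != 0) {
        /* the log carries the data needed to find and patch the bug */
        fprintf(stderr, "[%ld] layer=%d op=%s rc=%d -- contained, not fatal\n",
                (long)time(NULL), (int)layer, what, rc);
        return -1;   /* fail closed; caller sees an error, not a compromise */
    }
    return 0;
}
```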

Shawn McMahon April 22, 2014 10:12 AM

The article says “What if Heartbleed had been a thoroughgoing monoculture, a flaw that affected not just the server side of a fractional share of merchants but every client as well?”.

It affects a tremendous number of clients. That’s going to be a lot longer and harder to fix than servers, for all the usual reasons.

Bruce Ediger April 22, 2014 11:19 AM

Let’s all watch the difference in monoculture punditry this time around. Last time Dan Geer came out against a monoculture, he lost his job, bloggers and trade rag editorialists excoriated him for weeks, and generally told us we had nothing to worry about.

But that was when Geer came out against a Windows monoculture. This is SSL, and an open source implementation, to boot. I bet no pundits will be paid, I mean bother, to come out against anti-monoculture arguments this time.

Clive Robinson April 22, 2014 11:35 AM

@ Nick P,

With regards to Brian Snow, he was partly right in the past and less so now, but it’s coming back (such is the way things swing).

“Sharing” has not been the real driver of hardware for a quarter century or so; it kind of stopped being an issue when “big iron” got replaced by PCs on LANs in the 80’s. It’s only come back again because PC hardware now is “big iron” equivalent, or supersedes it in most metrics that interest many organisations.

The issue Brian did not mention directly and which in many ways still is the real problem is “efficiency” although he did touch on it in other areas.

Much modern hardware is designed to be “efficient” as is software, though “efficient” means different things and thus metrics to different people. The problem as I’ve indicated in the past is that “efficiency” usually suffers from a bad case of “data leakage” via resource constraints opening side channels of various forms.

Whilst it is possible to design efficient systems which don’t have high bandwidth side channels, you need to be aware of what’s involved to a level way, way beyond most TEMPEST and other EmSec training (most of which is aimed at Techs not DesEngs). I’ve given some information in the past that is relevant to DesEngs but… as they say of horses, you can but lead them to water; making them benefit from it is an altogether more taxing task 😉

Anderer Gregor April 22, 2014 11:40 AM

@tim:

I understand what the incentive is to make sure your code is secure. Money. Which can be a very strong incentive.

As we have seen e.g. with RSA’s BSafe, money can be an even stronger incentive to make sure your code is insecure …

Jacob April 22, 2014 12:13 PM

Dan Geer’s article left me with an uneasy feeling: on one hand, the guy has tons of credentials. On the other hand, he goes into hyperbole, mixing theoretical musing with real-life examples that are a bit unrelated.

The internet would be fine and working great had it not been for intentional subversion – be it by state agencies, hackers, or spammers – in short, actors who set out to do Bad Things.

Now, Mr. Geer provides examples of some methodologies that may give better controllability and determination – the aircraft industry and the clinical trials process.
Well, in both fields we normally do not have bad actors. When 19 bad actors operated for a couple of hours on some aircraft on 9/11, the US has since spent more than $1B to review, to analyze, and to take action, while everybody is still feeling the aftershocks (and, in a spooky coincidence, even facing the reduction of internet security and privacy). The same with drug trials: should cheaters and subversive financial actors enter that game, I wonder how the trials system would hold up.

No easy solution.

Does Not Compute April 22, 2014 1:17 PM

@Nick P “…If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!…”

There’s a contradiction embedded in this argument that results in systemic dysfunction when pushed to the extreme.

The old computers were programmed by connecting function blocks with jumper wires – the wires were the “software”. This shows that the essence is not SHARING but CONNECTING.

To build a computer on the axiom of SEPARATION is to build a computer with function blocks that are DISCONNECTED.

In short, it amounts to trying to build a computer without wires connecting the components.

wins32767 April 22, 2014 2:19 PM

@Nick P: I think it’s actually a more fundamental thing than that. The whole Von Neumann architecture is the problem. If code is data and data is code you’ve baked in insecurity regardless of how much effort you put into trying to shore it up.

Clive Robinson April 22, 2014 3:23 PM

@ Does not compute,

You have not considered the type of connection…

If you think of it as unrestricted, then yes, you have broken the separation barrier between the two blocks.

If, however, instead of an unrestricted connection you have a fully mediated interface “choke point”, the connection is restricted to “that which is allowed” within the current transaction. Provided this mediation is enforced in the correct manner, you can define the level of security required.

The process of breaking down code blocks into smaller blocks with strongly mediated interfaces is a recognised way to reduce complexity and thus considerably reduce other security issues.
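
A toy sketch in C of what such a choke point can look like (all names invented purely for illustration): the caller never touches the resource directly; one mediation function enforces what the current transaction is allowed to do.

```c
#include <stddef.h>

/* Hypothetical mediated interface: the resource is only reachable through
 * mediate(), which checks the transaction's policy on every request. */
typedef enum { OP_READ, OP_WRITE } op_t;

typedef struct {
    int allow_read;
    int allow_write;
} policy_t;

typedef struct {
    unsigned char store[256];
    policy_t policy;          /* fixed when the transaction is opened */
} resource_t;

int mediate(resource_t *r, op_t op, size_t idx, unsigned char *val)
{
    if (idx >= sizeof(r->store))
        return -1;                            /* bounds check at the choke point */
    if (op == OP_READ && r->policy.allow_read) {
        *val = r->store[idx];
        return 0;
    }
    if (op == OP_WRITE && r->policy.allow_write) {
        r->store[idx] = *val;
        return 0;
    }
    return -1;                                /* anything not allowed is denied */
}
```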

Thunderbird April 22, 2014 3:30 PM

I think it’s actually a more fundamental thing than that. The whole Von Neumann architecture is the problem. If code is data and data is code you’ve baked in insecurity regardless of how much effort you put into trying to shore it up.

The trouble is that even if you build on a Harvard architecture, the things we want to do with computers mean you end up delivering content that is “executed.” That it’s executed in a virtual machine of one kind or another doesn’t change the problem you mention. Practically everything that people think of when they think of “the internet” is some kind of Turing-complete dingus: Word, Flash, Firefox, EMACS (yeah, I know, nobody thinks that any more), all the “content” platforms that providers can stuff their DRM into, etc.

I’m not sure what a system would look like if there were no way to deliver content that were then interpreted on the client node, but it certainly wouldn’t look like what people think of as their computer, and if they can’t watch the dancing baby and run Farmville and exchange Excel and Word files it will be functionally impossible to get a critical mass of users.

Also, if you don’t have any way to update “the system,” then it becomes really really hard to address the inevitable bugs that are discovered.

Open Source User April 22, 2014 4:12 PM

@tim: There are also a number of businesses that develop open source and make money from support, including RedHat, openSUSE and Canonical. There’s a clear profit motive AND open source software in these cases.

DB April 22, 2014 4:45 PM

@ Thunderbird, wins32767

Thunderbird makes a good point. All data is “interpreted” at some point. For example, a picture is designed to display on a screen. Could a purposefully-malformed picture take advantage of some bug or oversight in the display system and do something it’s not supposed to? Absolutely. Harvard architecture does reduce the issue a lot over Von Neumann, but does NOT entirely eliminate the issue. It’s not a panacea.

So… yes.. we should be going toward Harvard architecture, don’t let me detract from that… but we shouldn’t then be stopping there and breathing a sigh of relief, we should be looking beyond that when we get there! It’s a road, not a destination. Some of the things Nick P mentions fall in this category of “even better than Harvard architecture.”

Anon April 22, 2014 6:06 PM

The issue here is not whether open source is good or bad.
The problem is we have an attacker with almost unlimited means and time with malicious intent.
What good are frequent updates if those updates are from the recent winner of the Underhanded C Coding Competition who is now working for the NSA?
Further complicating the situation is that we cannot assume that the number of good developers outnumbers the malicious ones.

Ultimately, the answer seems to be to create a list of essential code – like SSL – and commit the time, energy and funding to regular auditing, and do only infrequent and controlled updates to it. Use best practices for this essential code (like MISRA C). Avoid standards bloat so frequent updates are unnecessary.

To solve these issues will require more than open source can offer. It will also probably require political commitment, including funding for auditing and passing new laws that hold accountable the people involved in any government meddling (i.e. minimum jail time for the NSA employees all the way up to the politicians who authorize it).

Anura April 22, 2014 6:43 PM

@Anon

Unfortunately, essential code is a problem in and of itself. OpenSSL is an essential component, as are NSS and GnuTLS, but what about the web browser (and all the components it uses, such as image rendering, page rendering, scripting engines, etc.), document viewers, IRC and IM clients, Tor, email clients, email servers, the entire internet stack and other OS components, any service/daemon that can interact with a (potentially compromised) client application; the potential attack vectors probably outnumber everything else. Hell, the pure complexity of your browser (JavaScript, plugins, add-ons, HTML5, image processing, video processing) makes it a nightmare to secure, while also being THE primary attack vector.

The long-term solution is, as Nick P mentioned, to start at the hardware level and make sure that we have a secure hardware model. As we are working on that, we should be looking to replace existing protocols with ones designed for security, simplicity, and modularity, whether they are cryptographic protocols or not. After that, we need to make sure that our OS itself uses a secure model (probably some sort of formally verified microkernel), and that everything is built on segregation of processes and broken up into simple, modular components.

With those models and protocols in place, we can start creating all software from scratch, and making it all open source (from drivers, to the OS, to the software). We need to make sure we have a well-directed development process, with strictly enforced practices and procedures, with everything designed to be separated into modularized, independently verifiable components, everything written in a language designed to make it easy to write secure code (we might also want a formally verified compiler and formally verified standard/runtime libraries for our core languages), and heavy use of unit, integration, and regression testing (automated and manual as needed – I mean, no matter how good your model is, it can’t help if your authentication code always returns “success”).
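
As a trivial illustration of that last point, a regression test that pins down the negative cases (check_password() here is a made-up stand-in, not any real API) catches an “always returns success” bug automatically:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical authentication check used only to illustrate testing the
 * negative path; a real implementation would hash and compare securely. */
static int check_password(const char *user, const char *pass)
{
    return strcmp(user, "admin") == 0 && strcmp(pass, "correct horse") == 0;
}

int main(void)
{
    assert(check_password("admin", "correct horse") == 1);   /* positive path */
    assert(check_password("admin", "wrong") == 0);            /* must reject */
    assert(check_password("mallory", "correct horse") == 0);  /* must reject */
    return 0;
}
```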

My problem with the open source movement is the lack of organization; lack of organization tends to lead to inconsistent code, which tends to get really messy as time goes on. This doesn’t mean that closed source code is necessarily well developed either, of course.

This isn’t a panacea of course, as you are limited by the end-users of your systems, and large organizations tend to have a lot of users.

DB April 22, 2014 7:14 PM

“problem with the open source movement is the lack of organization”

It really frustrates me when people say things like this. It’s like saying “the problem with cars and planes is they crash” implying that cars and planes should be eliminated to eliminate crashes. Er.. no.. they should just be better designed, with more safety features. Likewise, software (including both closed and open source) should also be better designed (i.e. organized), with more safety features (i.e. security).

Software being open source does not force it to be disorganized. All open source software has a maintainer. Someone in charge of it. All of it. No exceptions. Some maintainers choose to be organized, some not. Shoot the disorganized maintainers by throwing out their software, don’t shoot the whole movement.

Anura April 22, 2014 7:28 PM

@DB

Whether it’s inherent or not is irrelevant; my problem is with the open source movement today, not open source software as a concept.

DB April 22, 2014 10:22 PM

@ Anura

I see, so you’re saying your problem is with how things are going, not that open source should disappear.

There are organized islands, though. You just have to look for them. Not a lot of people have good organizational skills. Open source generally reflects, more or less, the skillset of technical people within society at large. I think that organized people will rise up more and more eventually, but it should happen organically, not be imposed upon it.

Take OpenBSD’s decision to fork OpenSSL. Out of the ashes of OpenSSL and Heartbleed comes the potential promise of something a lot better… Will it be the ultimate? No. It will be a great leap in the right direction though. Some other leaps will depend on hardware changes, which I sincerely do hope is coming eventually, that world just moves a lot slower and doesn’t turn on a dime… and that industry may be in denial still too so…

Nick P April 22, 2014 10:55 PM

@ Clive

“”Sharing” has not been the real driver of hardware for a quarter century or so; it kind of stopped being an issue when “big iron” got replaced by PCs on LANs in the 80’s. It’s only come back again because PC hardware now is “big iron” equivalent, or supersedes it in most metrics that interest many organisations.”

No, it was still an issue, and even more so at that time. What Brian means by sharing is how the internal resources were more directly available to each other. Rather than a ton of walls and checks between components, there were various components that could directly access or manipulate the state of other components. In a sense, the resources were shared rather than isolated via design & interfaces. And as you pointed out, “efficiency” was the reason, while “data leakage” was one of many dangerous side effects.

As I’ve always said, we must win one battle at a time in INFOSEC. It’s possible that we can address (huge list of problems) and TEMPEST-style issues at the same time. I doubt it. I’d rather just address one, then throw the crap in TEMPEST tents or safes for now. Then address the next issue later, after it causes many compromises on the quite secure hardware/software. Enemies use a little-by-little, fait accompli strategy because it works. Perhaps we should do the same.

@ Jacob

” the aircraft industry and the clinical trials process.
Well, in both fields we normally do not have bad actors.”

We certainly do. Both suffer from the basic issue where profit makes actors try to BS the certifiers into thinking a product has greater quality/safety than it does. The medical industry is even worse, with pervasive payoffs of people from those doing reviews to those writing textbooks. I remember one journal changed its policy to accept a certain amount of what’s basically bribery of “independent” reviewers, as they couldn’t find anyone taking less than that amount of money from the makers of whatever was under review. So, those processes have plenty of bad actors & it’s doubtful whether they really work. “No easy solution” is right.

@ Does Not Compute

I’m going to assume you’re joking around rather than trolling me. The problem is that you’re taking the words too literally. Clive beat me to explaining that the design of the system, hardware or software, can cause various elements to be effectively isolated or not. Snow essentially says mainstream architectures are not enforcing isolation for various reasons of performance and flexibility. He also implies that a redesign is necessary vs trying to improve broken architectures.

@ wins, Thunderbird, DB

re Von Neumann vs Harvard Architecture

It’s a pointless debate that you shouldn’t get caught up in. Those things are abstract. Real architectures are concrete. Most real things are hybrids. And I thank DB for pointing out that I referenced architectures people created that are hard to map to either abstract idea, yet are far superior to both. So, I advise just don’t worry about academic crap like that and let’s focus on what real architectures can do. 😉

re “Everything is Interpreted”

I’ve said everything is interpreted if it’s above the hardware logic gates. That would be microcode up. That’s not an issue. My security engineering framework says we must verify correctness of every layer. So, the question is, does the layer the developers are working with make it easy to write secure or safe code? There have been many architectures that arguably made that work. There were many that didn’t. You’re probably reading this post on one. 😉 So, we identify what architectural attributes contribute the most to security/reliability while costing us the least. An optimal version of such a system is the one that we should be using. There might be many. And so I continue exploring…

@ Anon

I agree about coming up with known good standards. In previous conversations, I wanted them at EAL6-7. I might drop to EAL5-6 if strong controls on code quality exist. I want one for every core service or need. The political support won’t happen without a miracle. The funding might come from private parties, though.

@ Malachi J

“Openssl= Tragedy of the commons”

Haha. Nice.

@ Anura

Nice post and good plan overall. I see you also doubt that the OSS movement can accomplish this. I’d love to see them prove me wrong but it’s usually the other way around. With vulnerability and failure, I honestly hate being right. I want to see them put energy into processes that result in something of near perfection. I might be wanting that for a long time…

@ DB re Anura’s comment:

“problem with the open source movement is the lack of organization”

I think Anura is pointing out that, despite its potential, the vast majority of open source work is shit from a security/quality perspective. Even most of the “security-oriented” projects don’t fully use what’s known to benefit security. They often even use what hurts security. I think Anura is like me in believing that most involved in open source won’t get the job done. It’s a nice concept with plenty of benefits and potential. Just not real security for probably 99+% of projects.

yesme April 23, 2014 1:06 AM

@Nick P

You are forgetting that the SSL/TLS protocol, with all the options and revisions, is hard to implement and will probably take up to 100,000 lines of C code. For reference, a microkernel, the most basic part of an OS, is usually less than 10,000 lines of code.

So it’s not only the implementation that’s to blame, it’s the protocol itself. (Thank the fuckers at the NSA for that.)

WeavelKernel April 23, 2014 2:03 AM

The problem any code faces, closed or open, is that malicious attacks don’t care about the design brief or standards; they simply try to execute or deliver something that was never intended.

Modern consumer computing is weak by design, yet even open source platforms are at times vulnerable due to user error, bugs or unforeseen circumstances. At least the open source platforms try to implement a design that is more secure and has an open audit system, plus they try to enforce a level of discipline that fosters secure practices and the need to learn in order to operate. The closed source consumer platforms do not require users to properly learn to operate their systems, and they combine that with an insecure privilege system that can easily be sidestepped or ignored.

The fact that users can operate machines with no understanding of how they work and absolutely no qualifications or self-taught competency, a situation introduced by these closed source platforms, only encourages the development of software that takes very little knowledge or understanding to use, and very lax security practices.

Jacob April 23, 2014 5:37 AM

@ Nick P

“Both (Aircraft business and medical trials) suffer from the basic issue where profit makes actors try to BS the certifiers into thinking a product has greater quality/safety than it does.”

I totally agree with you on the issue you brought up, but when I referred to “Bad Actors” I meant actors who interfere with the product itself, not in the decision making process during development.

When you buy a plane from Cessna or drugs from Hoffmann La Roche, you can be pretty sure that no bad actor has subverted the product, either by installing a device which will take over the navigation system at will, or by contaminating the pills with a special poison especially concocted for a “targeted” individual like you. Subversion in these fields is possibly done in the distribution channels via “interdictions”.

Contrast the above with computers and software: when you buy a CPU from Intel, an OS from Microsoft or a switch from Cisco, there is a non-zero chance that the product has a special feature in it that will cause you harm – either by siphoning off your business plans, your private comms and interests, or your political views for possibly nefarious purposes.

LightBit April 23, 2014 7:23 AM

@yesme

PolarSSL has 50,000 lines, and it could have fewer. But I agree SSL/TLS is too complex.

John Hardin April 23, 2014 8:27 PM

The problem any code faces, closed or open, is that malicious attacks don’t care about the design brief or standards; they simply try to execute or deliver something that was never intended.

…which is why fuzz testing, and having someone “malicious” on your QA team who attacks the software, are important.
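
For example, a minimal libFuzzer-style harness (parse_record() is a made-up target with a deliberately Heartbleed-like overread, not real OpenSSL code) lets the fuzzer play that malicious role:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical record parser with a Heartbleed-style flaw: it trusts a
 * length field inside the message instead of the actual buffer size. */
static void parse_record(const uint8_t *buf, size_t len)
{
    uint8_t copy[64];
    if (len < 1)
        return;
    size_t claimed = buf[0];                 /* attacker-controlled length */
    if (claimed > sizeof(copy))
        claimed = sizeof(copy);
    memcpy(copy, buf + 1, claimed);          /* BUG: may read past 'len' */
    (void)copy;
}

/* libFuzzer entry point: the fuzzer mutates inputs, and with
 * AddressSanitizer enabled the overread above is reported as a crash. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;
}
```

Built with something like clang -fsanitize=fuzzer,address, the bug shows up as soon as the fuzzer tries a short input carrying a large claimed length.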

Nick P April 23, 2014 10:01 PM

@ yesme

“You are forgetting that the SSL/TLS protocol, with all the options and revisions, is hard to implement and will probably take up to 100,000 lines of C code. For reference, a microkernel, the most basic part of an OS, is usually less than 10,000 lines of code.”

I can’t see why its size matters if it’s implemented via a language or architecture largely immune to code injection that also allows easier code review. If anything, my recommendation makes things easier as the code gets bigger.

That said, the way to implement that on a mainstream architecture is to model it with interacting state machines. You break it into many functions that are each small and easy to verify. You use state machines to allow easier formal analysis for errors & control flow issues. You consider various types of problems at each component or layer, along with their interactions. The largest part you keep in your head at once will be far less than 100,000 lines of code. And you can do most of it in a safe language that catches common sources of code injections.
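
As a toy illustration of that style (states and events invented here, far simpler than real TLS), every transition is a small, enumerable function, and anything not explicitly allowed lands in an error state:

```c
/* Hedged sketch, not the OpenSSL code: model the handshake as an explicit
 * state machine so every transition is small and checkable. */
typedef enum { ST_INIT, ST_HELLO_SENT, ST_KEY_EXCHANGED,
               ST_ESTABLISHED, ST_ERROR } state_t;
typedef enum { EV_SEND_HELLO, EV_RECV_HELLO, EV_KEYS_OK, EV_BAD_MSG } event_t;

/* Anything not listed is rejected, which is what makes formal or manual
 * review of the control flow tractable. */
state_t step(state_t s, event_t e)
{
    switch (s) {
    case ST_INIT:          return (e == EV_SEND_HELLO) ? ST_HELLO_SENT    : ST_ERROR;
    case ST_HELLO_SENT:    return (e == EV_RECV_HELLO) ? ST_KEY_EXCHANGED : ST_ERROR;
    case ST_KEY_EXCHANGED: return (e == EV_KEYS_OK)    ? ST_ESTABLISHED   : ST_ERROR;
    case ST_ESTABLISHED:   return ST_ESTABLISHED;      /* application data phase */
    default:               return ST_ERROR;
    }
}
```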

Of course, we’re seeing a very different approach taken by the OpenSSL developers. It’s paying off too. For the attackers. 😉

@ Jacob

Thanks for the clarification. I see what you’re saying now. It makes sense. It is odd that my field is one of the few where people worry about these things. I think the reason is that what’s being engineered, the computer, is an extremely versatile technology that’s trusted in many endeavors. That’s plenty opportunities + plenty misplaced trust. Typically = intense efforts by rather unscrupulous people.

Garrett April 24, 2014 7:31 AM

@ yesme

Now that we have experience with SSL/TLS implementations, has there been any consideration of doing audits at the spec level? Going and officially deprecating a number of the options which are least likely to be used, least beneficial, and which add substantial complexity? If we attack part of this at the specification level, we would drastically reduce the amount of worry and work at the implementation level.

yesme April 24, 2014 8:20 AM

@Garrett

“Now that we have experience with SSL/TLS implementations, has there been any consideration of doing audits at the spec level? Going and officially deprecating a number of the options which are least likely to be used, least beneficial, and which add substantial complexity? If we attack part of this at the specification level, we would drastically reduce the amount of worry and work at the implementation level.”

To be honest, I don’t have high hopes that we can expect serious changes from the IETF. There are too many islands, and they are all improving their own corner of the OSI model. What’s lacking is the big picture. We have way too many protocols doing roughly the same thing. That needs to change. For example, there are NFS, WebDAV, FTP, CIFS/SMB, SFTP, and FTPS, and simply put, they all provide remote access to files (HTTP too). So the level of NIH is high. The same with H.264/265 and Google’s VP8/9. There are just too many forces playing there.

That’s why I don’t expect real and thorough changes at the IETF. It shouldn’t be a committee, it should have good and powerful architects. And it also shouldn’t play the devastating patent games.

In short, they should change their agendas and the end user should be paramount.

But I don’t expect that to happen any time soon.

Joe April 24, 2014 11:23 PM

“The router situation is as touchy as a gasoline spill in an enclosed shopping mall.”

Why is this guy, or anyone, concerned about their or any internet router? So long as it routes, who cares?

Use end to end encryption, assume everything in between is compromised. Any other course of action is illogical, always has been.

Figureitout April 25, 2014 1:27 AM

Joe
–You haven’t thought long enough about the problem eh? It’s my hardware and I should have complete control of it. Sure the modem itself needs to be more than just a doorstop; encoding data w/ an actual secure protocol. Anonymity for starters. Having your comms routed thru incriminating sites; leading to false accusations. Wifi security (which is bordering on worthless and extremely fun to hack). Worthless firewalls and hidden ports that hide packets coming in and out of your network (and into your computer). If you assume everything in between is compromised, what about the chips you’re encrypting w/? If you have a nice double or triple shielded room w/ 1 or 2 noise rooms and bare minimum of 20 ft of dirt to force some heavy duty radar to try to penetrate where the encryption is happening, props to you. I envy your setup. Otherwise, you’re getting ass-raped and not even aware of it…

security researcher May 1, 2014 11:56 PM

Even with the widespread publicity of HeartBleed and availability of a patch for servers, some major online retailers still have not secured their E-Commerce sites against this vulnerability. I won’t mention which ones, but I have informed them of the vulnerability and am watching to see how quickly they get the job done.

If they do not fix the problem promptly and inform their customers to update their passwords once finished patching and updating certificates and private keys, I will publicly start revealing their websites, server platforms, and how long they have known about the vulnerabilities and failed to act, as well as their previous fingerprints and if they in fact update their certificates and keys. I will continue monitoring and recording their progress.

bresketaltup August 11, 2014 9:32 AM

Dan Geer wrote (said) in his Blackhat talk :
“things that need no appropriations are outside the system of checks and balances”.
He speaks of the Executive and of the Legislature but nowhere mentions the Judiciary.
In the case of an effective appeal to Justice, full access to the facts is required. Open source code delivers that. Closed source does not.
Hats off to open source. Let’s get a decent Freedom of Information regime for the rest, one that forces closed source code to be revealed when used in critical areas. Dan Geer says all areas are critical. Let’s just ban closed source 🙂
In their own way the blackhat community are the judiciary – acting without an axe to grind; impartial; open in its proceedings and open to all, just like the Justice system. Oh, wait, aren’t there secret courts and gag orders and …
