More on the Vulnerabilities Equities Process

Richard Ledgett—a former Deputy Director of the NSA—argues against the US government disclosing all vulnerabilities:

Proponents argue that this would allow patches to be developed, which in turn would help ensure that networks are secure. On its face, this argument might seem to make sense—but it is a gross oversimplification of the problem, one that not only would not have the desired effect but that also would be dangerous.

Actually, he doesn’t make that argument at all. He basically says that security is a lot more complicated than finding and disclosing vulnerabilities—something I don’t think anyone disagrees with. His conclusion:

Malicious software like WannaCry and Petya is a scourge in our digital lives, and we need to take concerted action to protect ourselves. That action must be grounded in an accurate understanding of how the vulnerability ecosystem works. Software vendors need to continue working to build better software and to provide patching support for software deployed in critical infrastructure. Customers need to budget and plan for upgrades as part of the going-in cost of IT, or for compensatory measures when upgrades are impossible. Those who discover vulnerabilities need to responsibly disclose them or, if they are retained for national security purposes, adequately safeguard them. And the partnership of intelligence, law enforcement and industry needs to work together to identify and disrupt actors who use these vulnerabilities for their criminal and destructive ends. No single set of actions will solve the problem; we must work together to protect ourselves. As for blame, we should place it where it really lies: on the criminals who intentionally and maliciously assembled this destructive ransomware and released it on the world.

I don’t think anyone would argue with any of that, either. The question is whether the US government should prioritize attack over defense, and security over surveillance. Disclosing, especially in a world where the secrecy of zero-day vulnerabilities is so fragile, greatly improves the security of our critical systems.

Posted on August 9, 2017 at 6:40 AM • 88 Comments

Comments

Scissors August 9, 2017 8:02 AM

Ledgett has some good points. However, he needs to brush up on WannaCry before claiming it’s a prime example of obsolete software (Win XP) getting compromised. A huge majority of WannaCry targets were running Windows 7 (https://arstechnica.com/information-technology/2017/05/windows-7-not-xp-was-the-reason-last-weeks-wcry-worm-spread-so-widely/). I guess some could complain that Win 7 isn’t patchable in every situation, but that’s more a design issue than an insufficiency-of-patching issue.

Robert August 9, 2017 8:07 AM

Richard’s statement that the blame lies with the criminals is shortsighted and misguided. After all, our society deems a person who takes no action to prevent a murder guilty of murder. The same is true for many crimes. In the same manner, an organization, government, or individual that does not disclose a vulnerability responsibly to the software developer is also guilty of the exploitation of software vulnerabilities.

Dr. I. Needtob Athe August 9, 2017 8:25 AM

Ledgett: “As for blame, we should place it where it really lies: on the criminals who intentionally and maliciously assembled this destructive ransomware and released it on the world.”

Bruce Schneier: “I don’t think anyone would argue with any of that, either.”

It sounds rather odd to hear Bruce Schneier, of all people, agree that the blame for weak security belongs not on its designers but on those who discover and exploit it. In fact, it seems contrary to everything I’ve been reading here for roughly the last 20 years.

This week in ASVAB Waivers August 9, 2017 8:50 AM

Ledgett, classic army. OCD and not too bright, fake degrees from Close Cover Before Striking School of Cosmetology and Strategic Intell, famously explained that Snowden would cause chem-bio apocalypse (Did he mean, in Kenema?)

Ledgett’s tax parasites whipped up dozens of handy utilities for exciting home business opportunities like Eternal Rocks, bobbled them all over the Internet for everybody in the world to play with – but it’s the fault of… criminals.

In other news, Oppenheimer blames the H-bomb on Russia.

It’s going to be hilarious when China finally gets fed up with these beltway morons.

Mario August 9, 2017 8:58 AM

[…] And the partnership of intelligence, law enforcement and industry needs to work together to identify and disrupt actors who use these vulnerabilities for their criminal and destructive ends. […]

So, who decides whether the use of these vulnerabilities is for good or for criminal purposes? The US government? And who are they?
I think you should start to think in a more “global” way; not everyone is American, and there are a lot of interests everywhere…

regards

JohnnyS August 9, 2017 9:23 AM

The idea that the blame should be placed only on the criminals who write malware and exploit users is deeply wrong.

The mantra currently endemic in the industry is “all software has flaws”. Every developer and peddler of software repeats this mantra as though it somehow excuses their sloppy, insecure product. The fact that software we depend upon for our livelihoods has been produced and sold with little regard for security and with vast and painful consequences for the users is a high crime in itself.

OpenBSD is not the perfect solution, but it does prove that competent developers with a strong focus on security CAN create a very secure and robust system that can provide a much higher level of protection to the user than the usual suspects.

When someone says “software vulnerabilities are inevitable”, make sure you do NOT buy software from them. Instead, try to work out a way with your government or other regulatory bodies to hold their feet to the fire on security problems. It’s time they were held responsible, much as the car companies were held responsible after their greed and utter disregard for their customers’ well-being was exposed by Ralph Nader in “Unsafe at Any Speed”.

Ollie Jones August 9, 2017 10:08 AM

Dr. Schneier hints at this in the last sentence: There’s a system resilience argument to make.

Obviously, if vulnerabilities are “retained for national security purposes,” that requires a zero-defects security posture from the retaining organization. If somebody manages to exfiltrate those vulnerabilities, systemwide trouble can result. The bad actor can quietly hang on to a vulnerability and exploit it years later, or use it immediately. So, this kind of retention is, basically, security by obscurity: a brittle strategy.

If well-funded state actors can’t manage to keep these kinds of secrets, it’s likely that nobody can.

On the other hand, disclosing vulnerabilities is a resilient strategy.

Who? August 9, 2017 10:17 AM

Proponents argue that this would allow patches to be developed, which in turn would help ensure that networks are secure. On its face, this argument might seem to make sense — but it is a gross oversimplification of the problem, one that not only would not have the desired effect but that also would be dangerous.

The only gross oversimplification of the problem I can see is thinking vulnerabilities are our friends. Does the NSA want secure networks? Then they must fix all known vulnerabilities in both U.S. and non-U.S. software. That is the only way to stay secure: spending effort on the defensive side of the NSA.

The only reason to stockpile vulnerabilities is to make mass surveillance much easier, something that can hardly be described as “national security.”

As noted by lots of readers on this blog, if you are a target of the NSA all hope is lost. Offensive capabilities should not come at the cost of weakened network security.

Dirk Praet August 9, 2017 10:43 AM

And the partnership of intelligence, law enforcement and industry needs to work together to identify and disrupt actors who use these vulnerabilities for their criminal and destructive ends.

Err, did he just say we need to put the NSA and other TLAs on trial for exploiting vulnerabilities for criminal and destructive purposes?

… if they are retained for national security purposes, adequately safeguard them …

Ah, I see. We should all work together to disrupt such abysmal behaviour, but it’s ok if we (and our valued partners) do it because we are the good guys and you should not get in our way or we will hit you with very frivolous litigation based on overly broad statutes we actually pushed for ourselves.

As long as every government on the planet thinks like that and remains convinced that NOBUS is a very real thing, then exactly nothing is ever going to change.

ab praeceptis August 9, 2017 10:52 AM

JohnnyS

Yes and no.

“OpenBSD is not the perfect solution, but it does prove that competent developers with a strong focus on security CAN create a very secure and robust system that can provide a much higher level of protection to the user than the usual suspects.”

Sorry, no. It does prove that with lots of expertise, good will, and effort one can create a somewhat less insecure system. Whether the effort/result ratio is a healthy one probably depends on one’s position.
Don’t get me wrong. Those people are among the best and doubtlessly have not only the very best intentions but also invest heavily in terms of effort. That it merely improves the situation a bit isn’t their fault. The most severe boundaries are posix and C.

And technicalities aren’t the only and possibly not even the major issue. Example: just look at how large corps or governments handle in-house vs. consumer software (and almost certainly hardware) engineering. Typically you will find quite different mindsets and paradigms. The US of A gov., for instance, knew about the problem decades ago, and they did have it solved (e.g. Ada). That they didn’t follow up is due to other factors (see below). The french have also made major efforts. google (go) had safety as an important criterion, too, and so did facebook with their efforts around php, as well as others. Finally, evil corp (microsoft) itself sunk gazillions into research (often simply buying solutions from the french and others).
Compare that with what we, the people, get.

And that’s, at least in part, our own fault. Why? Not simply because we eat what they serve, but because safety is simply not on our wishlist – or at least we associate it with “state, government”. The wishlist is well known: bling bling, gadgets, jumping emojis and other trash and maybe some social status elements.

If tomorrow morning some company were to offer a tablet with a reasonably safe OS, very few would buy it. One obvious reason is that “being safe” and “offering tons of gadgets and plunder” don’t go together well; most potential buyers wouldn’t see a safe tablet but rather a boring tablet that is uncool and undesirable, even risking ridicule from peers.

The primary evil guy in all of that is the state. It is both the understanding of the people and (one of) the official raisons d’être of the state that it protects its people (and, btw, it’s also its duty to properly educate people instead of stupidizing them into helpless working, tax, and consumer drones).
Unfortunately those who implement “the state” have other interests. For them stupid, helpless consumers and citizens are far more desirable than the constitutional-theoretical “we, the people, are the sovereign”. Just ask some spooks …

And the large corps? Well, their holy formula is return on investment. Offering colourful, cool plunder sells. And they are not forcing it down our throats; no, we ask for it.

Non Creditis August 9, 2017 11:23 AM

I don’t believe anything anyone says from NSA.

They have acted against the rule of law, against the Constitution and against the people for so long that they have lost the ability to say anything except self-serving double talk. I blame the NSA for much of the current chaos on the www.

I hope that’s not too nuanced to understand.

JohnnyS August 9, 2017 11:58 AM

@ab praeceptis

“Sorry no” to your “Sorry no”. 🙂

OpenBSD is a bit better than “somewhat less insecure”. It really is a solid OS that is “much less insecure” than the current commodity crop. Perfect, no. But if you boot it up on the Internet it doesn’t get pwned instantly. Considering the toxic crap bouncing off my firewall, that’s a big leap in security.

The rest of your libertarian-esque argument boils down to “caveat emptor” and “the end user is too stupid to want security, so they give them insecure stuff because money.”

We have billions of dollars in losses on an annual basis right now and it’s only getting worse: I don’t think the users will continue to not care forever.

Whether we hold the makers of the insecure commodity software to account through government regulation, class action lawsuits or just a good old-fashioned lynching makes no difference to me: I don’t follow any particular “ism”. As long as it gets done.

No disrespect meant.

Hahaha August 9, 2017 12:01 PM

We need CALEA backdoors to watch the bastards we let in… and you.
There is a part of IT subculture that is just like police and the govt sector. If we solved problems and redesigned an OS, jobs would be lost. Hidden motivations to not care about prevention. I love how outlier conditions are shaping this country into a brown curl on the grass.

ab praeceptis August 9, 2017 12:21 PM

JohnnyS

“But if you boot it up on the Internet it doesn’t get pwned instantly. Considering the toxic crap bouncing off my firewall, that’s a big leap in security.”

Pretty much every unix has a firewall. One might reasonably argue that OpenBSD’s pf is better in this or that regard, but the differences aren’t major (plus, there are also negative sides, e.g. OpenBSD’s pf being single-core). So your argument is not a major one.

Also keep in mind that it’s usually not the OS per se that gets breached but rather diverse applications (which are the same on all BSDs and often on linux, too).

But that wasn’t the point anyway. My point was that any posix-conformant/implementing system is limited by that, and that any system written in C is limited by that. While one could theoretically “armour” the whole code base by adding annotations (incl. separation logic, which isn’t exactly easy to do) and then statically verifying it, the effort necessary for that would be prohibitive and much more expensive than starting fresh with a better-suited language.

It’s not that nobody has tried, or at least tried, to create some infrastructure and tools to at least partially automate the transformation. The problem, though, is what makes C a bad choice in the first place, first and foremost ambiguity. Other prohibitive factors, to name an example, are “smart”(?) developer tricks that an automated transformation machinery couldn’t grok.
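A minimal sketch of the kind of ambiguity meant here (illustrative only; any C compiler will accept it, and different compilers or optimisation levels may legitimately give different results):

    #include <stdio.h>
    #include <limits.h>

    int count = 0;
    int bump(void) { return ++count; }

    int main(void)
    {
        /* Unspecified evaluation order: the two calls may run in either
           order, so a tool cannot assign one fixed meaning to the
           expression without extra assumptions. */
        printf("%d %d\n", bump(), bump());   /* prints "1 2" or "2 1" */

        /* Undefined behaviour: signed overflow. The compiler is free to
           assume x + 1 > x and remove the check entirely. */
        int x = INT_MAX;
        if (x + 1 < x)
            printf("wrapped\n");
        else
            printf("no wrap observed (check may have been optimised away)\n");

        return 0;
    }

Neither snippet has a single well-defined meaning that an automated verifier or translator could rely on, which is the practical obstacle being described.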

As for the rest (sheeple, gov + large corps, etc.) I merely spoke about empirical observations. microsoft might be the best example: they do much in terms of developing tools to produce reliable and safe code, and they do use it, or at least have begun to use it, internally. What they produce for the sheeple should be obvious; example: tiles, i.e. giving the desktop OS a more “up to date”, smartphone-like interface. In other words: the good stuff for ourselves, the crap for the sheeple.

As for your bet on society not tolerating that much longer, I’m afraid you’ll end up in a long line of others who assumed so and failed to see their guess become reality. Most sheeple, pardon me, are just way too ignorant, clueless, and careless.

Dirk Praet August 9, 2017 1:06 PM

@ Non Creditis

I don’t believe anything anyone says from NSA.

OK, but wouldn’t “Non Credo” or “Non Credimus” (plurale majestatum) have been a better handle than a second-person plural then?

JohnnyS August 9, 2017 3:11 PM

@ab praeceptis

I was actually referring to putting OpenBSD on the Internet without a firewall: With the default install, assuming the installer put strong passwords in when prompted during the initial install and didn’t open anything else up, the system should tolerate being on the Internet for a reasonable time without being hacked. This is down to a “default hardened” installation process and good code practices and reviews.

Agreed on your point that it’s usually the apps that are exposed.

As for the issues with C you very correctly point out, it goes deeper: the basic physical amd64/i386/x86 architecture in use is designed to support a system that puts code, data and stack in the same memory space (ref: “Smashing the Stack for Fun and Profit”, http://insecure.org/stf/smashstack.html). This isn’t just a problem with C, but with the memory handling in the amd64/i386 processors, so just getting rid of C won’t solve the problem.
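To make that concrete, here is a minimal sketch of the class of bug that paper describes (illustrative only; exact behaviour depends on compiler and platform, and a struct is used so the adjacency of the two fields is explicit rather than compiler-chosen):

    #include <stdio.h>
    #include <string.h>

    struct frame {
        char buf[8];        /* undersized buffer                       */
        int  authorized;    /* adjacent data that an overflow clobbers */
    };

    int main(void)
    {
        struct frame f;
        f.authorized = 0;

        /* 12 bytes copied into an 8-byte buffer: the excess spills into
           f.authorized. Real stack-smashing attacks use the same class
           of bug to overwrite a saved return address instead. */
        const char attacker_input[12] = "AAAAAAAAAAA";   /* 11 'A's + NUL */
        memcpy(f.buf, attacker_input, sizeof attacker_input);

        printf("authorized = 0x%x\n", f.authorized);     /* no longer 0 */
        return 0;
    }

Mitigations like stack canaries, W^X and ASLR exist precisely because the underlying flat memory layout makes this kind of spill possible in the first place.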

I acknowledge the lack of interest by the public, but back in the ’60s this nutty lawyer Ralph Nader actually DID manage to raise enough awareness among the electorate to force changes to the way the carmakers were building deathtraps. It wasn’t pretty, but I can honestly say I’d rather be in an accident at 40 mph in my ’09 Subaru than at 20 mph in a ’58 Impala.

So it’s a matter of political leadership: Get the right person leading the charge and the electorate will follow, sheeple or not. Ralph Nader did it. I’m not an American so I don’t know if you’re still breeding his type, but if you are you may want to poke one of them towards this issue.

ab praeceptis August 9, 2017 3:24 PM

JohnnyS

I’d certainly be among the first to shout “hurray” if you were right about “we, the people” (or a representative) succeeding in bringing about positive change – but I’m not holding my breath.

mzoef August 9, 2017 3:49 PM

@JohnnyS: A car accident is tangible and kills people. That’s very far from a smartphone exploit that, at worst, could expose one’s bank account or messages and photos. Also, attacks on smartphones aren’t exactly common yet; I can’t think of any person I know who was a victim of anything in recent years.

Maybe the state of consumer tech security will change, but it would take a massive attack (like ransomware campaigns that effectively target most computers) to make it happen quickly. For now, people are obviously satisfied with their mediocre OSes, as they don’t suffer from them.

bob August 9, 2017 4:19 PM

@Robert

“our society deems a person not taking actions to prevent a murder is guilty of murder”

What the hell are you talking about?

cg August 9, 2017 5:13 PM

Listen to this Richard Ledgett fellow: “former Deputy Director of the NSA.” The way he runs his mouth, he’s a straight-up old-guard KGB double agent who somehow got into the NSA through Katherine Archuleta and her cronies’ lax security.

… help ensure that networks are secure … — … is a gross oversimplification of the problem, one that not only would not have the desired effect but that also would be dangerous.

They don’t want our networks to be secure. They hack into them for money. “They,” you might rightly ask. “They.” They are the pro-Russian thieves in law who are well nigh unto having the U.S. banks and other critical online infrastructure at their mercy for any amount of money they want, blackmail, and worse.

Malicious software like WannaCry and Petya is a scourge in our digital lives, and we need to take concerted action to protect ourselves.

Come on, you just have to put up with it. The thieves in law are in control.

That action must be grounded in an accurate understanding of how the vulnerability ecosystem works.

We buy and sell zero-days for money, and it would undercut our market if vendors got too smart for their own good and started “fixing” those vulnerabilities.

Non Creditis August 9, 2017 6:02 PM

@Dirk Praet

Yes, of course.

Certainly ‘Non Credo’ – ‘I don’t believe’ – would work.
‘Non Credimus’ would work too, I hope.

But I blame Google, which repeatedly told me ‘I don’t believe’ is ‘Non Creditis’.

I sat for a while trying to dredge up that Latin class from long, long ago and then realized you were right and I shouldn’t believe Google all the time, …either.

Government and corporate internet organizations certainly do have a credibility problem these days, however, no matter what you call it.

Sancho_P August 9, 2017 6:17 PM

The crux: “adequately safeguard them”:
No one can.
There are dozens of reasons, from personal (who has access, and would you trust it when your president has access?) to technical issues:
The enemy knows all “known” and probably several unknown vulnerabilities, granted.

Whom to blame?
First blame those who keep us intentionally insecure, then the other criminals.
Who would want their devs to leave vulnerabilities in their product?

A very short-sighted article, as is often the case at lawfareblog, unfortunately.

Not sure if I understand correctly, but I think @Bruce’s last paragraph hints that he’s (at least slightly) pro vulnerability disclosure? – Pssst, it’s dangerous!

Only it can’t be the gov’s (via taxpayer) duty to find and disclose them –
In capitalism the gov has to govern (= set the rules for) the “free market”, not to disable them (e.g. lex Bill G).
Capitalism means punishing them if they fail.

Btw. there is no question about priority, because attack means aggression.

Wael August 9, 2017 6:31 PM

@Non Creditis,

Who’s who on this blog:

then realized you were right and I shouldn’t believe Google all the time, …either.

Google translate seeks @Dirk Praet’s and @Czerno’s assistance with Latin translations. @Ratio does Arabic (but his Phoenician apparently sucks.)

Anon August 9, 2017 10:35 PM

@Dr. I. Needtob Athe:

It sounds rather odd to hear Bruce Schneier, of all people, agree that the blame for weak security belongs not on its designers but on those who discover and exploit it. In fact, it seems contrary to everything I’ve been reading here for roughly the last 20 years.

Agreed. I wasn’t expecting to read that at all.

@All:

If the Government’s first job is protection of the people, then surely they have a duty to disclose any discovered vulnerabilities to the software author concerned as they find them?

Conversely, if they know of a vulnerability but choose not to get it fixed because it could be useful for some other purpose, then the Government is now passively working against the people they are meant to protect.

The Government’s first priority should be protection of the people, because as much as they may lose a specific method of entry into an enemy’s systems, they have equally denied the enemy use of that same technique, securing their people (and themselves) against attack via the same methods/vulnerabilities.

As the WannaCry and Petya viruses prove, an exploit is not just for use by the “good guys”. Once just one Bad Guy(TM) knows about it, it’s game-over. Patch or be pwned.

We wouldn’t tolerate a manufacturer of a car with defective brakes not recalling the cars to be fixed because the defect only occurs if you open the trunk with two hands on a Monday morning, so why do we accept it when it comes to software?

If they know about the problem, it should be reported (in confidence), and fixed. To do anything else is crazy.

Anon August 9, 2017 10:41 PM

Is electronic defense/offense symmetric? Is the loss of capability offset equally by the gain in security, or is the improvement better or worse overall than the loss of capability?

OK – so we can’t hack them, but they can’t hack us, either. Surely this is the better posture?

Ph August 10, 2017 2:13 AM

A very nice example of doublespeak.

What I read is that the NSA is allowed to hunt and use vulnerabilities because the other players in the chain don’t have all their stuff together.

There are no real arguments against putting the NSA in the disclose role as opposed to the hoard-and-(ab)use role.

Clive Robinson August 10, 2017 3:29 AM

With regards,

As for blame, we should place it where it really lies: on the criminals who intentionally and maliciously assembled this destructive ransomware and released it on the world.

Much as people dislike it, he is correct, in that,

I assume nobody forced those who wrote the ransomware to write it. That is, it was they alone who deliberately and with forethought created the ransomware with an intent to commit a criminal act, or to appear to do so[1].

So yes, as a criminal act, those who wrote the ransomware are solely to blame in the legal sense for what they chose to do with it.

What most people here are arguing about is who or what knowingly or unknowingly contributed to the crime. Which is where you fall into Richard Ledgett’s logical trap.

Richard Ledgett is pulling a fast one on the security community and he is getting away with it. He has deliberately focused on a criminal act and then used it to excuse other behaviour. It’s something that the better-paid lawyers and political players do constantly.

So let’s argue not, as he does, about the criminal act and who is legally responsible for it, but more as crash investigators do: investigate and find those who knowingly or unknowingly contributed to the state of affairs that allowed the criminals to commit their particular act of criminality.

From what we know in reverse order,

1, The NSA exploit was released.
2, The NSA exploit was obtained.
3, The NSA used the exploit.
4, The NSA developed the exploit.
5, The NSA chose to weaponise the vulnerability.
6, The NSA obtained the vulnerability.
7, The NSA set up a weaponisation unit.
8, The NSA chose to manufacture cyber-weapons.
9, The NSA decided on offence not defence.

But for all that to be viable and happen we have to look at why step 6 was possible.

Put simply, the software industry is predicated on churn. That is, to produce software at the low price point it does, it needs to generate continuous income through high-volume sales. Thus the industry is focused on rapid development of product and on incentivising people to spend a little money frequently. Part of that is removing from the development process anything that is unnecessary to it.

As we know, security is not something the general market has so far wanted to pay for.

It could be argued that those who choose other OSes are doing so for security or other reasons. The biggest use of *nix is currently the Linux kernel and GNU software, tucked out of sight behind the Android front end. None of which is very secure, as active jailbreaking and malware delivered via applications demonstrate.

When you line up the commercial and FOSS OSes on consumer-level products, you find a general trend indicating that security is a low priority for end users. Similar trends show for applications.

Thus it’s going to be hard to argue against the observation that security is not something people generally want to factor into their purchasing choices.

However, with the general perversity of humans, when something goes wrong they don’t wish to blame themselves: “It has to be the other guy who’s to blame” and “It’s the person to blame who must make restitution”…

Yes, there are a lot of people who are to blame in very many ways, so many in fact that we are nearly all to blame. It is our patronage and money that sets the direction of the market. The market dynamically responds; thus we get what we pay for.

Unfortunately there is a non-linear effect in progress which is akin to the economic effect noticed by and attributed to the English economist William Stanley Jevons. Known as the Jevons Paradox, it dates back to 1865. It describes an effect technology has on a market that appears perverse. Jevons noted that as a fuel became more efficiently used due to improvements in technology, the demand for the fuel would rise, not fall. The reason is that the effective reduction in cost made it a more viable power source than alternatives as well as making new processes viable. Thus economic activity increased and demand became higher.

Almost the entire ICT industry is now a similar economic paradox as well as showing other anomalies[2]. Which tends to suggest that the usual economic measures are not likely to work the way we might predict from other markets.

Thus we are left with the question of how to change an anomalous market for the better.

The answer is possibly to be found in market regulation and legislation.

As has been pointed out before, the automobile market was in an undesirable state, and as a result we ended up with changes in legislation. Quite contrary to the usual free-market mantra, the legislation moved the market into a different operating position. It stopped being a race to the bottom and became innovation-led.

I suspect that the answer lies in “appropriate” market regulation and legislation. However, we have to be careful what we wish for; most ICT legislation since the 1980s has been entirely the wrong sort.

[1] The jury is still out on whether it really was a ransomware attack or a cyber-weapon designed to look like ransomware, because as many have pointed out, both the software and the ransom backend implementation were amateurish in nature, but very effective in attack.

[2] Another anomalous behaviour of the ICT market is its inherent “deflationary activity”. The usual argument is that due to inflation the price of a “good” will rise with time; thus it is better to buy now than later, as you will get a given “good” at a lower price. The ICT market is different in that it has “price points” that remain more or less stable, but as time goes forward you get more for the price rather than less. Thus you have the argument that it is better to wait to buy, as you will get more for your money. Simple economics would indicate that such a market would fail, as nobody would logically buy today and thus there would be no income to sustain the market. The fact that the ICT market survives and apparently thrives suggests that simple economics is missing something…

Inside Threat Model August 10, 2017 5:06 AM

@bruce @moderator
nothing on the Google issue of the past few days? It seems rather relevant to Schneier on Security, unless of course it impacts EFF funding.

Ergo Sum August 10, 2017 6:37 AM

@Clive…

Yes, software companies are guilty of rushing products to market in order to keep the cash register ringing, but…

From what we know in reverse order,

1, The NSA exploit was released.
2, The NSA exploit was obtained.
3, The NSA used the exploit.
4, The NSA developed the exploit.
5, The NSA chose to weaponise the vulnerability.
6, The NSA obtained the vulnerability.
7, The NSA set up a weaponisation unit.
8, The NSA chose to manufacture cyber-weapons.
9, The NSA decided on offence not defence.

But for all that to be viable and happen we have to look at why step 6 was possible.

From my viewpoint, there are three possible options for “step 6”:

  1. Developed in-house
  2. Purchased on the exploit market
  3. Provided by the complicit software company

#1 and #2 are essentially the same, if and when #2 was purchased with exclusive rights.

The question is: is the vulnerability in question a programming logic/coding error, or is it correct programming logic/coding? If it’s the latter, #3 should really be called a backdoor, in which case a certain sequence of letters and/or commands could provide “god mode” access to local/remote user(s).

For example:

Remote code execution vulnerabilities exist in the way that the Microsoft Server Message Block 1.0 (SMBv1) server handles certain requests. An attacker who successfully exploited the vulnerabilities could gain the ability to execute code on the target server.

To exploit the vulnerability, in most situations, an unauthenticated attacker could send a specially crafted packet to a targeted SMBv1 server.

The security update addresses the vulnerabilities by correcting how SMBv1 handles these specially crafted requests.

Source: https://technet.microsoft.com/en-us/library/security/ms17-010.aspx

Without knowing the content of “a specially crafted packet”, it’s hard to say which category this vulnerability falls under.

The hardcoded local IP address in the SMB exploit does not instill much confidence that this was a programming error:

After the initial SMB handshake, which consists of a protocol negotiate request/response and a session setup request/response, the ransomware connects to the IPC$ share on the remote machine. Another related aspect of this attack is that the malware is configured to connect to a hardcoded local IP, as shown in Figure 1.

Source: https://www.fireeye.com/blog/threat-research/2017/05/smb-exploited-wannacry-use-of-eternalblue.html
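To make the bug-versus-backdoor distinction above concrete, here is a hypothetical, minimal sketch; neither function corresponds to any real product's code, and the "magic" value is invented purely for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    /* "Coding error": the output length is never checked, so oversized
       input corrupts adjacent memory. Nobody intended this behaviour. */
    int copy_field_buggy(char *out, size_t outlen, const char *in, size_t inlen)
    {
        (void)outlen;                 /* bug: outlen is never consulted */
        memcpy(out, in, inlen);
        return 0;
    }

    /* Logic that works exactly as written but is really a backdoor: a
       magic byte sequence deliberately bypasses authentication. */
    int check_auth(const unsigned char *packet, size_t len, int credentials_ok)
    {
        static const unsigned char magic[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

        if (len >= sizeof magic && memcmp(packet, magic, sizeof magic) == 0)
            return 1;                 /* "god mode": intentional bypass */

        return credentials_ok;        /* normal authentication path */
    }

    int main(void)
    {
        const unsigned char normal[4]    = { 1, 2, 3, 4 };
        const unsigned char magic_pkt[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
        printf("normal packet, bad creds: %d\n", check_auth(normal, 4, 0));    /* 0 */
        printf("magic packet, bad creds : %d\n", check_auth(magic_pkt, 4, 0)); /* 1 */
        return 0;
    }

The first is the kind of flaw a security update corrects; the second is code doing exactly what its author intended, which is why the distinction matters.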

Thoth August 10, 2017 8:20 AM

@Johnny S

“I don’t think the users will continue to not care forever.”

This is where you get it wrong. Most people are reactionary in nature and that’s how we can be easily exploited.

It is back to the carrot vs. stick argument. Another way of calling it is convenience vs. security. Security is always seen as the inconvenient troublesome thing while convenience is convenient because it lacks the hassle of security.

A thought experiment: stalk a person, read their emails and personal conversations, and allow them some hint that they are being watched. The victim would definitely go somewhat out of their comfort zone to use some form of message security technique to protect the conversation. One simple way is to agree on a bunch of codewords and speak in codes.

If you simply hide your presence while you still monitor the victim, they would pretty much revert to normal conversation without codewords after a while of figuring out that they might not be under active surveillance. This happens because the victim feels threatened and attempts to protect themselves by increasing their security level; it is the base instinct of all creatures to ensure their own survival, so when it comes to a matter of survival it is the “stick” situation and they will try to protect themselves by all means. When the perceived threat is thought to be over, they will not continue with all the hassle, as they now perceive the “carrot”, which is free communication.

Thus, it is important that the bar be raised permanently instead of temporarily via some “more secure OS”. OpenBSD is good but still not good enough if we are to be strict. The main thing is to have a safe bottom line, but we are not seeing this happen, as many of the “old issues” of computing which should have been solved are still coming back to haunt us. One good example is buffers and memory in computers: for as long as anyone can remember, the issue of buffer and memory violations has been ongoing, seemingly forever.

Do we still want to live in an era where we are perpetually stuck at low security levels, or do we want to improve and not have the good old SSL and TLS exploits come back to haunt us?

The slippery slope of complacency is permeating every aspect of computing, and now we have IoT, which will make it much more obvious and worse in every single aspect.

Are we going to say that the huge TCB of Linux now loaded onto IoT chips is fine? Isn’t Mirai a good reminder that the problems of old now bleed into the era of the new, and that with IoT it gets even worse? Who would have known that the 64+ hardcoded default passwords in Mirai would be enough to cause a lot of problems for websites behind commercial DDoS protection (e.g. Krebs on Security)?

Our current attitude is to just shrug our shoulders and wash our hands, thinking all is fine with whatever IoT system (e.g. a Linux kernel on some IoT chipset). We could probably proceed down the nice slippery slope of “it is too paranoid to do more security” or “this much security is good enough”, and when we think we are fine, we aren’t.

The people behind the projects are also important, and a very good example is Linus Torvalds himself. He does not care about security and thinks that security will somehow magically fix itself and that people will be nice enough to submit patches that have no backdoors. This isn’t true, as we might never know whether any of those submitted patches are OK for use in critical systems. For a person who leads such an important project (the Linux kernel) and does not care about security, it is unsurprising that Linux is unfit for security-critical systems, but people do not learn their lessons anyway.

The sad thing is that not a lot of effort is being invested into higher-assurance security, because people simply don’t care and do not want to sponsor such development efforts. It does not make sense to push higher-assurance technology and innovation out to the public as-is (FOSS) when it has very little benefit for the creator and the users don’t want to use it either. Look at all the new projects and new security products in the market, and one wonders what is causing the stagnation of ideas in the security industry.

Nathan Gau August 10, 2017 9:30 AM

Spot on. Security through obscurity never works. Corporations certainly need to do some work in recognizing that deprecated protocols need to be shut off, which is without question the case with SMB1, but the government has little business sitting on a known vulnerability just because it gives them an attack vector. Let’s also not forget that it was an NSA mistake that allowed this to get out. As such, our own government, by not publishing this information, gave the bad guys the keys to the kingdom while not giving the people it’s sworn to protect the time needed to safeguard their environments.

JohnnyS August 10, 2017 10:28 AM

@Thoth, @mzoef

Yes, people are complacent and will ignore insecurity until it affects them, but there’s a tipping point that can be reached.

December 7th, 1941 was a tipping point that really didn’t affect many Americans: Only a few thousand died and there were many more Americans dying annually of various diseases at the time. But holy cow, did it ever change the entire direction of the USA from neutrality to aggression!

September 11th, 2001 was another tipping point: Again, only a few thousand people died, much fewer than were dying from either gun violence or traffic accidents in the USA that year. But again, the USA went from complacency to maximum attack in a very short space of time.

The breadth and severity of damage to end users from hacker attacks has gone from minor inconvenience (the “Stoned” virus on an infected floppy) to widespread and very high impact (WannaCry and other ransomware). If in the future you get 5-10 million Americans hit with ransomware, water and power utilities failing and massive economic disruption, then “something’s gotta give.”

Sooner or later it will get to a tipping point and things will change. I don’t dare try to predict if the changes will be good or bad, but it’s going to be interesting.

Nick P August 10, 2017 12:15 PM

@ Inside Threat Model

It’s more helpful to readers if you just describe the event with a link as the others do. Then, people can read it to determine things for themselves. Optionally, Bruce might share his own thoughts on the matter.

ab praeceptis August 10, 2017 12:58 PM

JohnnyS

I think that logic is flawed, mainly for two reasons.
a) What happened then was within the “real world”, within a frame everyone can in principle perceive and understand. Lack of static typing or buffer overflows in ssl/tls, however, are not. Those are in “another world” for most people.
b) The events you mentioned were much amplified and carried to the last corner of the country by both politics and media. The problems we discuss here pretty much never are. Also note that, insofar as politics and media look at them at all, they do so in an utterly distorted and completely untechnical way boiling down to arbitrary attribution, i.e. they do not at all care about the problem itself but merely abuse it for other ends.

And even if, very rarely and superficially, such problems (as discussed here) do surface, the “solution” is as simple as “xyz corp. declared that there is a new and secure version available”.

Thoth August 10, 2017 8:29 PM

@ab praeceptis

A Linux A Day Keeps The Hackers Away !!!

Golden Stickers are OLD STUFF !!! We need GOLDEN QUOTES !!!

Dan H August 11, 2017 6:41 AM

@JohnnyS
“I was actually referring to putting OpenBSD on the Internet without a firewall”

I would suspect that OpenBSD would be compromised in very little time.

However, Plan 9 was actually put outside the corporate firewall and was never breached as far as I know.

JohnyS August 11, 2017 6:54 AM

@Dan H

“I would suspect that OpenBSD would be compromised in very little time.”

Prove it.

Wael August 11, 2017 7:24 AM

@JohnyS,

Prove it.

That’s impossible for two reasons:

One:
Suspect: verb – To have an idea or impression of the existence, presence, or truth of (something) without certain proof.

Secondly: (I don’t feel like saying ‘two’):
OpenBSD (and FreeBSD, TrustedBSD) aren’t weaker than the rest, to put it ‘diplomatically’.

JohnnyS August 11, 2017 8:27 AM

@Wael

“impossible”?

Nonsense. Take a freshly installed OpenBSD box and put it on the Internet, no firewall. Monitor it.

Measure the time it takes to get pwned. Publish that time here, and I will happily and humbly retract my assertion “With the default install, assuming the installer put strong passwords in when prompted during the initial install and didn’t open anything else up, the system should tolerate being on the Internet for a reasonable time without being hacked.”

Do it or GTFO.

Wael August 11, 2017 8:54 AM

JohnnyS,

Obviously you need to improve your reading skills.

Do it or GTFO.

And your comprehension skills, too. Go reread what I posted and tell me whether I’m saying OpenBSD is weak or strong.

I’m willing to bet I’ve used BSD longer than you have.

Dan H August 11, 2017 9:46 AM

@JohnnyS

CVE-2017-1000372 Detail
A flaw exists in OpenBSD’s implementation of the stack guard page that allows attackers to bypass it resulting in arbitrary code execution using setuid binaries such as /usr/bin/at. This affects OpenBSD 6.1 and possibly earlier versions.
CVSS v3 Base Score:
9.8 Critical
Access Vector:
Network exploitable

CVE-2017-5850 Detail
httpd in OpenBSD allows remote attackers to cause a denial of service (memory consumption) via a series of requests for a large file using an HTTP Range header.
CVSS v3 Base Score:
7.5 High
Access Vector:
Network exploitable

CVE-2017-1000373 Detail
The OpenBSD qsort() function is recursive, and not randomized, an attacker can construct a pathological input array of N elements that causes qsort() to deterministically recurse N/4 times. This allows attackers to consume arbitrary amounts of stack memory and manipulate stack memory to assist in arbitrary code execution attacks. This affects OpenBSD 6.1 and possibly earlier versions.
CVSS v3 Base Score:
6.5 Medium
Access Vector:
Network exploitable
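For readers unfamiliar with the qsort() issue, here is a generic illustration of the class of problem CVE-2017-1000373 describes (a deliberately naive quicksort, not OpenBSD's actual implementation; it only shows how a recursive, non-randomized pivot choice lets a crafted input force recursion depth proportional to N):

    #include <stdio.h>

    static int max_depth = 0;

    static void naive_qsort(int *a, int lo, int hi, int depth)
    {
        if (depth > max_depth)
            max_depth = depth;
        if (lo >= hi)
            return;

        int pivot = a[hi];                      /* fixed pivot: last element */
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;

        naive_qsort(a, lo, i - 1, depth + 1);   /* sorted input: shrinks by 1 */
        naive_qsort(a, i + 1, hi, depth + 1);   /* sorted input: empty side   */
    }

    int main(void)
    {
        enum { N = 2000 };
        static int a[N];
        for (int i = 0; i < N; i++)
            a[i] = i;                           /* already sorted: pathological here */

        naive_qsort(a, 0, N - 1, 1);
        printf("N = %d, max recursion depth = %d\n", N, max_depth);
        return 0;
    }

With a deterministic pivot, one crafted input drives the recursion (and hence stack use) as deep as the attacker likes, which is the mechanism the CVE text refers to.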

ab praeceptis August 11, 2017 11:46 AM

JohnnyS

Nonsense. Take a freshly installed OpenBSD box and put it on the Internet, no firewall. Monitor it.

Sorry, no, the nonsense is on your side. Also kindly note that while Wael usually seems to prefer to talk about Arabic and whatnot outside of our field, he is no idiot and you shouldn’t treat him like one.

You pretty much walked right into a marketing trap laid out by OpenBSD. Of course a “naked” system (which is the default with OpenBSD) has a much, much smaller attack surface than one with diverse network services enabled! That, however, works just as well with most other systems (probably excluding windows with its diverse call-home and call-in “functionality”).

Following that line of argument one might as well call windows secure because it has a quite small attack surface when switched off.

Actual data suggests that OpenBSD is, indeed, somewhat more secure than the other BSDs and certainly than linux. However, almost all of the applications are pretty much the same on all the unices, and unfortunately it’s those applications that are attacked the most.

And, more importantly, as I already told you, OpenBSD has about the same boundary (“theoretical optimum”) as other unices because it’s largely written in C and (self-)condemned to be posix-conforming.

Ratio August 11, 2017 2:35 PM

@Dirk Praet,

It’s pluralis majestatis, i.e. nominative and genitive singular, respectively. 🙂

@Wael,

@Ratio does Arabic (but his Phoenician apparently sucks.)

ههههه شكرا (hahaha, thanks)

No, my Arabic sucks. My Phoenician, on the other hand, is completely nonexistent. 😉

(I’ll catch up on the rest of the chatter later.)

Is Lawfare Connected to the ShadowBrokers? August 11, 2017 4:21 PM

It’s Lawfare. There is absolutely no reason to treat Lawfare as a trustworthy source.

Nick P August 11, 2017 10:59 PM

@ JohnnyS

I agree that it probably wouldn’t be hacked quickly, or probably at all for most people, since it and its users are obscure (unimportant). Mostly. However, if a targeted attack happened, you’d have lost the bet as recently as a week ago. Don’t fall for the “only 2 or 3 vulnerabilities in the default install” line. They count differently than the rest of us. I go into that and some other OpenBSD BS here, which I still need to finish (esp. revise) before I drop it on the OpenBSD devs themselves. Aside from those that read here.

So, they’re both great at writing good code w/ tactical mitigations plus full of shit when talking about that in terms of history, best of current practice, and so on. At least they make a robust UNIX. Others that are full of shit usually have shitty OS’s. 😉

Dirk Praet August 12, 2017 7:45 AM

@ Ratio

It’s pluralis majestatis, i.e. nominative and genitive singular, respectively.

Silly me. I keep on making that mistake over and over again because that’s how our first grade Latin teacher Father Vanderdonckt sj. (incorrectly) called it. 40+ years on, I suppose I still haven’t come to terms with the fact that not everything the old guy told us was gospel 😎

Hahaha August 12, 2017 12:28 PM

@Dan H
At least someone figured out CVD.

I am thinking of the history of Linux and BSD security fails, wondering why Linux users think their OS is inherently “more strong.” Since I have been installing since before Caldera got bought by SCO, bought by Novell, I have seen some epic fails. The beauty of Linux is building custom, stripped-down ISOs for servers. Otherwise, Linux is technically an anti-holistic process of disparate projects coming together, updating whenever, with excessive beta.

Child-like convo saying “my OS is strong.” ORLY? You open yourself up to a thousand different vectors of attack with that. Given Android, the most hacked OS since NT4, Linux users might want to tone down their argument.

And somebody tell Google that a bug bounty is not a substitute for proper beta testing. You have to own your code base. What happens when nobody shows up for the bug bounty except scammers trying to get paid?

gordo August 12, 2017 4:46 PM

Mr. Ledgett has every reason to be defensive. Nowhere in his piece does he say what types of vulnerabilities should be disclosed. Is disclosing vulnerabilities that can be weaponized for indiscriminate, mass-scale cyber attacks on whole ecosystems too much to ask? Had The Shadow Brokers not given advance notice of their impending tool dump, how much worse might the WannaCry outbreak have been?

More on the ongoing fallout, harms, opportunism, ecosystems—real or imagined, etc., below.


Also raised is the suggestion of a new kind of digital divide: those who can afford to secure their computer systems against indiscriminate attacks and those who cannot.

Somewhat off-topic, but not unrelated, is the banning of Kaspersky from GSA vendor lists. I’m guessing that means the whole supply chain. That is yet another indication of how the internet is splintering.


Ongoing fallout, etc.:

Russian Cyberspies Are Using NSA Tools to Target European Hotels
By Catalin Cimpanu | Bleeping Computer | August 11, 2017

After the US government has spent probably millions of dollars developing hacking tools, Russian hackers are now using them to spy on guests across hotels in Europe and the Middle East.

According to a report released today by US cyber-security firm FireEye, a well-known Russian cyber-espionage group has used an NSA exploit known as ETERNALBLUE as part of a complex set of hacks it carried out starting July this year.

This report marks the first time a Russian or any other cyber-espionage unit has used ETERNALBLUE in a live campaign after a group of hackers called The Shadow Brokers leaked the tool online in April this year.

https://www.bleepingcomputer.com/news/security/russian-cyberspies-are-using-nsa-tools-to-target-european-hotels/


Lloyd’s says cyber-attack could cost $120bn, same as Hurricane Katrina
World’s oldest insurance market warns cost to global economy of cyber-attack could be as much as worst natural disasters
Julia Kollewe | The Guardian | 17 July 2017

Lloyd’s of London has warned that a serious cyber-attack could cost the global economy more than $120bn (£92bn) – as much as catastrophic natural disasters such as Hurricanes Katrina and Sandy.

https://www.theguardian.com/business/2017/jul/17/lloyds-says-cyber-attack-could-cost-120bn-same-as-hurricane-katrina


Security firms still using WannaCry to push their wares
Security firms are continuing to use last month’s WannaCry ransomware attack to shamelessly plug their wares, with McAfee the latest to do so, warning the Australian Government that cyber crime is becoming more and more sophisticated.
Sam Varghese | ITWire | 01 June 2017

Somehow the picture that came to mind was of a stockbroker in the film The Corporation, talking about how, when he saw the planes crashing into the World Trade Centre towers, his first thought was what stocks he could buy for his clients so that they could make a killing.

https://www.itwire.com/open-sauce/78370-security-firms-still-using-wannacry-to-push-their-wares.html


[ nominal delivery draft, 6 August 2014 ]

Cybersecurity as Realpolitik
Dan Geer

If a couple of Texas brothers could corner the world silver market,[HB] there is no doubt that the U.S. Government could openly corner the world vulnerability market, that is we buy them all and we make them all public. Simply announce “Show us a competing bid, and we’ll give you 10x.” Sure, there are some who will say “I hate Americans; I sell only to Ukrainians,” but because vulnerability finding is increasingly automation-assisted, the seller who won’t sell to the Americans knows that his vulns can be rediscovered in due course by someone who *will* sell to the Americans who will tell everybody, thus his need to sell his product before it outdates is irresistible.

This strategy’s usefulness comes from two side effects: (1) that by overpaying we enlarge the talent pool of vulnerability finders and (2) that by making public every single vuln the USG buys we devalue them. Put differently, by overpaying we increase the rate of vuln finding, while by showing everyone what it is that we bought we zero out whatever stockpile of cyber weapons our adversaries have. We don’t need intelligence on what weapons our adversaries have if we have something close to a complete inventory of the world’s vulns and have shared that with all the affected software suppliers. But this begs Schneier’s question: Are vulnerabilities sparse or dense? If they are sparse or even merely numerous, then cornering the market wins in due course. If they are dense, then all we would end up doing is increasing costs both to software suppliers now obligated to repair all the vulns a growing army of vuln researchers can find and to taxpayers. I believe that vulns are scarce enough for this to work and, therefore, I believe that cornering the market is the cheapest win we will ever get.

http://geer.tinho.net/geer.blackhat.6viii14.txt


Bugs in the System
A Primer on the Software Vulnerability Ecosystem and its Policy Implications
Andi Wilson, Ross Schulman, Kevin Bankston and Trey Herr | New America | July 28, 2016

The question for policymakers is, what can they do to help speed the discovery and patching of vulnerabilities so that our computer systems and therefore our economic stability, our national security, and consumers’ privacy—are safer? (p. 2)

https://www.newamerica.org/oti/policy-papers/bugs-system/

https://na-production.s3.amazonaws.com/documents/Bugs-in-the-System-Final.pdf [Length: 40 pages]


Enemies of the Internet 2013 Report
Special Edition: Surveillance
Reporters Without Borders
12 March 2013

Today, 12 March, World Day Against Cyber-Censorship, we are publishing two lists. One is a list of five “State Enemies of the Internet“, five countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. The five state enemies are Syria, China, Iran, Bahrain and Vietnam.

The other is a list of five “Corporate Enemies of the Internet“, five private-sector companies that are “digital era mercenaries“. The five companies chosen are Gamma, Trovicor, Hacking Team, Amesys and Blue Coat, but the list is not exhaustive and will be expanded in the coming months. They all sell products that are liable to be used by governments to violate human rights and freedom of information. (p. 3)

http://surveillance.rsf.org/en/

http://surveillance.rsf.org/en/wp-content/uploads/sites/2/2013/03/enemies-of-the-internet_2013.pdf [Length: 47 pages]

JohnnyS August 12, 2017 5:02 PM

@Nick P,

I’ll grant you that kernel vulns exist and are being fixed, but I would still like to see someone do the experiment of putting an OpenBSD box on the Internet as I described. A real world demo beats theory and research. Talk is cheap.

Again, no disrespect intended to anyone.

Thoth August 12, 2017 6:51 PM

@Nick P, ab praeceptis

Forget about OpenBSD, Linux, Windows, Mac or even some microkernel being stronger.

Just exploit Intel AMT, ARM TrustZone or the AMD PSP and it’s pretty much over. Even if some super-secure hypervisor written in a single line of code (yeah, you know what I mean) and certified to CC EAL 100+ levels were used, it would still be useless against hardware-based backdoors in the chip. @Nick P should have whole libraries’ worth of links pointing to hardware exploits on TZ, AMT and the likes of these so-called Secure Enclaves or Remote Management tools.

Thoth August 12, 2017 7:57 PM

OpenBSD Hackathon Hosting Billing Order

Items

1.) OpenBSD 6.1 software – Cost (USD$): 0.00
2.) VPS Space (OpenBSD) with annual renewal (FlokiNet Romania 1 VPS) – Cost (USD$): 107.00
3.) Domain Name (openbsdtest.tech) via (namecheap.com) for 1 year – Cost (USD$): 1.06
4.) Honeypot Prize Money for 1 year – Cost (USD$): 1500.00
5.) OpenBSD setup fee (1x) – Cost (USD$): 200.00
6.) Administrative and Monitoring Fees for 1 year – Cost (USD$): 1000.00

GST Tax (7%): Waived
Total Cost: USD$ 2808.06

Payee Information

Bitcoin Address: 18TnhHLakFva8vTqC1BeSWpr4mTXmQ9hvb

Additional Information

Bitcoin exchange rates as per https://cex.io of the day.

ab praeceptis August 13, 2017 2:00 AM

Thoth

Yes, absolutely.

But then, who in his right mind would use an x86[-64] or arm processor for a sensitive security project? Oh well, except governments, banks, the snakeoil industry, etc …

Thoth August 13, 2017 3:42 AM

@ab praeceptis

“Oh well, except governments, banks, the snakeoil industry, etc …”

That is as good as EVERYBODY 🙂 .

There is no way out of this nasty mess, which is why I keep proposing an alternative hardware approach like a smart card as a Dynamic Secure Execution Environment, as it is less susceptible to these problems. It is by no means a 100% solution, though; it is only meant to buy enough time to find something more robust while providing at least some level of security.

Thoth August 13, 2017 8:48 PM

With all the discussion of whether OpenBSD with a default setup is secure when left on the Internet, it is interesting to note that nobody has decided to contribute financially to the experiment I have offered to set up as a paid service for a year. I guess there is not much to argue about whether OpenBSD with a default setup is secure when left on the Internet, or how long it will take to get hacked, since there is no uptake of sponsorship for this experiment.

Wael August 13, 2017 10:42 PM

@Thoth,

it is interesting to note that nobody has decided to contribute financially to the experiment I have offered to set up as a paid service for a year.

Who in his right mind is going to pay for one of your golden stickers? You’re slapping them freely, left right and center on everything 🙂

I guess there is not much to argue about whether OpenBSD with a default setup is secure when left on the Internet, or how long it will take to get hacked, since there is no uptake of sponsorship for this experiment.

Perhaps because some believe that the experiment is insignificant (as I do.) A possibly better way is to examine the security capabilities of the various OSes under test at the architecture level, for example this comparison between FreeBSD and OpenBSD. It’s a bit dated, but at the time OpenBSD had ASLR whereas FreeBSD lacked it; conversely, FreeBSD had jails and OpenBSD lacked them. Two different security controls, and different ways and philosophies of protecting memory…
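To make the ASLR half of that concrete, here is a throwaway sketch of mine (assuming a Unix-like system with Python and ctypes; nothing OpenBSD- or FreeBSD-specific):

```
# Print the load address of a libc symbol; with ASLR enabled, the address
# should change between runs, so an attacker cannot hard-code return targets.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(f"printf mapped at {addr:#x} this run")
# Run it several times: a stable address suggests ASLR is off for shared libraries.
```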

The expectation is that users of FreeBSD know how to secure the system for the purpose they need. The fact that OpenBSD has (as claimed on their page) the most secure default installation has little meaning for an expert system admin. It may be a good thing for an amateur or a beginner. Therefore the experiment is meaningless: it’s only a pen-test exercise on default installations.

I use FreeBSD and I like it for nostalgic reasons, even if it’s perceived to be less secure in some areas. It takes me about three or four weekends to set it up and configure it the way I want. That includes compiling everything from source, configuring options before compilation, disabling all the services I don’t need, backup snapshots, etc… If I used OpenBSD, I would do the same! So the fact that OpenBSD has a more secure default installation is insignificant to me: I will change the default installation.

A better suggestion for a paid service would be to develop your own test suite and let customers upload images of their OSS for evaluation. A subsequent step or added feature is that you create a portal where customers choose the operating system and set of applications they need, then you configure it for them and send them the secured or hardened image (integrity protected, of course.) Just don’t slap any stickers on it. Good luck!
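For the “integrity protected” part, a minimal sketch of what that step could look like (my own illustration; the file name and key below are hypothetical):

```
# Tag a hardened image with HMAC-SHA256 so the customer can verify it wasn't
# altered in transit; the key is shared out of band.
import hashlib
import hmac

def tag_image(path: str, key: bytes) -> str:
    mac = hmac.new(key, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            mac.update(chunk)
    return mac.hexdigest()

def verify_image(path: str, key: bytes, expected: str) -> bool:
    return hmac.compare_digest(tag_image(path, key), expected)

# Hypothetical usage: publish the hex tag alongside the download link, and the
# customer recomputes it with the shared key before booting the image.
# print(tag_image("hardened-openbsd.img", b"pre-shared secret"))
```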

Thoth August 14, 2017 1:00 AM

More importantly, people just want to have a free ride. FOSS and the Internet have probably created a mindset where people will not pay for software if they can avoid it.

This is a second experiment to see how many people would be willing to contribute to the cause of improving security, and the results speak for themselves.

There are no stickers on the OpenBSD experiment, by the way. Not even a label of secure or insecure in the billing order above 🙂 .

We will have to accept the fact that the status quo of insecurity would continue to remain.

Wael August 14, 2017 1:24 AM

Hmm… That’s not nice!

people just want to have a free ride […] people will not pay for software if they can avoid it […] This is a second experiment to see how many people would be willing to contribute to the cause of improving security, and the results speak for themselves […] the status quo of insecurity would continue to remain.

That’s basically saying (in no particular order) the outcome of the experiment is D!

A) People here are software pirates, freeloaders and cheap bastards
B) They don’t care about security
C) Apathetic (they don’t care to help fellow bloggers make a living)
D) All of the above

ab praeceptis August 14, 2017 1:58 AM

Thoth

You are right.

The BSDs are notoriously poorly funded. While Linux gets money and resources (particularly from large corps) thrown at it, other – and considerably better – OS projects get very little support.

Jane and Joe expect software to be free, largely thanks to [f]oss, and the large corps aren’t driven by warm-hearted charity either; they have simply found a new way of buying what they want (which might be very different from what Jane and Joe want and, in fact, contrary to their interests).

Thoth August 14, 2017 3:18 AM

@Wael

Regarding ‘Apathetic (they don’t care to help fellow bloggers make a living)’, I have not mixed my work stuff with personal stuff. They are kept totally separate and no sales of work related stuff ever occurs here.

Note that the billing order is addressed to my personal Bitcoin account, as sponsorship for the OpenBSD Hackathon experiment is a personal venture, since there are so many people here who are curious about how well OpenBSD holds up in real-world scenarios.

Also note that the biggest chunk of the cash allocation is the honeypot money, to be placed within the OpenBSD environment as a reward for whoever manages to hack into the setup. It is intended to be an unencrypted Bitcoin wallet, or at least the plaintext password to the Bitcoin wallet holding the prize money, placed inside the OpenBSD environment.

Setup and monitoring fees are a requirement: the work is time consuming, and compensation for my time and effort doing the setup and spending my off-time monitoring it has to be justified. Even so, those amounts are well below what my caliber and the pay-check I usually draw would warrant.

If I didn’t really care, I could simply have ignored all this and walked off. The fact is that I have bothered to talk about current security issues and have been quietly working on a few things behind the scenes, and these amounts would not even cover that effort.

Basically there is too much talk here and very little action, and the few of us who bothered to step up to the plate and get something practical have mostly decided to go back to doing our own things these days, because of the environment and because it no longer feels worth contributing to the open community.

@ab praeceptis
I have deleted half of my GitHub repositories since I don’t see how it is worth putting time and effort into improving and maintaining them these days.

Wael August 14, 2017 4:56 AM

Basically there is too much talk here and very little action

That’s what blogs are for. People who want ‘action’ do it.

and the few of us who bothered to step up to the plate and get something practical have mostly decided to go back to doing our own things

That’s more productive.

anonymous coward August 14, 2017 12:22 PM

Overall, price, demand and supply will drop in the west while remaining constant elsewhere.
It’s all about market forces.

researcher -> 3rd party -> govt

If you force the major western buyers of vulnerabilities, governments, to immediately patch vulns they buy, then the perceived price of exploits will drop as demand drops.

Also, because a 3rd party is oft used (ZDI, Exodus Intelligence), they’ll have to sell to better bidders.

Principally, other countries (RU/IR/IN/CN/VN/etc.) will maintain the perceived value.

With fewer buyers at the fair market rate, the overall price will drop.

Also, demand will drop, as no western researcher wants to run afoul of proposed/future export laws (the Wassenaar Arrangement proposal was dropped, but not the ‘spirit’ of the law).

Overall, it’ll give a competitive edge to other countries.

Hahaha August 14, 2017 2:17 PM

@Wael
Here’s a quote for ya:
“I have far more confidence in the one man who works mentally and bodily at a matter than in the six who merely talk about it”
– Michael Faraday

[] Library injection and hidden processes; external dynamic library loading
[] Windows NTFS still allows alternate data streams
[] Certificate Authorities as a centralized government hijack
[] Forensic direct disk access via USB=>BIOS=>disk
[] CALEA no-handshake protocol for deep access if necessary
[] SMS/MMS assault; telecom indifference
[] phone internet exchange for telepresence, which needs to die
Pick any glaring fail; they still have you at the last-mile box down the road.

I don’t think I have to say much about wasted law enforcement logic.

We still have routers that offer WEP. What else can I say?

Wael August 14, 2017 2:50 PM

@Hahaha

“I have far more confidence in the one man who works mentally and bodily at a matter than in the six who merely talk about it”

What misleads you to believe that people who talk about it here aren’t doing anything? And what leads you to believe that people who claim to be doing something about it know what they’re talking about?

Here’s a quote for ya:

Cute! And here is one for ya:

“A mouse in a Lion’s skin is still a mouse.”
Take off your skin and use your real handle if you want to continue the discussion.

Lee Neubecker August 14, 2017 3:52 PM

I believe the leaked cyber weapons such as DoublePulsar were combined with exploits of the Broadcom WiFi chipset to allow the WannaCry malware to spread and attack UK NHS hospitals in a drive-by attack. You only need to come within WiFi range to take over the Raspberry Pi’s Broadcom WiFi chip (if unpatched – which most Raspberry Pi devices today still are).
See https://leeneubecker.com/uks-nhs-likely-compromised-due-to-broadpwn-vulnerability-on-raspberry-pi-medipi/ for more details on why I think the UK’s NHS was so quickly compromised.

On a side note, the Raspberry Pi should have a patch update out shortly. There is a pre-release patch to fix the Broadcom CVE-2017-9417 “Broadpwn” issue and CVE-2017-0572 Memory Corruption problems. More on the patch is available at https://leeneubecker.com/raspberry-pi-patch-to-protect-against-broadpwn-pre-released/

Bruce, I love watching your presentations on youtube.com! Keep up the great work that you do!

Dirk Praet August 14, 2017 5:42 PM

@ Lee Neubecker

See https://leeneubecker.com/uks-nhs-likely-compromised-due-to-broadpwn-vulnerability-on-raspberry-pi-medipi/ for more details on why I think the UK’s NHS was so quickly compromised.

Thanks for sharing that!

@ Wael, @ Thoth

A better suggestion for a paid service would be to develop your own test suite and let customers upload images of their OSS for evaluation…

I know I would pay for such a service.

The fact that OpenBSD has (as claimed on their page) the most secure default installation has little meaning for an expert system admin.

I concur. There is zero point in connecting any default installation to the internet and then seeing what happens.

Hahaha August 15, 2017 1:04 PM

@Wael
Use your real handle? That’s a good one. Every morning.
I think I meant that about how industry leaders or projects talk a good game, not the posters here. Sorry for that.

It’s never about thread-posters. We should be vectored outward in case some govt lackey or corporate monkey runs by this forum, might he be enlightened with pure hatred. A digital mirror with fear tactics and denials bouncing off.

And as far as defense, I think almost every one of these threads leads to a single evolution of thought… custom encryption and screw the brickwall called Uncle Sam.

Wael August 15, 2017 1:13 PM

@Hahaha,

Use your real handle? That’s a good one

You have a right to laugh after the stunt “he” pulled. I do recognize your writing style, though.

Uncle Sam

Stands for: United States of America, just in case you wondered where that came from…

I don’t know where the ‘C’ came from, though. Tell me if you do.

Hahaha August 15, 2017 2:41 PM

@Wael
I don’t filter my writing style. You can pick some of my words, do a search, and probably come up with my infinite handles and run-on lunch break frustrations.

Like a bad party joke in a room filled with bots. There is no ubiqtorate for coders or engineers to tell governing entities how retarded lawyers are for perceiving cleverness as intelligence.

In college, a philosophy professor told me she didn’t own a television. The class laughed. By the end of the semester, I wasn’t laughing because the news was spewing out Monica Lewinsky. I try hard to not watch the news anymore… a square box that pisses me off.

Nice bot language. I’m gonna have to work on that. Right to laugh on your real handle style, Uncle Sam.

Nick P August 15, 2017 3:09 PM

@ Wael

re OpenBSD and FreeBSD

I’m not sure it’s accurate to say FreeBSD feels less secure when it has more reported vulnerabilities, more attack surface (esp. complicated code), less auditing, and fewer barriers to the execution of exploits, esp. in privileged code. The overall ecosystem is definitely less secure than OpenBSD’s. FreeBSD seems to mainly beat them on features, speed, and so on.

Far as comparisons, you might enjoy this one from 2017 with a pretty, friendly discussion from OpenBSD and FreeBSD reps:

My BSD Sucks Less Than Yours

Wael August 15, 2017 3:19 PM

@Nick P,

I’m not sure it’s accurate to say FreeBSD *feels* less secure…

I said *perceived*. And that means perceived by others, not by me – that’s the reason I italicized the word. It means I’ll use it even if others claim it’s less secure, which I don’t necessarily agree with.

Thanks for the link. Will definitely visit it.

Clive Robinson August 15, 2017 3:28 PM

@ Hahaha,

And as far as defense, I think almost every one of these threads leads to a single evolution of thought… custom encryption and screw the brickwall called Uncle Sam.

No some of the threads here go further, way further than that.

At the end of the day encryption custom or otherwise is vulnerable, very vulnerable when run on modern computer hardware.

People talk about how wonderful the latest encryption app is… But they forget, or don’t realise, that the display and keyboard are in the plaintext space, not the ciphertext space, and both sit inside the scope of the computer. So the encryption counts for nothing if they can get to the computer I/O via an “end run attack”. Any computer in “on-line mode” –connected to a communications channel of some form– is vulnerable to end run attacks. Even if it’s off-line when used for ciphering, all computers have semi-mutable memory, be it the hard drive or the flash inside a SoC used for, say, part of an I/O device. Malware can hide there and “store and forward” plaintext or KeyMat at some future time, such as when the computer is next used in on-line mode, or when an “Evil Maid” or “Secret Squirrel technician” briefly has physical access. Think of it like one of those “key loggers” from the end of the last century.

Solving such issues is where the real action is, not custom encryption.
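A toy sketch of the point (hypothetical code of my own, not any real malware): whatever sits on the input path sees the message before any cipher touches it, however strong the cipher.

```
# Toy illustration: the cipher is irrelevant; a hook on the I/O path
# captures the plaintext before encryption ever happens.
captured = []                  # what an "end run" on the input path would see

real_input = input             # stand-in for a compromised keyboard/input layer
def hooked_input(prompt=""):
    text = real_input(prompt)
    captured.append(text)      # plaintext logged before the cipher runs
    return text

def encrypt(plaintext: str, key: int) -> bytes:
    # placeholder cipher; substitute any algorithm you like, it changes nothing
    return bytes(b ^ key for b in plaintext.encode())

if __name__ == "__main__":
    msg = hooked_input("message: ")
    print("ciphertext:", encrypt(msg, 0x5A).hex())
    print("what the hook saw anyway:", captured)
```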

Dirk Praet August 15, 2017 5:37 PM

@ Nick P

Re. My BSD Sucks Less Than Yours

That’s recommended reading indeed for all BSD buffs. Daroussin and Jacoutot presented it at FOSDEM last February too.

@ Hahaha

Is that you, @r ?

Wael August 15, 2017 6:35 PM

@pup socket,

Say more so we can profile your “ankle” and dereference your anonymizing alias. I’m pretty sure it won’t be a null pointer (meaning you have posted here in the past).

pup socket August 15, 2017 6:39 PM

@Wael: Can’t you see I’m mourning @Bong-Smoking Primitive Monkey-Brained Spook?! (sob)

gordo August 15, 2017 8:00 PM

Which Is More Dangerous—the Dark Web or the Deep State?

Hal Berghel

Pages: 86–91

Abstract—Much has been made of the dark web’s dangers, but democracy has more to fear from Citizens United and the global surveillance industry than Silk Road or Tor.

Keywords—Out of Band; deep web; dark web; deep state; Internet/web technologies; Silk Road; parallel construction; cybercrime; Tor

https://www.computer.org/csdl/mags/co/2017/07/mco2017070086.html

RonK August 16, 2017 12:46 AM

@ Dan H

(If you are still listening…)

Actually, reading the description of the 9.5-rated OpenBSD vulnerability at https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt, I am led to believe that CVE-2017-1000372 is neither network exploitable (“A remote attacker cannot exploit this vulnerability, because he cannot modify RLIMIT_STACK”), nor workable against a default OpenBSD installation (“We were unable to create 40M files in /var/cron/atjobs: after one week, OpenBSD’s default filesystem (ffs) had created only 4M files, and the rate of file creation had dropped from 25 files/second to 4 files/second. We did not solve this problem”).

Or were you thinking somehow that CVE-2017-5850 can be used instead of CVE-2017-1000373 in an exploit of CVE-2017-1000372? AFAICS, no, since CVE-2017-5850 is a heap-based memory exhaustion, not a stack-based one.

(And no, I am not claiming that OpenBSD is somehow invulnerable. Just that your post seems inaccurate once one investigates the details.)
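As an aside on the RLIMIT_STACK point, a quick sketch of mine (assuming a Unix-like box with Python; it only shows that the limit is a purely local, per-process knob, which is the crux of the “remote attacker cannot exploit this” remark):

```
# RLIMIT_STACK is read and changed only through local get/setrlimit() calls;
# nothing a remote attacker reaches on a default install exposes this knob.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("RLIMIT_STACK soft/hard:", soft, hard)

# Changing it requires already running code on the box (or being the parent
# process), which is why the advisory rules out the remote angle.
resource.setrlimit(resource.RLIMIT_STACK, (soft, hard))
```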

Clive Robinson August 16, 2017 4:29 AM

@ Wael,

Perhaps @Clive Robinson would like to blow his cover?

Is that some veiled reference to a “Bean and Cabbage Supper”?

Clive Robinson August 16, 2017 4:34 AM

@ Wael,

At least he went piecefully

You mean he went out with a bang not a bong 😉

Hahaha August 16, 2017 2:44 PM

@Clive
My argument is based upon the “Last Ditch Effort” if-then:
“Given your system has been targeted, mitigate and limit damage in this way.”

This news post is to alert you to the fact that a traditional Tom’s Hardware convo about selection and comparison doesn’t cut it anymore. The govt precludes the conversation, and they want it that way. They have the last word on how security is distributed and educated.

I am offering you the programming option in the wake of a deplorable scenario. I don’t think anyone needs to be downing that. From a practical perspective, netadmins do not have the luxury to think like that. Just running through an evolution of thought; what the government is pushing people into for any sort of guarantee.

I’m not arguing against logic/counter-logic; I am trying to provide a salient point beyond the tech-speccing hijack. I have to stroke egos.

Jared Hall August 28, 2017 1:09 AM

Exploits are a commodity. They are bought and traded as such. Like any other market, Microsoft, Google, Amazon, and the like, have to “pay to play”. The IC is a customer in that market, and a formidable one. And by IC, I mean all countries. Manufacturers and developers are coming to grips with the reality that people expect better quality. Bug Bounties have gone up. More bugs are being fixed. As far as I can see, that is the free market at work.

As for Microsoft and the WannaCry(pt) problem, Microsoft only patched Windows 10 in a timely fashion. Somehow they knew this exploit would be uncorked – in advance. That said, they should be sued and pay dearly. Again, it is the free market at work. It’s a beautiful thing.

I’m interested in the spin-off of Cyber Command. It seems we will be no different than the GRU. There will be another buyer in that marketplace.
