Schneier on Security
A blog covering security and security technology.
July 15, 2010
Russian Intelligence Gets Source Code to Windows 7
I don't think this is a good idea.
Posted on July 15, 2010 at 2:32 PM
• 73 Comments
Maybe they figure since the spy gave it to them anyway, may as well get some relationship-building out of it.
Well, otherwise MS won't get big contracts with the Russian government, and that's a notable part of the Russian subsidiary's income.
No doubt it will show up on warez.ru soon enough, then we can have a look ;)
BTW - is the US government going to accept Windows 7 if the Russians have the source?
Microsoft has actually been sharing their source code for a long time. They've done it with commercial and academic partners and other governments. At this stage, sharing it with the Russian government is probably not too big of a deal - all the good exploits are probably taken.
Bruce, could you clarify how your reaction isn't just promoting security through obscurity?
To me there are two differences:
1) Providing source code access is one thing... but giving the source code specifically to people with a vested interest in breaking your security, while NOT giving it to everyone else, seems like the worst possible way to do it.
2) Due to the proprietary/monopoly nature of the Windows platform, vendor-switching and code forking isn't as realistic as for other platforms with source code availability, so users are more vulnerable.
For most users, the greatest threat to their security will be the NSA--and other American three letter agencies--rather than the Russians. We can assume that Microsoft has already given Windows source code to the NSA.
Giving source code access to a low importance predator won't change much when the American apex predators already have access.
You mean that the SVR didn't already have this source code?
Most high-security US government agencies are more interested in having the source to protect their own applications (against both intrusion and low reliability). It's likely that the Russians have the same desires.
That's two out of the three major OSes in the desktop market. My sources say that the Russian intelligence also have complete access to the source code for Linux.
I'm intrigued to hear Bruce's reasoning for why it's a bad idea. My sense is the following. Perhaps I am stereotyping, but the real security problem with this is that somebody in the Russian government security apparatus will give the code, or knowledge gleaned from it, to some Russian spammer or corporate-espionage hacker friend in return for some favor... it's pretty corrupt there, from the stories I've heard. Will "rule of law" be respected for a US software product? On the surface, perhaps, but in practice, kinda unlikely. And the asymmetry, that a Russian hacker has the code and white hats here don't, is the problem.
From the Russian government's security perspective, this is probably a moderately helpful step, but certainly not bulletproof given the other toolchain and hardware layers that remain more opaque.
Could you please explain why you think this is not a good idea?
Well, if their software people are as capable as those agents they have here collecting background they could have easily gotten from Wikipedia instead, I don't think we have much to fear.
But by keeping the code private you would be relying on "security through obscurity". If we really wanted Windows to be safe, shouldn't everyone have access to the source?
Well, it was even more idiotic to give the NSA access to the source code, which Microsoft did for Vista and probably 7.
The idea was for the NSA to probe for security vulnerabilities since they're good at that (supposedly).
So of course what undoubtedly happened was the NSA found, say, ten vulnerabilities - and reported seven of them to Microsoft.
Obviously the NSA wouldn't report them all, they'd be idiots.
So now Russia has the same benefit, which is actually a good idea in terms of "mutual deterrence".
Now Microsoft needs to give it to the Chinese so all the major players are on a level playing field.
Unfortunately, the "level" of the playing field is somewhere around 5,000 feet below sea level given Microsoft's history of security.
Perhaps the motive is to insert a backdoor into all MS encryption so that Russia will be able to read everybody's secrets. Well, most of the world's.
I saw a keyboard driver for a Microsoft USB Natural keyboard (one of those wavy ones) that was about 130-ish MB.
Unless the Russians (and everyone else covered by these agreements) gets all the source code for that as well, what's the point?
Surely someone could write a perfectly secure OS and then find _somewhere_ in those 130MB to build in a back-door (through malice or incompetence).
Not only that, but every printer you add seems to bring along its own driver (why, oh why, do printers need drivers deeply embedded in the OS to work?!). Any one of those could be vulnerable.
Unless you have _ALL_ the source for _EVERYTHING_ you run as part of the OS (user apps _could_ be sandboxed), you're not really improving security much.
Is this a "bad thing" because the Russians used to be "The Enemy?" Or do the same issues apply if the code were released to (MI-5, NSA, CSIS, BfV, Shin Bet, SCSSI)?
I agree that there's potential for exploiting such a code release; I'd think it true for any of (GB, US, CA, DE, IE, FR) too.
"I think what happened is that someone in the Russian government said "We can not use Microsoft because we can not see if the USA had put any spy-ware in it" and Microsoft said "No problem, we will show you the source code." So now the Russian bureaucrat feels better."
This would rationalize it well, except that Microsoft sold access to their source code to Russia for cash
BUT simultaneously paid contempt fines to keep that source code secret from the U.S. Department of Justice
and its investigators, who were filing antitrust actions against Microsoft. Some speculated this was because perceptive inspectors would have recognized versions of captured source code that predated Microsoft's own acquisition deals... hmm.
If true, the Russians had leverage to get this Thanksgiving treat from the Microsoft banquet table.
Any thoughts, Clive?
Especially given the H1B situation most of those companies push for, it's naive in the extreme to think various foreign agencies and their less legal friends haven't had the source to all kinds of software the whole time.
Back in the late '80s or early '90s, IBM signed an agreement with Fujitsu (who were competing in the mainframe market) to supply all MVS source code within 90 days.
Very shortly afterwards, Fujitsu dropped out of the mainframe market. The source code is not necessarily going to give you anything other than a mountain of truly boring reading.
Actually, I think it *is* a good idea; assuming the same situation already exists with NSA, this is going to encourage them to report issues that they might otherwise have kept to themselves (since those Russian scallywags might find and use the (hypothetical) three unreported vulnerabilities that Richard Steven Hack talks about).
It's an interesting idea - by giving more access to the 'bad guys', you encourage the 'good guys' to be more good.
You all did your homework and read "Reflections on Trusting Trust", didn't you?
Before you can "trust" any code, you need to build the complete tool chain. You have to be able to build the executables and compare them to the ones you use.
You also read David A. Wheeler's "Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) - Countering Trojan Horse attacks on Compilers", didn't you?
So, now, did the SVR get the ability to build the tool chain from the ground up?
@Winter: you're overthinking this. I think it's safe to assume that Microsoft is not going to use a trusting trust attack to take over the Russian government; they're out to have a business, not a crime ring.
Moreover, Microsoft is a large company with tens of thousands of employees. There is no way they could keep a secret conspiracy with that many people; it's too easy for one whistle-blower to crack the whole thing open. So, even if they did try to pull off such an attack, we're very likely to hear about it.
I'm glad you know about the trusting trust attack, but let's be realistic for a minute. In practice, no publicly traded company is going to do such a perfidious thing.
Anyone with more than 1,500 seats of Windows under contract can ask for (and likely get) the source.
I am really surprised more people do not take advantage of this.
"I think it's safe to assume that Microsoft is not going to use a trusting trust attack to take over the Russian government; they're out to have a business, not a crime ring."
Sorry, I was too terse.
This is completely NOT what I was arguing. My point is that without the ability to actually BUILD the application/OS, you will never be sure that the source you see is the one actually used.
Moreover, the source is only meaningful to the compiler and its pre-processor. If you do not have these, how can you interpret the source? No compiler works exactly according to the specifications.
And I seem to remember that MS uses a compiler developed (modified) in-house. You probably know how meticulously MS adheres to (language) standards ;-)
Without a tool chain, the source is rather useless.
"I don't think this is a good idea."
And yet we're constantly told that Open Source is more secure because anybody can review the code...
How can you trust the compiler? Especially, if it's the modified one they gave you, which is the only one that works with the source?
How far do you go down the rabbit hole?
I think it makes Macs a good idea.
"How can you trust the compiler?"
This was the subject of David A Wheeler's thesis, see "Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) - Countering Trojan Horse attacks on Compilers"
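For readers unfamiliar with DDC, the core comparison can be modeled in a few lines. This is a toy sketch only: compilers are modeled as string-transforming functions, the trojan and source text are invented, and Wheeler's two compilation stages are collapsed into one direct comparison.

```python
def canonical_image(source_text):
    """What any faithful compiler produces: determined only by the source."""
    return "IMG(" + source_text + ")"

# Hypothetical published source of the suspect compiler.
COMPILER_SOURCE = "clean compiler source"

def trusted_compile(source_text):
    # The diverse, independent trusted compiler: faithful by assumption.
    return canonical_image(source_text)

def trojaned_compile(source_text):
    # The suspect binary with a Thompson-style trojan: it reinserts its
    # payload whenever it recognizes the compiler's own source.
    image = canonical_image(source_text)
    if source_text == COMPILER_SOURCE:
        image += "+TROJAN"
    return image

def ddc_check(suspect_compile, trusted_compile_fn, compiler_source):
    """Rebuild the compiler source with an independent trusted compiler and
    compare against the suspect binary's self-build. A mismatch means the
    suspect binary does something its published source does not say."""
    return suspect_compile(compiler_source) == trusted_compile_fn(compiler_source)
```

The point of the toy is only that the check detects a binary whose behavior diverges from its source; the real procedure needs the full tool chain that Winter mentions.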
This is what I was thinking about when Clive was discussing asymmetric risk yesterday.
But let's itemize all the risks first 'kay?
It's a good thing for corporations to obey the law.
--too many different laws lead people trying to make a buck to suck up to dictators. So nice to see you cave to the Chinese government, Google! So principled.
--someone could find 0days.
...it's already happening without source disclosure.
--The FSB could tamper with supply chain and introduce a back door.
...like MS would be dim enough to reintroduce the source into production?
--The FSB can satisfy themselves there are no NSA/CIA backdoors in the code
...like how? the compiled code could still have been tampered with by TLAs.
--MS could reveal IP to people who would pirate it for resale?
...MS's risk MS's gain. Meh.
What did I miss?
For those who are wondering about why this is a bad idea, as BF skinner notes,
"This is what I was thinking about when Clive was discussing asymmetric risk yesterday"
It was actually MS handing over the source code yet again (remember China getting it?) that made me think about asymmetric risk.
You can find my comment here,
If I start my own country can I get the source too?
No? Guess I'll have to stick with Linux then.
Doesn't seem like news
From the article:
The agreement is an extension to a deal Microsoft struck with the Russian government in 2002 to share source code for Windows XP, Windows 2000 and Windows Server 2000, said Vedomosti.
A senior security source with links to the UK government told ZDNet UK on Wednesday that the 2002 deal was part of Microsoft's Government Security Program. Nato also signed up, said the source. Having a number of different governments with access to Microsoft code meant it was possible that a government could find holes in the code and use it to exploit another nation-state's systems, said the source.
I'm surprised that people are surprised by this. This is basically what Microsoft's Shared Source program is for. MS supplied the Chinese government with access to their source code years and years ago.
The bottom line is that hackers don't need the source code to find vulnerabilities anyway. If governments around the world can feel a little bit safer because they were able to read the source, that doesn't bother me much. (I worry a little bit about them recompiling individual components with a backdoor added, or something... but truthfully they can do that kind of thing without the source too.)
@Alan: "And I seem to remember that MS uses a compiler developed (modified) in-house."
They use the same Microsoft C++ compiler that everyone else uses. They do use their own internal build system, but that's true of most large software companies; every software company I've worked for has had its own homegrown distributed build system.
I would not worry about "trusting trust" type of attacks. I would just worry about the fact that Windows contains tens of thousands of compiled binaries: DLLs, OLE/ActiveX controls, device drivers, side-by-side assemblies, little command-line utilities that no one knows about, and so on. Securing all of that stuff is a gargantuan pain in the ass. Even building a mandatory whitelist system for it would be a huge pain in the ass. There are literally millions of places in Windows where malicious code (e.g. backdoors inserted by the NSA) could be lurking.
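To get a feel for the scale problem, the naive part of a whitelist system is only a few lines; the pain the comment describes comes from the millions of files and the constant churn. A minimal sketch (the file extensions and directory layout are assumptions for illustration):

```python
import hashlib
import os

def file_digest(path):
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def build_whitelist(root, extensions=(".dll", ".exe", ".sys", ".ocx")):
    """Map every binary under `root` (relative path) to its digest."""
    whitelist = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(extensions):
                path = os.path.join(dirpath, name)
                whitelist[os.path.relpath(path, root)] = file_digest(path)
    return whitelist

def check(root, whitelist):
    """Return paths that are new or whose contents have changed."""
    current = build_whitelist(root)
    return sorted(p for p, d in current.items() if whitelist.get(p) != d)
```

Even this toy hints at the problem: the baseline is invalidated by every patch, every driver install, and every signed-but-unknown utility, which is why mandatory whitelisting at OS scale is so hard.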
Russia probably said they wanted to ensure security. And maybe the people telling them that believed it. It is probably one reason they wanted it. The other would be to aid them in finding zero-days. Everybody knows that.
Microsoft was simply bought off so they wanted to believe the lies. They were probably pressured in other ways, too.
Happens all the time in many ways.
As a former employee of a company which had some relations with the Russian security services, I don't believe that so-called FSB "IT specialists" could even locate a piece of code related to cryptography, security, etc. They are great at hunting stupid child pornography traders, or jailing innocent teachers for installing unlicensed Windows. But there is no chance that anyone in the government believes that thick-bottomed bureaucrats in affiliate IT consulting offices or brave pirateware fighters will hack into the code. Just the obvious Russian trick: take additional millions from the budget for "code review", get payoffs from local "integrators", and so on. With some variations the scheme is repeated every year. Anyone who could ever understand what is going on in an OS kernel will never work for a government or military agency in Russia. They just have nothing to offer ($15K/year compensation and some old losers around feeling nostalgic for their youth with Soviet vacuum-tube mainframes). I don't think that the code will be used improperly (to find holes, make backdoors, etc.), but I believe it will be "проёбан" (a special Russian word for "shamefully lost").
(paraphrase) 'incompetence, not m4d haX0rz sk1llz'
Could be. I don't know anything. Or I could be lying to pretend I don't know anything when I really do. Same thing with competence. "Oh, you know you can't expect anything bad from me... I am innocuous".
Playing down one's skills like that is a type of stealth.
But it is also true, incompetence tends to be the rule, not the exception in large, archaic systems.
The SVR, or some other branch of the FSB you haven't dealt with, surely won't have access to this code.
I wonder how they got money for it.
"Yes, bosses, all for defense. Forget the added angle of sharing it with our hackers. That is not a possible benefit we can sell to management to fork out the cash for this."
That is kind of like asking a boss to buy a car that has no wheels or frame for the same price as the same type of car with wheels and frame.
Just daily reality.
Surely you are not a proponent of security through obscurity are you Bruce? Microsoft should open the OS to everyone.
Don't worry, the code is written using Hungarian notation. Those stoopid commies are far tooo stoopid to understand that.
Very nice! (golf clap)
@Winter: Ah! Thanks for clarifying. Let me double-check I understand your argument now: you're saying that Microsoft's compiler probably doesn't adhere to the C++ Standard (because in practice no compiler does), and therefore the compiled binary will not be the same as the source code, so even if there are absolutely no bugs/vulnerabilities in the source code the compiled binary may still have problems, and therefore you can't tell precisely how secure the binary is merely by looking at the source.
I completely agree in theory, but in practice any changes in the binary are very unlikely to be exploitable, and any vulnerabilities in it are very likely to be visible in the source code. I think you're saying that Russia can't use the source code to find all the Windows vulnerabilities (and I agree!), but looking at the source code can get them over 99% of the way there, which is probably good enough.
"That's two out of the three major OSes in the desktop market."
Since Mac is nothing but Debian Linux, make that 3 out of 3. Oh, sorry, Mac FanBoys, did I shatter your little fantasy?
Speaking as someone who has done this for a living for a long time, this won't help them in any substantial way in finding and exploiting new vulnerabilities. The reason is that the top tier of hackers can reverse engineer the binaries enough to find what they need to get new 0-day. It's true that this opens up the search for some people who are less skilled, and so you can perhaps expect a small increase in the number of 0-day they can find, but how many do you need if it's A) unpatched, and B) no one is looking for it (cause it's 0-day)? 1 exploit will do just as much as 100; and even without source they should be able to find much more than just one.
Now one thing this will likely help them to do is make better and more effective rootkits and backdoors. Knowing in complete detail how the inner workings of the operating system expect things and how they are laid out will be a very big help in making these tools. Again, it's not necessary, but it really speeds things up and allows them to tailor specific rootkits to specific operations much more easily. So if anything, I'd consider this the real threat, if any.
It's all good. As long as they didn't also give them the COMPILER source ;)
Even the compiler source isn't enough to detect what may have been inserted into a previous compiler build and carried forward by each compiler since, through subsequent insertions... didn't the UNIX folks show that, ages ago?
Sharing windows source is security theater.
@McCoy Pauley: I thought OS X was FreeBSD. Looks a lot like it to me.
@ Michael Lynn,
"Speaking as someone who has done this for a living for a long time, this won't help them in any substantial way in finding and exploiting new vulnerabilities."
I don't agree with you, for some less than immediately obvious reasons.
"The reason is that the top tier of hackers can reverse engineer the binaries enough to find what they need to get new 0-day."
Actually, reverse engineering and fuzzing tools do not find as many faults as examining the source code.
Both executable examination and, most definitely, fuzzing are probabilistic in nature and have a very significant time component.
This "time component" is what you are missing/ignoring with,
"It's true that this opens up the search for some people who are less skilled, and so you can perhaps expect a small increase in the number of 0-day they can find"
It's not so much the number found but the number found in any given time, which is important from an attackers perspective.
Especially if you are discovering 0-days but not actively exploiting them: other people will turn your "known but unused exploit" public at a slower rate. Thus having the source code gives you a time advantage over those without (part of what asymmetric risk is all about).
But there is also another advantage to the source code. Code cutters are human with "artistic" leanings which means they have both "style" and "failings".
You can analyse the source code and fairly accurately see how many programmers have touched the code, and more importantly where (this tends not to come across as clearly once the code has been through an optimising tool chain).
If you then analyse each programmer's failings, it will give you a significant edge in finding their mistakes, and more importantly in each context.
Thus you will find exploitable bugs that fuzzers and executable code analysis just won't find this side of eternity, and you will do it several orders of magnitude faster than either.
But importantly, at a lot greater depth than fuzzing will ever reach. This is because once a very deep bug is found in the source code, you can walk it backwards, up through the layers, working out just how to exercise it. You normally cannot do this with fuzzing because the required amount of analysis is way too high.
Also, when it comes to 0-day attacks, they have a "probabilistic" shelf life; having the source code puts you way out in front of the "probabilistic curve" if you use it correctly.
With regard to
"but how many [0-day attacks] do you need if it's A) unpatched, and B) no one is looking for it (cause it's 0-day). 1 exploit will do just as much as 100; and even without source they should be able to find much more than just one."
You are forgetting the time element. Nearly all 0-day attacks that are used get seen, analysed, and patched at some point. When that point comes depends on the usage over time and what effects the attack has (i.e. how visible it is).
The slower and more covert the usage, the longer this time period will be, except for "chance" (which is how the Sony rootkit was found).
If your aim is to get as many machines under your control as you can within a given time frame (shades of a "cyber-warfare" build-up), you are best off having considerably more than one attack vector in your arsenal.
Which brings me to your "rootkit" analysis,
"Now one thing this will likely help them to do is make better and more effective rootkits and backdoors. Knowing in complete detail how the inner-workings of the operating system expects things and how they are laid out will be a very big help in making these tools."
This is exactly the same argument as for deep 0-day attacks. A bug that is deep within the code is effectively un-exploitable within a given time frame unless you have a "road map" for how to get there with your "payload".
Likewise your comment,
"Again it's not necessary, but it really speeds things up and allows them to tailor specific rootkits to specific operations much more easily."
Which brings me onto your final comment,
"So if anything I'd consider this the real threat if any"
Yes, but for all the attack vectors, not just the "rootkits".
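Clive's point that fuzzing is probabilistic with a heavy time component is easy to demonstrate with a toy: a parser whose crash is gated on a 4-byte magic value is essentially unreachable by blind random fuzzing, yet the trigger is obvious at a glance in the source. (The parser and its bug are invented for illustration.)

```python
import random

def parse_record(data):
    """A toy parser with a 'deep' bug: it crashes only on one rare prefix.
    Random inputs hit the prefix with probability on the order of 2**-32."""
    if len(data) >= 4 and data[:4] == b"\xde\xad\xbe\xef":
        raise ValueError("parser crash")
    return len(data)

def fuzz(parser, trials, seed=0):
    """Feed short random byte strings to the parser; count crashes found."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        data = bytes(rng.getrandbits(8) for _ in range(rng.randint(0, 8)))
        try:
            parser(data)
        except ValueError:
            crashes += 1
    return crashes
```

Ten thousand blind trials will almost certainly find nothing, while a source reader spots the magic constant immediately; real coverage-guided fuzzers narrow this gap, but the time asymmetry Clive describes remains.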
Russian intelligence also have complete access to the source code for Linux.
So does your old lady! So does anyone who takes the time to download it from a website. So does anyone who wants to reverse engineer any software. Tell me something: what is a *non-event*?
Okay here's the thing.
Yes, if one organization (NSA) has a set of secret data, they have the potential to deny, deceive, and exploit. But when that set of secrets is given to an equivalent organization (SVR), it's no longer asymmetric. So what MS has done in this case is restore the symmetry between the US/Russia/China.
It will always be asymmetric with respect to the huge majority of us. And face it: MOST of us, and I'm talking 99.999% of the human population, lack the skill, time, and tools to do disciplined source code analyses on something the complexity of an OS.
@Grazia: please have your sarcasm detector fixed, it's rather broken if you ask me :-)
@ BF Skinner,
"But when that set of secrets is given to an equivalent organization (SVR) it's no longer asymmetric"
On the face of it yes but in practice no.
We are, after all, looking at a very, very large code base that has (as a reasonable estimate) tens of thousands of bugs in "known classes" of bugs. The question is how many more are not currently in a "publicly known class", classes that might only be known to a very few because they have not been exploited (protocol bugs, for instance; think back to that issue with SSL...)?
The MS code base is sufficiently large that all the organisations they have released it to so far can work independently of each other with an incredibly low probability of stepping even close to each other, let alone on each other's toes. This is simply because the areas of the MS code base each organisation starts looking at will depend on its individual focus and requirements.
The advantage with open source is that there are generally sufficient "eyes" looking that there is a better probability of finding 0-days. In a way, finding a 0-day in open source is a way to earn your spurs and get your name known to enough people to improve your job prospects.
But yes, even open source is asymmetric, depending on the focus of the eyes concerned.
The trick is that when a new "class" of attack is found,
Open source code is usually scanned almost immediately and any issues fixed (professional pride on behalf of the programmers).
Closed source code, on the other hand... who knows. It falls to the senior managers within the organisation and how they perceive the risk; some code houses have good reputations, others not, and for most we haven't a clue.
Which is why we have had the arguments about "disclosure" and the how and the why of ensuring that many code houses "remain honest" with their customers.
And before anyone says "nebulous argument": I agree that there is little research in this area and therefore not much tested data on which to hang the argument.
I say we get several governments to fund a team of programmers to rewrite the base OS (at least the kernel mode) using low-defect methodologies. The Windows APIs and so on would be functionally modelled, and the new system would be a drop-in replacement that matches the functionality. The .NET runtime and MS Exchange should probably be redone. I nominate the Software Inspection Process, Cleanroom, or Praxis Correct by Construction as the methodology. All available tools should be used to expedite the process.
I mean, I'd love to start from scratch on a better OS but the legacy issue means that rewriting the existing OS is probably the best way to rid us of these bugs. A part of the deal should probably be that Microsoft use the same low defect processes during maintenance and extension of the new software. They would also provide the upgrade for free to existing and new Windows users.
As someone who made a living for a couple of decades as a hardcore C++ programmer on Windows (up to XP), and who had an MSDN subscription with examples and driver how-tos, I more or less had access to the source code back then, and on the cheap compared to what they usually charge for that access. They allowed you to load symbol tables and trace/debug right into the main system DLLs, for example. No big deal, and SoftICE was pretty nice too (really, the first virtual machine manager, one client). What you mostly saw was code that looked like it was written by rank beginners with no master plan or overview.
Giving the Russians that mess, with its "tacked-on features and gee-whiz first, even if they break other features" approach and security an afterthought at best, will probably drive them nuts anyway. It's not coherent or well written at all; even the largely hated MFC was done much better than core Windows code in nearly all cases.
The more time they spend studying that, the more crazy they will get, it's not healthy in there.
To the extent they copy any of it out to use, it will degrade their abilities! That stuff, plain and simple, stinks.
So it may have been a good decision to let the Russians self-pollute by reading that junk. If you don't like them. To the extent I studied Russians as a cold warrior, I found them fairly likable as a people, didn't like their government much though.
But then that goes for most governments.
Have to send a thanks to BillG for letting me make so much money fixing or working around the flaws....Linux for me, now that I'm retired and just use computers to do things for me.
First, when they give the source we can finally put the rumors to rest that the NSA has backdoors in it.
Secondly, it doesn't matter if they give it or not vulnerability wise. You don't need to have the source to discover vulnerabilities.
Thirdly, giving insight into source code, doesn't mean the end-user product (shipped version) is the same.
I think M$'s transparency is key here, and if new vulnerabilities -due to disclosure of all the source code- are found (which I don't think will happen), we will know about them very quickly anyway by simply monitoring our traffic on honeypots with XP on them.
@Clive: "Open source code is usually scanned almost immediately"
True in principle, not in fact.
Programmers have offered me this argument for a decade now, and my reply is: "For an entire MONTH, Sendmail.org offered a trojaned tarball. Their distribution server had been hacked. The hash wasn't touched. The only way it was discovered was by someone who finally ran the hash and reported it."
If people generally don't do the simple check of computing the hash to verify file integrity, can we expect that there are enough eyes to perform painstaking source code analysis? Scans are good (but tools like Ounce Labs are expensive) but only catch some things.
I don't think so. Companies are spending less and less. That profit motive that Clive alluded to earlier. They trust the supply chain and the market.
Could be there is a market for source code vetting. Like what NIST does with crypto product evaluation. I don't think NIST wants to go down that road. Probably 'cause there are no standards and it's a mutable design.
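The integrity check from the Sendmail anecdote is a one-liner today. A sketch of the workflow, with SHA-256 standing in for the MD5 of that era and all filenames as placeholders:

```shell
#!/bin/sh
# Verify a downloaded tarball against its published digest file before
# unpacking. Assumes the digest file is the usual `sha256sum` format
# ("<hex digest>  <filename>") and sits next to the tarball.
verify_tarball() {
  if sha256sum -c "$1.sha256"; then
    echo "OK: $1"
  else
    echo "REJECT: $1 (digest mismatch or missing)"
    return 1
  fi
}
```

Of course, this only helps if the digest is fetched from somewhere the attacker doesn't also control, which is exactly why the Sendmail trojan (hash published, untouched, but unchecked) sat there for a month.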
Hmm. I guess having the source to the Debian GNU/Linux or RHEL operating systems is also a bad idea. After all, we can exploit more holes just by having the source, right?
Meh. Scare tactic FUD, if you ask me.
@ Aaron Toponce
Actually, the point is that it's a mixed bag that can turn out good or bad at any given time. Considering the aims of Russian intelligence, their finding a bug via access to source could lead to significant exploits on classified networks. This was less of a problem in the past due to the obscurity of the software on them, but the government relies on defective COTS software like Windows too much these days. Hence, giving enemies source code might be good for Microsoft's sales/image, but bad for national security. I mean, everyone will certainly hoard obscure bugs. And use them.
@ BF Skinner
Excellent points. Open source is better because it increases the potential of finding most bugs. In practice, it may not. I can give more examples like your Sendmail bug. The OpenSSL entropy error went undetected for several years. The author of Mailman, a mailing list manager, said it had "a handful of glaring security problems" that lasted for three years. I think there was a UNIX tool bug that lasted over a decade, but I can't remember it. So, these have had very high scrutiny, but serious flaws remained undetected for years.
Jeremy Zawodny wrote a nice piece on why Open Source doesn't imply extra security:
That link came from this article, which has plenty of good links:
The crux of the problem is that software is complex and hard to understand. Bug finding is harder than understanding it. Identifying vulnerabilities by reading code for a few minutes is a rare skill. High profile projects have few such experts looking at their hundred KLOC or MLOC codebases. Most open source projects have no attention from experts. So, in theory it *can* help, in practice it doesn't. Open source is still better, but only for its potential.
As for software vetting, I think that's a valid opportunity. It already exists in safety-critical markets. It's a proven fact that certain development processes reduce defects by eliminating them throughout the lifecycle and by focusing strongly on connecting requirements to implementation. I've mentioned them before, so there's no need to repeat them here. If a team of trained developers applies one of these processes, the defect count will almost certainly be low. And the processes aren't always much more expensive: about the same; sometimes cheaper; often 30%-50% more while delaying release a few months; impractical in some extreme cases. So, your software vetting is being done successfully, just not for mainstream software, because the market doesn't care. And by care, I mean with their wallets. You know, what counts... ;)
I got props from @Nick P?
I respectfully disagree. The reason being that I can look at a binary and find a bug faster than almost anyone who relies on source (or fuzzing, for that matter) to find a bug; and I've worked with people faster than I am. It's not because I'm some superman (although I think most people who have seen my work would agree I do a fair job); it's because most of the real work is in evaluating commercial software of one type or another, so if you've got the experience and the skill to be playing at the top level, you already have to have been able to overcome the source requirement that greener researchers have. With some practice and a talent for it, reading the binary isn't a huge amount slower than reading the source for the purposes needed to exploit most vulnerabilities, even those in an OS. I agree that in very complex situations source would speed things up, but that's only an initial cost. Once you've reverse engineered the key structures in the system, they stay reverse engineered, so the second exploit doesn't need to redo this effort. My IOS reversing took me about 3 months, after which I was pretty much able to read the code (the code relevant to my work, anyhow) as well as if it were source. And that 3 months was only needed once.
@ BF Skinner
"I got props from Nick P? Yay me!"
LMAO. Am I that hard to please? ;) Your posts never cease to surprise me. Always an interesting read, if only because I never know what angle you will be coming from.
@ Michael Lynn and Clive Robinson
Alright, a real Cold War raging here. Let's turn up the heat.
I first have to say that Michael is right about good zero-day hunters not needing source code. It may even hamper the work of those with the highest skill. From what I've gathered, the reason is that common faults like buffer overflows and format string attacks are very easy to spot in assembler. (Michael, correct me if I'm wrong.) The hackers get used to looking at assembler to find these. They also have tools and bug-hunting strategies to help them. For instance, if I was doing it, I might look at all instances where the app takes external input and look for a way to inject code. It would be easy in assembler, esp. if I was used to it. If I was used to hunting assembler bugs, then suddenly was looking at a bunch of C++, I might have a hard time figuring out what the code is doing. So, Michael has a point: it is a *fact* that many good zero day hunters do it best in assembler.
As a tangent on the above, fuzzing is quite an effective way of finding bugs in non-robust applications. The ex-NSA employee who kept finding bugs in programs like Acrobat and Flash said exactly how he did it: he just kept flipping a bit throughout the file until the program crashed. Each time, he analyzed the assembler for potential vulnerabilities. He always had a zero day at every hackathon. And the Mac usually fell first, but I'll leave that for a future flame war. ;) The technique was simple and would have been slowed by source.
On the flip side, the source does in fact improve the situation for organizations seeking vulnerabilities. The reason is that it's easier to train auditors to find a large amount of flaws in C/C++ source code than assembler. If they already know C/C++, which is a large supply of talent, then Daniel J. Bernstein style training on some open source apps (maybe old versions) is all it takes to make them decent bug hunters. Give them some experience and have guys looking at the binary too and the effectiveness increases greatly. Static analysis tools can also be employed to focus on likely trouble spots. So, a team with source code gets both the easy and complex bugs.
I like Michael's idea that this will lead to better rootkits. I think this is a distinct possibility. I also think people looking at OS source code might find more covert channels that could allow intelligence agencies to subvert a machine in some way, either leaking critical info or hijacking it. The whole idea of source going to intelligence agencies focused on subversion really bothers me. The results won't be good.
"it is a *fact* that many good zero day hunters do it best in assembler."
This has been true in my experience. I pretty much agree on all points.
On the rootkit side of things, I like your point about making it easier to find better covert channels. We spend a lot of time discussing clever ways to break security, and clever ways to gather intel with a rootkit, but something we rarely talk about (and lots are very careful not to bring up) is that the biggest challenge for many (perhaps even most) intel ops using rootkits isn't getting the data, it's getting the data off the machine and out of the network. And having better covert channels would definitely make this easier.
@ Nick P (july 19 6:27pm)
"I say we get several governments to fund"
Hmm, they can't agree on how to deal with third world debt, so... it's probably best done by the likes of a standards body (IEEE / CCITT / etc.).
"the creation of a team of programmers to rewrite the base OS (at least kernel-mode) using low-defect methods. If it can be functionally modelled, then the new system will be a drop-in replacement that matches the functionality."
Yes, it would be handy, and the reason it has not happened is the near-monopolistic position of the big software houses (the lack of a distance vector and near-zero production costs make the standard market forces that economists rely on somewhat moot).
Arguably it can already be done with the various Unix standards (did I hear X/Open drift by on the wind? ;)
The fact that there was no clear leading Unix company meant they were forced to be cooperative in a competitive way, so the APIs got standardised.
But as it is only really going to work as a framework at the API level, the interfaces need to be clean and enforceably policed independently.
"The .NET runtime and MS Exchange should probably be redone."
They are the easy ones. MS has a reputation for having two APIs: "in house" and "for the competition". If true (and some judges appear to think so), then those using the "for the competition" API will be forever at a disadvantage.
As has been noted by Doug (DCfusor), the MS code is a real nightmare, and worse, it has to be dragged forward for "compatibility" because there is way too much "legacy" code and hardware with drivers. Worse, because the MS code was so bad, the third party code was even worse, so realistically you have a core of code that nobody now understands or wants to touch, but that still has to be supported (anyone remember IBM's OS/2? Well, I'm told some banks still use it).
My viewpoint for some time (ever since the 16/32-bit thunking issue) has been: ditch the legacy support or run it in a VM, go for 32-bit (now 64-bit) clean code, and dump the Intel segmentation model.
Realistically I think I'll still be saying this in 2030 (assuming I'm still here to say it), when the current head of MS is scaring the staff in his nursing institution.
@ BF Skinner (july 20 8:25am)
"True in principle, not in fact. Programmers have offered me this argument for a decade now..."
I was talking about new "classes" of attack.
Which your point about MD5 hashes (a human failing) did not cover in quite the way it appears.
Subsequent to that new class of attack, most open source package managers check the hashes by default. Further, a lot of repositories check the hashes against the binaries on their site and check the hashes against an offline list.
Also, the level of response of Open Source developers depends on how the code is maintained and by whom. Core code, for instance, tends to be well maintained and supported; orphaned or obsolescent packages may not get looked at at all.
Overall, depending on whose stats you believe, mainstream Open Source has a faster patch/fix time than Closed Source. The big exception of recent times being MS, who have decided (on the face of it) to clean up their act.
I actually know of a number of closed source packages with very high current price tags (CAD/CAM/Accounts) that have the same security issues they had two or more major releases ago. However trivial HCI issues usually get patched within the product cycle.
As I said it's an open area for research simply because nobody is taking a serious look.
With regard to vulnerability checking as a service,
Yes and No, I've often said an "Underwriters Laboratory" for code would be a sensible step on the software assurance issue.
However, although it works well for mechanical devices and some electrical devices, software is several orders of magnitude more complex, and it got there in less time than just about any other industry.
To be honest, my gut tells me that the likes of an MS OS is now of a size and complexity that it is generating a new class of security vulnerability with every iteration. You simply cannot realistically test in a production environment for new classes of vulnerability, only new versions of existing known classes of vulnerability.
Thus arises the question of liability on a test house. Whilst I agree it can be done for considerably less complex base systems, you have to ask yourself at what rate (power law) vulnerabilities go up with code size and complexity.
Which is one of the reasons I have talked about using what are effectively scripting languages for ordinary programming, with the language components being developed as "secure items" independently. That way (to a certain extent) you take the responsibility for secure design away from the code cutters.
And without getting into a long chat about it (I have had a small one with Nick P in the past), I'm aware of just how difficult it appears. But, more importantly, I'm also aware of some of the benefits.
@ Michael Lynn, BF Skinner,
It has become clear from reading your later postings with Nick P that we are not comparing apples with apples in our discussions.
I'm assuming 0-Day based on "new classes" of attack, and you appear to be thinking in terms of "new attacks" in existing classes.
For new attacks in existing classes, I would agree that assembler-level code reading will actually show up more new attacks than reading the source code, especially if an optimizing compiler has been used.
However new attacks in existing classes have quite a short shelf life as malware writers are looking in this area most of the time.
However finding new classes of attack is better done at the highest level you can, and once found can have an extraordinary life time.
I guess the thought is crossing your mind: what's the real difference? Well, buffer overflow attacks are a class of their own and have many attacks based on them. Because the mechanism is so well known, it is fairly easy to find, especially at the assembler level, as the usual code obfuscation via libraries, inline code, etc. has been removed.
A new class of attack I have been looking at is via side channels in protocols that can be used to change the flow of a program's code without having to inject a code sled or other jump code to get the attacker's required functionality; it simply uses the existing legitimate code differently.
The fact that you don't need to add executable code or know where specific function executable code resides in memory gets around many of the current security protections, such as those that detect a broken stack, or linkers that use random rather than fixed memory allocation for library code, etc.
One reason for this is embedded systems, where the executable code may not be mutable, or may not even be in the same code space (think strict Harvard architecture, for instance).
When you consider that a lot of I/O these days is actually done by an embedded processor on the interface card, the potential for this type of attack is increasing.
As I've said a number of times before the areas of future attack are going to be by side channels be they passive or active and most side channels will be found at the interface and protocol levels not down in the code.
No, I can pretty much find those just as easily in binaries too.
Oh, out of interest, anyone else keeping an eye on the (apparently) SCADA-targeted malware that jumps the "air gap" using a USB drive and an unfortunate feature of Win 7 (to do with the .lnk files that put up fancy icons in file managers, etc.)?
I've been keeping an eye on "air gap jumpers" for a while now as "Fire and Forget" virusware with "specific target related payload" is a sensible way to move forward for those not involved with the very silly side of profiting from malware.
It has however aided a few of those "Infrastructure attack" "Cyber-warfare" weenies get "their panties in a wad"...
I fully expect as I predicted some time ago for "fire and forget" with "target sensitive payload" attacks to increase. I also expect them to develop "control channels" that cannot be predicted and thus cannot be blocked.
Hey ho, it's just a consequence of the "malware market" developing as people learn how to "better capitalise" on their investment in the botnet idea...
@ Clive Robinson
OS extensions that mediate external devices, combined with traditional security techniques, are the answer here. Well, the technical answer. Good policy and strong enforcement on personnel are the foundation of keeping malware out of air gapped systems. But, we do have technical measures to pull it off now. I think it would also help to restrict the data formats to those that aren't really executable, like PDF-A rather than PDF. Everything should be simple and hard to use to launch an attack.
@ Nick P,
"OS extensions that mediate external devices combined with traditional security techniques, are the answer here. Well the technical answer"
Only for a subset of the issues; thankfully we have not seen any of those from the larger set yet.
Consider how a program is effectively a hard-coded set of rules, with user-entered data in one or more buffers and control information also stored as data.
Traditional attacks have revolved around breaking a buffer to insert executable code, or exercising type-casting errors to cause the program to make incorrect control choices.
With a mediating OS on a CPU that has separate code and data sections, where the code section is "randomly" loaded, your options as an attacker appear to be limited to playing with data or timing.
Even so, this is still sufficient in many cases. One of the failings of many "object oriented" programming languages is that an object's user data and control data are stored adjacently in data memory, and likewise objects are stored consecutively in memory. Thus it is still possible to find and manipulate data within this structure without having to know where the data is actually loaded in memory.
Thus (overly simplistically), for instance, an attacker's user input string that is in effect an SQL attack may bypass the input validation simply by overwriting control data adjacent to the input buffer.
The obvious solution (random data ordering) is just not practical in many cases; likewise, writing random values between user data buffers and control data is difficult to get right and thus to make work effectively (due to, amongst other things, timing attacks).
Thus, even with these measures, an attacker is always likely to have an attack vector with just input data, as it comes down to how the application code is written, its available functionality, and how user data is presented to it.
Even with the best written code, because it is within the bounds of the program's legitimate behaviour, the possibility exists.
Thus technical solutions at the OS or lower layers are not going to be foolproof, especially on single-CPU systems.
"Good policy and strong enforcement on personnel are the foundation of keeping malware out of air gapped systems."
As we unfortunately know, all policy has required exceptions (think software updates and security patches), and even the best personnel are subject to manipulation in one way or another (like senior management edicts).
"But, we do have technical measures to pull it off now."
Is only partially true. We also need what we don't have, which is all code cutters being security gurus.
Which brings me back to my previous argument about "code cutters" being relegated to "scripting", with security gurus writing and maintaining the scripting language elements that run in jails or sandboxes. Potentially it is more secure, but importantly it makes code cutters more productive at the expense of CPU cycles, which generally is unimportant.
"I think it would also help to restrict the data formats to those that aren't really executable, like PDF-A rather than PDF."
Yes, but that has its own issues in the "groupware" culture that businesses now work in. Thus the "senior management edicts" problem.
"Everything should be simple and hard to use to launch an attack."
With monolithic applications this idea has its own hidden problems that make it impossible to implement, hence my point about scripting as opposed to code cutting.
There is very little application-level production code that needs to be written in any kind of low-level or "bare metal" language. And the little there is can mostly have those parts extracted out to be run in a highly controlled environment.
Just about every study carried out shows that the number of programming bugs is related to the number of lines of code, not the level of the language used. Also that, irrespective of the level of the language, the number of actual "final code" lines per day remains constant.
Thus the simplest security measure to take now would be to move to much higher level languages and thereby cut down the number of lines of "code cutter" programming. From the management perspective this would also mean higher productivity...
As has been known for some time, properly crafted scripting languages are the fastest way to get working code for prototyping, etc. The downside has always been the "perceived need for speed", which several studies have shown to be a complete waste of time.
One study actually showed that the time to "re-code" the prototype the customer had signed off on exceeded the time it took for hardware price/performance to catch up with the prototype...
With more and more studies showing security issues have moved from the OS layer into the application layer, we still appear to be fighting the same old battles over not having learnt the lessons of history (i.e. web browsers are effectively multitasking OSes in their own right, but without any of the current OS security features).
As security practitioners we need to find a way to "sell our services"; currently we do it by "Chicken Little" FUD, which tends to make us as popular as "lepers at a debutantes' ball".
A much better way is to make security "part and parcel" of systems that will add to their bottom line, not detract from it. High-level languages with built-in security features are currently the way to go, unless we can solve the human issues that give rise to the constant bugs/lines and lines/day issues.
As for "security training" for "code cutters", forget it: very few software houses started doing it, and now we are in a recession it's a cost most managers are not going to add to the bottom line without the buy-in of very senior management.
Which brings me back to my old (cracked record ;) point that "security" should be seen as a "quality" process and treated as such, and thus "built in" from day zero (day one is a day too late ;)
I think the problem is using Microsoft software in a secured environment. How can you protect your security if you don't know what code you are using? And if you still want to use MS software, ask for the code and check whether your system is in danger or not. Or just use open source.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.