Reacting to Security Vulnerabilities

Last month, researchers found a security flaw in the SSL protocol, which is used to protect sensitive web data. The protocol is used for online commerce, webmail, and social networking sites. Basically, hackers could hijack an SSL session and execute commands without the knowledge of either the client or the server. The list of affected products is enormous.

If this sounds serious to you, you’re right. It is serious. Given that, what should you do now? Should you not use SSL until it’s fixed, and only pay for internet purchases over the phone? Should you download some kind of protection? Should you take some other remedial action? What?

If you read the IT press regularly, you’ll see this sort of question again and again. The answer for this particular vulnerability, as for pretty much any other vulnerability you read about, is the same: do nothing. That’s right, nothing. Don’t panic. Don’t change your behavior. Ignore the problem, and let the vendors figure it out.

There are several reasons for this. One, it’s hard to figure out which vulnerabilities are serious and which are not. Vulnerabilities such as this happen multiple times a month. They affect different software, different operating systems, and different web protocols. The press either mentions them or not, somewhat randomly; just because it’s in the news doesn’t mean it’s serious.

Two, it’s hard to figure out if there’s anything you can do. Many vulnerabilities affect operating systems or Internet protocols. The only sure fix would be to avoid using your computer. Some vulnerabilities have surprising consequences. The SSL vulnerability mentioned above could be used to hack Twitter. Did you expect that? I sure didn’t.

Three, the odds of a particular vulnerability affecting you are small. There are a lot of fish in the Internet, and you’re just one of billions.

Four, often you can’t do anything. These vulnerabilities affect clients and servers, individuals and corporations. A lot of your data isn’t under your direct control—it’s on your web-based email servers, in some corporate database, or in a cloud computing application. If a vulnerability affects the computers running Facebook, for example, your data is at risk, whether you log in to Facebook or not.

It’s much smarter to have a reasonable set of default security practices and continue doing them. This includes:

1. Install an antivirus program if you run Windows, and configure it to update daily. It doesn’t matter which one you use; they’re all about the same. For Windows, I like the free version of AVG Internet Security. Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.

2. Configure your OS and network router properly. Microsoft’s operating systems come with a lot of security enabled by default; this is good. But have someone who knows what they’re doing check the configuration of your router, too.

3. Turn on automatic software updates. This is the mechanism by which your software patches itself in the background, without you having to do anything. Make sure it’s turned on for your computer, OS, security software, and any applications that have the option. Yes, you have to do it for everything, as they often have separate mechanisms.

4. Show common sense regarding the Internet. This might be the hardest thing, and the most important. Know when an email is real, and when you shouldn’t click on the link. Know when a website is suspicious. Know when something is amiss.

5. Perform regular backups. This is vital. If you’re infected with something, you may have to reinstall your operating system and applications. Good backups ensure you don’t lose your data—documents, photographs, music—if that becomes necessary.

That’s basically it. I could give a longer list of safe computing practices, but this short one is likely to keep you safe. After that, trust the vendors. They spent all last month scrambling to fix the SSL vulnerability, and they’ll spend all this month scrambling to fix whatever new vulnerabilities are discovered. Let that be their problem.

Posted on December 10, 2009 at 1:13 PM

Comments

Robert December 10, 2009 2:10 PM

In most cases this makes sense. You’re right, Bruce: most vulnerabilities come and go, and there’s little you can do about them. I would add a few points for clarification:

  1. The audience for your entry here ought to be considered end-users. Network and security admins would be wise to inform themselves of the details affecting their particular charge and attempt compensating controls. E.g., maybe they monitor their logs a little more closely. Maybe they can reconfigure their software, depending on the vulnerability. Or maybe they just need to call up their vendors and make sure a fix is coming soon.

  2. For end users who have very critical and/or sensitive apps that they’re using, I recommend contacting the service provider to inquire about their resolution path. Wells Fargo, eBay, and Amazon need to hear clearly that people take security seriously and that something needs to be done to take care of it. (Ideally they can lean on their vendors. Vendors do care when Wells calls and demands a speedy fix.)

  3. Vendors need to address these things promptly and openly. Hiding it or trying to keep things quiet while they slowly put out a fix shouldn’t be considered acceptable. Both customers and those customers’ end users should continually demand better.

Thomas December 10, 2009 2:28 PM

  1. Install an antivirus program … Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.

(ducks and waits for the flamewar to start 🙂

  2. Configure your OS and network router properly. Microsoft’s operating systems come with a lot of security enabled by default; this is good.

Actually, all the mainstream OSes do; singling out only one for praise here seems a bit odd. If anything, Windows was lagging behind the others for a while in this respect.
As for routers, the consumer-grade ones all seem to insist on HTTP-based configuration (no HTTPS) with no way of disabling access to the configuration page over wireless, so pick a good WPA2 password!

  3. Turn on automatic software updates. … they often have separate mechanisms.

One of the really neat things about most Linux distros is the package manager. It looks after installing, patching and uninstalling everything. (I’m not sure what Macs do.)

PackagedBlue December 10, 2009 2:51 PM

Without properly writing up the issue, another security blog asked how good crypto like TLS/SSL could be cracked with such a fast handshake. Well, read between the lines: it can’t be; the attack only needs a man in the middle during renegotiation.

Those who have read this blog within the last six months, even if they didn’t already know about such an easy hit, could have guessed it. I did, and I bet many others did as well. To those who do not pay for extra security: you get what you pay for. I am happy that the security industry has blogs that report on different levels of IT.

It costs a lot to learn, test, ponder, fix, repeat, and monitor the industry.

Free users beware, quality/security is a paid industry.

Not the best write-up on this, but I felt it was an important point to make: reacting is problematic; implementation and working with the industry is everything.

David December 10, 2009 3:48 PM

To add another recommendation: look over your credit card bills as they arrive. If somebody does steal your credit card information, you can still dispute the charges if you find them in time. I do this in particular for credit cards used on the Internet, and my family does not use debit cards there. (They may well have the same legal protections, but in the event of a dispute it’s nice to be on the side where the money is from the start.)

neill December 10, 2009 3:55 PM

For a successful MITM attack the attacker has to be “in the middle”. In a public location (e.g. public wifi, other people’s networks, libraries, etc.) you don’t know who is running the network, so be careful.

In other locations, I’d say you have to trust someone … your bank, your boss (your wife?), your phone company … otherwise you’ll have no life anymore.

Any OS or protocol has vulnerabilities; it’s just a question of time and disclosure when/if you hear about them.

Strange, though: we trust an unknown cab driver with our life, but if there’s a product recall on our car we freak out.

RH December 10, 2009 4:08 PM

@Thomas:
I think it’s an interesting datum that Bruce chose to single out Windows for good defaults. Windows has been historically lax with these. If Bruce can argue that they’re “good,” then that’s a sign that the efforts Microsoft has been putting in that direction are helping, and that it should continue along that path!

Clive Robinson December 10, 2009 4:26 PM

The question is how long before we see this sort of fault being used by defense teams in legal cases.

I’m fairly certain that this sort of protocol error is a little more common than we like to think.

As Bruce notes there is nothing you can do about it, especially if it is only known to a few.

Thus we get to an interesting point: we have an infrastructure that we in the first world are very dependent on (more so every day), yet that is effectively “unknown” to the majority.

Unlike tangible (physical) security, intangible information security has some real problems.

The primary one is “no physical constraints”: an attacker is not limited in what they can do by having to be physically present.

The second is “force multipliers”. Tangible force multipliers (machines) have physical constraints that affect what can be accomplished. Intangible force multipliers do not have physical constraints; they are copyable as many times as you wish at little or no cost.

This means that information attacks are only constrained where they impinge on the physical world, which is “storage” and “communication”. These have very real costs, but unlike a safe cracker’s, the tools in use do not belong to the thief but to the victim.

Thus the information thief is not constrained by the cost of their tools or the energy to use them.

Too much of our information security is based on physical security principles, and thus carries underlying assumptions that might or might not apply.

Thus we should be cautious lest our perceptions lead us astray.

Some of the figures about “protocol faults” that have become known indicate that only around 3% (so far) have been (knowingly) exploited.

Which brings us around to the question “why?”.

Historically most of the exploits that have happened have been “low hanging fruit” type activities used for “self promotion” (web defacement, etc.).

Less than five years ago I noticed a change in attackers: their desires had turned from “ego food” to “money”, they became “guns for hire”, and malware started to become a lot less visible.

Finding protocol errors is not an easy activity; exploiting one is, however, extremely damaging.

If you found a long-standing protocol error, you may well have used it to put rootkits on selected servers, etc.

Thus the notion that attacks can be stopped by “pulling the plug” is a false one when it comes to the likes of cyber warfare.

The enemy is inside your defences in a cloak of near invisibility, so shutting the “city gates” will not get you much, if anything at all.

The enemy can use your resources via the force multipliers that they have installed (almost) invisibly on your systems…

Which raises the age-old question,

How do you tell if the enemy is within?

Especially if they cannot be seen on running critical servers and services…

Nick P December 10, 2009 9:54 PM

Clive, I think you make a good case for verified software. Additionally, the SSL protocol makes a good case for simple, non-novel protocol designs whose constructs are strongly tied to usage scenarios. Personally, I’d like to see a protocol like SSL designed and checked by people with differing expertise: cryptographers; mathematicians (formal verification->vital here); safety- or security-critical software engineers; people good at secure coding; hackers in general. Each will look at it from a different perspective and be able to identify problems on many levels. Many of the problems found shouldn’t have ever occurred if high assurance development was used.

For an example of how to get things right the first time, one can look at NSA’s Tokeneer Demo, Blacker VPN, Multics, VAX Security Kernel, MILS kernels/middleware, and recent HAIPE (government’s IPSec) efforts. Many of these systems have been deployed in high risk areas fighting sophisticated attackers for years. They cost a lot of money upfront, but it more than paid off in achieving their goals. Spending a few million bucks and 1-3 years to design a perfect SSL v.4 may be better than designing a decent (read: half-assed) protocol that needs constant revision and produces continuous losses/downtime/etc.

Maybe it’s just me, but I think the world needs high assurance. We nearly had it again and again, because NSA required it; then they accepted lesser products for quick time to market and killed most high assurance development efforts. I’m hoping this doesn’t happen with the new MILS architecture efforts, as stuff is progressing nicely. With some government financial backing, we could have tons of reusable, medium-to-high assurance components within months or years that we need TODAY.

I’m not all that excited about cloud computing and web 2.0 when OS kernels, network stacks, PHP interpreters, JVM’s, crypto libraries and key protocols are all flawed. If you want a sturdy house, you start with a secure foundation. You can’t save time/money by doing it later: by then, you’re leaning on it so much that changing it will knock your ass on the ground.

We need to change this. We need high assurance. We have many tools, processes and people who are capable of at least medium assurance. We have many building blocks. Now, we need to start laying new foundations. The fun part is that increasing assurance of just a few critical components of our system can cause far-reaching improvements. At the very least, these enabling or infrastructure-type components should be high assurance. The most important standard libraries must be at least medium. We can do this stuff today. We just lack the will (or incentives). Security incentives are perverse…

blue92 December 10, 2009 10:04 PM

“How do you tell if the enemy is within?”

Yell “fire”, see which people reach for water and which reach for gasoline.

Or, slightly less obtusely, publish false vulnerabilities and see who tries to exploit them… Or silently bear trap the previously real vulnerabilities.

If the history of crypto teaches us anything, it’s that nothing is ever infinitely invulnerable. [Or at the very least that we should be wary about assuming invulnerability.] This necessarily means that the gate swings both ways — if the white hats can never be safe, neither can the black hats. Whether or not fighting for that safety entails certain deals with the devil during the actual execution is the nasty, tricky part.

Just don’t put all your nest eggs in one packet.

Winter December 11, 2009 2:46 AM

I wonder who else notices the contradictions in these statements:

“Apple Mac and Linux users can ignore this, as virus writers target the operating system with the largest market share.”

Versus

“Install an antivirus program if you run Windows, and configure it to update daily.”+
“Turn on automatic software updates.”

If everybody runs AV and updates regularly, then according to this logic, these are the OSes with the largest market share. Therefore, these users should be the most vulnerable. Ergo, AV and updates make you more vulnerable.

So, either, AV software and updates are useless, because they have market dominance and that is the only thing that counts. Or, it helps to patch your system with AV, but then you would be better off with a system that doesn’t need this AV protection in the first place.

You cannot have it both ways. And I know that must be possible, because there do exist OSes for which no working virus has been produced, ever. Not even as a proof of concept.

Winter

Chris S December 11, 2009 3:28 AM

“4. Show common sense regarding the Internet.”

This is without doubt the hardest thing, because most people just don’t have any idea what this type of common sense would mean. When talking about SSL/HTTPS, most people probably don’t understand when they need to be sure they are using these secure protocols. I have had occasion to tell users that they should always make sure they’re on HTTPS when doing commerce or entering login details. But even that is not enough, as common attacks on HTTPS will switch the protocol or URL in such a way that an average user may not notice.

I’ve come across login pages that submit details via HTTPS despite the container page being HTTP, the developer seemingly not understanding that a spoofed page could be made that a user would view as identical, because HTTPS is not used on the whole page. How is an average user supposed to use common sense in situations where they’re not even aware of what could possibly be wrong? Is the answer to cross your fingers? Most typical users I know still insist on using the same password for multiple or all of their logins.

I feel relatively confident that when I use a web site I understand what to check, and understand when I may be “in the open”. I just think that most people don’t and there ought to be a place where they can learn this “new common sense” they need for life online.

indrek December 11, 2009 3:43 AM

Is it really an SSL/TLS bug?

I would say that this is an HTTPS bug (lack of proper renegotiation handling, etc.).

Regular SSL sockets are just fine.

Douglas O December 11, 2009 4:10 AM

in a public location (eg public wifi, other peoples’ networks, libraries etc) you don’t know who is running the network – so be careful

From most of the posts I see here, the target seems to be end users, and these people really do not understand this “be careful”. After all, the best advice on open wifi has been to use a VPN (with SSL/TLS, of course)… an oxymoron considering the current situation, no? I find the “avoid” option easier… because just when you think you are visiting an SSL’d site, someone is trying to sslstrip you!

Jonadab the Unsightly One December 11, 2009 6:14 AM

SSH, fortunately, is not vulnerable.
http://marc.info/?l=openssh-unix-dev&m=125746978116831&w=2

So mostly we’re talking here about the TLS used for HTTPS and email.

In the case of email, your mail server is probably located at your ISP, directly upstream from you in most cases, so in order to get “in the middle” the attacker would have to actually break into your ISP, not some third-party system. That’s going to make the thing pretty hard to meaningfully exploit. Why not just break into the mail server itself?

That leaves https as the other major use case, but I have become convinced in any case that https uses SSL in an inherently poor fashion that does not provide very much security at all. The mere fact that the client doesn’t alert the user if the certificate changes (as long as the new one is still signed by one of the cert authorities on The List) is significantly more worrisome than this MITM vulnerability, as far as I’m concerned.

Peter A. December 11, 2009 6:34 AM

Jonadab:

Lack of certificate change warning is not that big an issue as it is still checked for validity and against the domain name. Getting a blessed cert under some big company (like a bank) name and domain fraudulently is not that easy. (But one could try to register a similarly-spelled domain name and get a cert for it under nearly any name he wants).

Having said this, I always check the cert fingerprint before I log in to my bank – and know when the cert is about to expire so I am not surprised by a change. When it changes, I examine it carefully and memorize a few digits from the new fingerprint for subsequent accesses.
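For readers who want to automate the fingerprint check Peter A. does by hand, here is a minimal sketch (my illustration, Python standard library only; the host and pinned value are placeholders you would replace with your bank’s):

```python
import hashlib
import socket
import ssl

HOST = "www.example.com"              # placeholder: your bank's hostname
PINNED_SHA256 = "0000...placeholder"  # paste the known-good fingerprint here

def cert_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()  # still performs normal CA/hostname checks
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

fp = cert_fingerprint(HOST)
if fp != PINNED_SHA256:
    print("WARNING: certificate changed! fingerprint:", fp)
else:
    print("certificate matches the pinned fingerprint")
```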

gattaca December 11, 2009 7:12 AM

Hmm, I am not sure about the anti-virus software. I use AVG myself, but I know all such software is weak on heuristics. The last malware I got was a trojan horse (an exe pretending to be a PDF file), and my up-to-date AVG did not recognize it even when I forced it to scan the file by hand. I did not install the program, though; that was just me. Initially I got the program from a coworker asking me to check whether I could open the UPS shipping information she had received by mail.
Moral of my personal story: non-verifiable benefit vs. obvious failure in a basic task. I think the installation of antivirus software is like faith in a lucky charm. Common sense and caution do the trick as well.

Clive Robinson December 11, 2009 7:17 AM

@ Jonadab the Unsightly One,

“In the case of email, your mail server is probably located at your ISP, directly upstream from you in most cases,”

Yes, along with the web cache, the DNS server, and in many cases the DHCP server.

“so in order to get “in the middle” the attacker would have to actually break into your ISP, not some third-party system.”

Sadly, not true.

Let’s take the simple case of a cable modem. Local users share the network segment in various ways.
Without going into details (you can look them up quite easily), it is often possible to subvert the “upstream services” simply by responding “faster” at the lower network levels (think Ethernet, not IP).

Thus there are various tricks, such as making traffic go through one of the other machines on the network instead of through the switch/hub to the gateway, etc. (by faking a bridge, etc.).

Depending on what level they do this at, you may not be able to spot it without really knowing your low-level network protocols and many of the details of the network segment you are on.

“That’s going to make the thing pretty hard to meaningfully exploit.”

No. Once you “own” one of those machines on that network segment, the rest are dead meat.

“Why not just break into the mail server itself?”

It is more likely that the ISP has configured the mail server to a much greater security level than the average user has their Windows box.

Thus getting “in between” is doable at many ISPs and organisations.

It’s just a matter of doing it the right way…
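One cheap way to notice the low-level games Clive describes is to watch whether the gateway’s MAC address suddenly changes, a common symptom of ARP tricks on a shared segment. A minimal sketch (my illustration; assumes Linux and its /proc/net/arp table, and the gateway address is a placeholder):

```python
import time

GATEWAY_IP = "192.168.0.1"  # placeholder: your router's LAN address

def gateway_mac():
    """Return the MAC currently associated with the gateway in the ARP table."""
    with open("/proc/net/arp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            if fields[0] == GATEWAY_IP:
                return fields[3]
    return None

baseline = gateway_mac()
print("gateway MAC baseline:", baseline)
while True:
    time.sleep(10)
    current = gateway_mac()
    if current != baseline:
        print("ALERT: gateway MAC changed:", baseline, "->", current)
```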

David December 11, 2009 8:46 AM

@Winter: Microsoft has been getting better, but I’d still recommend MacOSX or some Linux for security. These aren’t the right choice for everybody, though, since lots of people want to run applications that don’t run on MacOSX or Linux, and it can be harder or more expensive to get a Linux or MacOSX machine. For those people, I’d recommend a good antivirus program, and keep it updated.

@RH: Microsoft has, as far as I can tell, been making a good deal of progress towards security. Unfortunately, MS has a lot of historical baggage, including old programs that still have to run, and differences in culture. Apple broke a lot of backwards compatibility with OSX, and Linux is from a different culture, one with multiuser machines from the start. It will take years to overcome that.

Chris S December 11, 2009 8:49 AM

@Clive,
Yes, and in my apartment building there are 180 unknown neighbors on a building LAN, which makes it ever so much easier to ARP spoof and redirect, etc. Insane, really. Or in wifi situations one can easily be intercepted by rogue access points, and most people have no way to tell. I often use an SSH tunnel, but I do always wonder what vulnerabilities are out there that I’m not even aware of.

Rob Lewis December 11, 2009 9:33 AM

@Clive Robinson,

“How do you tell if the enemy is within?”

If you can’t tell whether the enemy is within, shouldn’t one change the assumptions one works with, particularly to assume that they ARE there?

Based on that assumption, the control requirements would generally shift to being more detective/deterrent in nature.

Not Really Anonymous December 11, 2009 10:04 AM

I take issue with VAX getting things right initially. It had a poor privilege design. Processes had a list of flags for current privileges and another list of privileges that the process was allowed to have. But there was nothing controlling the ability to enable privileges from the allowed list.
This resulted in such things as the privileged mail program (which needed physical I/O privilege) disabling physical I/O while running the terminal handler; the handler could be an executable written by an end user, which could turn physical I/O back on and own the system.

Clive Robinson December 11, 2009 2:17 PM

@ Rob Lewis,

“Based on that [they are inside] assumption, the control requirements would generally change to more detective/deterrent in nature.”

And that is the problem.

If they are there on a running production service server, then detecting them may be a tad difficult if their “rootkit” is any good.

If not, then you can install tools to detect them (even something as simple as Tripwire has a high chance of detecting the initial stages).
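As a toy illustration of the Tripwire idea (baseline known-good file hashes, then re-scan and report differences; my sketch, not Tripwire’s actual implementation):

```python
import hashlib
import json
import os

def snapshot(root):
    """Map every file under root to its SHA-256 digest."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; a real tool would log this
    return hashes

# First run: record a trusted baseline (store it offline, or it can be doctored):
#   json.dump(snapshot("/usr/bin"), open("baseline.json", "w"))

# Later runs: compare the live system against the baseline.
baseline = json.load(open("baseline.json"))
current = snapshot("/usr/bin")
for path in sorted(set(baseline) | set(current)):
    if baseline.get(path) != current.get(path):
        print("CHANGED:", path)
```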

Once they are burrowed in, the only way is to bring the server down and run a proper check on all the semi-mutable memory (including Flash ROM and battery-backed RAM on the motherboard and I/O cards, etc.).

But who has the resources to just “take a production service server down” on the “off chance”…

Hence some are looking to “the cloud” to do such things.

My take: get the extra resources and use three-way fault tolerance (as 2+1 load-balanced), back down to two-way (load-balanced) at slack times, and check the downed server.

But this requires a high level of admin skill and can be very expensive on proprietary OSes, etc.

And it has issues in the current economic conditions when execs are only looking 2Q away at most.

Thus now would be a good time to infiltrate if you are the “enemy who wishes to be within” 8(

Nick P December 11, 2009 2:50 PM

@ Not Really Anonymous

I wouldn’t blame you for taking issue with VAX. I’m only referring to the VAX Security Kernel they were building to meet the Orange Book A1 level. VAX, VMS, etc. would be hosted on top of it, and of course the configuration of subject and object privileges and access rules would have to be right. While VAX had plenty of problems, as you illustrated pretty well, the A1 Security Kernel was kick-ass. I don’t think it made it directly into production, because they stopped working on it when NSA shifted their goals and requirements. If they hadn’t, current OpenVMS users might have been using a modified (for Intel) EAL7-class kernel, instead of a measly EAL4 kernel. A1 VAX is just another loss in the pages of high assurance R&D history…

Well, at least we get the usual “Lessons Learned” documents, which were pretty damned useful to one of my projects. Similar documents on PSOS, LOCK, GEMSOS and all of NICTA/Dresden’s stuff give plenty of hints on how to build high assurance stuff in practice. Not entirely a waste, I guess.

Pierre December 11, 2009 3:04 PM

  • install an antivirus?

This is in total contradiction with the previous Bruce (the one who stated that more software does not make you more secure, and that less software is the way to go).

By the way, all antivirus products (including Bruce’s choice) have dozens of vulnerabilities per year, so we are not discussing mere theory.

  • enable automatic updates?

Err, is Bruce reading newspapers, or, ahem, Microsoft reports, or, say, Secunia advisories?

Each new patch actually introduces new vulnerabilities to your system, so it may sound like a really lousy way to resolve the problem…

Once upon a time, Bruce’s blog was about security. What happened?

Jeffrey December 11, 2009 3:45 PM

@David – “…but I’d still recommend MacOSX or some Linux for security”

These are not any more “secure” than Windows. Every day more and more issues are found with these operating systems.

The more market share they gain, the more this is proven.

PackagedBlue December 11, 2009 10:44 PM

re: Clive, “how do you tell if the enemy is within?”

In a nutshell: your fan briefly goes to max. Thanks, how is that for reacting to built-in insecurity?

Modern hardware makes you want to cry: tons of work on open source for “security”, and it is all just a cruel joke for trusted computing.

The creative class is coming, and it finds things more rewarding than working on acceptable computer security, if there really is such a concept.

Nick P December 11, 2009 11:22 PM

@ Jeffrey

This is the point I’ve been making in a few posts. UNIX wasn’t designed to be what it is today. Mac and Linux are based on it, although Mac is also based on the least efficient of all microkernels. While people mention that these are reliable or secure today, one must remember that the UNIX family of OSes has had almost four decades and thousands to millions of man-years of effort. It barely competes with Windows on the desktop, and in the server market it’s still no more secure/reliable than ancient Multics was in its first year, or than modern VMS or IBM System z, after those had a quarter to half the development time and way fewer developers. Obviously, the overall architecture or approach of these OSes is flawed.

The truly secure OSes today are designed with security in mind from the get-go, and their mechanisms are flexible, with policy built on top of them. SELinux’s Type Enforcement (though not its implementation), LOCK’s capability model, and modern separation kernels’ (e.g. Integrity-DO178B or OKL4) partitioning schemes are good examples. Many different policies can be efficiently implemented on top of these. These follow the high assurance model, which must withstand sophisticated attackers with plenty of resources. All low assurance software like Windows and the UNIX-like OSes has only proven its original EAL4 claim to fame: it only protects against “casual or inadvertent attempts to breach security.” Modern popular OSes on commodity hardware aren’t secure… none of them… they are merely convenient or cost-effective for reaching most personal or business goals. That may be enough for one to desire them. They aren’t secure, though.

Clive Robinson December 12, 2009 6:59 AM

@ PackagedBlue,

” …computer security, if there really is such a concept.”

Yes, I would say there is both computer safety and security, but it’s a quality state of mind 😉

High assurance is good for both safety systems and security systems. And invariably you get there by partitioning and “state control”. As a running process it is not “high efficiency”, but that does not mean “low efficiency”.

Like quality, security and safety have to be built in as a fundamental part of the “engineering” design process.

You have to accept there are design trade-offs, like the automotive “seat belts vs. air bags”: the simple fact is both will fail, and engineering beyond a certain degree is prohibitively expensive. So you include both and “overlap” their respective features. Thus there is a “sweet spot” where you reduce the expense and increase the reliability.

Every state in both safety systems and security systems must “fail safe”, and all “exceptions” must be handled correctly, including “full back out” until after “verified commit” (how much commercial software can you say you’ve seen that in?). This has a secondary benefit of making the system not just high assurance but also high availability (with only minor work at the input-side interface).

Where the systems differ is in “side channels”, and this is where the problems really are in “security system engineering”. You have to accept that you cannot design them out of usable code/hardware blocks, so you throttle them back at the interfaces by “clocking the inputs and outputs”, “with all data values being protected” and the correct “error handling”.

These things are neither “magic” nor “artistic”; they are a well-proven engineering approach based on realistic axioms.

The downside is the solutions will not win “bang for your buck” “marketing specs”. But then why should they?

After all, you don’t ship Ming pottery via a motorbike “dispatch rider”, do you?

Nick P December 12, 2009 1:11 PM

@ Eli Talmor

Nice overview of eCommerce & browser threats. I’ll look into the proposed solution, but this is a solved problem. The MILS Architecture is one of the building blocks I use for medium assurance systems and one of the first applications was secure online banking (via the Nizza Security Architecture variant). MILS basically starts with a secure, small kernel and runs everything else in userspace on top of it. All sharing is done by kernel IPC & is subject to its access rules. This allows isolation of different components and enforcement of a security policy w/ confidence.

Nizza Architecture uses the L4/Fiasco kernel and trusted services, like the Nitpicker GUI. Nitpicker is a small (i.e. verifiable) app that ensures user input goes only to the app that has focus, and displays visual output in clearly labeled, unspoofable windows. Each VM or app running on the security kernel can produce a virtual screen or request input. The kernel and its GUI system decide to accept or reject, and there’s no bypassing this (by design).

One of the demonstrators TU Dresden built on its Nizza Architecture was eCommerce. Basically, the browser and other untrusted components were in a Linux VM. A secure viewer and signer is in a separate partition. The user sets up a transaction in the untrusted partition, details of which are transmitted to the secure viewer. The user switches focus to the secure viewer and sees exactly what they are signing, types in a password (which can’t be keylogged by the browser), decrypts the private key, signs the transaction, and returns the signature to the untrusted partition. This is sent to the bank. The isolation mechanism is only 15 KLOC and the secure viewer is maybe around 50 KLOC. Minimal, app-specific trusted computing base. No keylogging. No spoofing. Using regular x86 hardware and a Linux desktop. Why do we need a complicated authentication infrastructure when a simple separation kernel and a Linux VM will do the job with greater assurance? Check out TU Dresden’s Demo CD to see it in action. Here are some examples of the MILS-like approach to solving this and many, many other problems (a toy sketch of the signing flow follows the links below).

Nizza Security Architecture

http://os.inf.tu-dresden.de/papers_ps/nizza.pdf

Green Hills INTEGRITY PC

http://www.ghs.com/products/rtos/integritypc.html
http://www.ghs.com/products/rtos/INTEGRITY_workstation.html

LynuxWorks LynxSecure Hypervisor

http://www.lynuxworks.com/virtualization/hypervisor.php

OKL4 Open Source Microhypervisor (my favorite, naturally)

http://www.ok-labs.com/solutions/secure-hypercell-technology

Perseus Security Architecture (more recent & uses TPM)

http://www.perseus-os.org/content/pages/Architecture.htm

Turaya Security Kernel (Perseus applied; sold by Sirrix)

http://www.trust.rub.de/media/ei/attachments/files/2009/02/OSS_chap10_Turaya.pdf
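To make the eCommerce demonstrator’s “sign what you see” flow concrete, here is a toy sketch (my illustration only: a real system separates these pieces with separation-kernel IPC rather than Python function calls, and unlocks a stored private key rather than deriving an HMAC key from a password):

```python
import getpass
import hashlib
import hmac

# --- untrusted partition (browser in a Linux VM) proposes a transaction ---
def untrusted_propose():
    return {"payee": "ACME Corp", "amount": "499.00", "currency": "EUR"}

# --- trusted partition: display, confirm, sign; browser never sees the key ---
def trusted_view_and_sign(tx):
    print("You are about to sign:")
    for k, v in sorted(tx.items()):
        print(f"  {k}: {v}")
    if input("Type YES to approve: ") != "YES":
        raise SystemExit("rejected by user")
    password = getpass.getpass("Signing password: ")
    # Stand-in key derivation; a real design would unlock a stored private key.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
    message = "|".join(f"{k}={tx[k]}" for k in sorted(tx))
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

signature = trusted_view_and_sign(untrusted_propose())
print("signature returned to the untrusted partition:", signature)
```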

Nick P December 13, 2009 7:36 PM

I’ve recently been looking more into high assurance development methodologies. I’ve found certain techniques that can improve reliability or security, but I’ve also found that there is still no standard way to produce provably correct (e.g. EAL7) software. Every practical project is both an application of high assurance methods and an experiment to see if they work. If anyone wants to see a cost breakdown of a high assurance OS development, they should look for the paper “Cost profile of a highly assured secure operating system.” This details the LOCK project, which attempted to design and verify an Orange Book A1-class security kernel. It ended up in a real product that’s been used for years. It’s interesting to see the costs involved and which bug-finding methods produced the most results and why.

After looking at all I have, I think I will still try to apply formal methods to various aspects of the software I build. The methodology I’m building for medium assurance app development includes these components so far: strong requirements gathering, with prototypes and feedback used to ensure accuracy; formal and informal specification and requirements modeling; use of a formal or functional language for an executable (and provable) spec; modules with well-defined interfaces; use of safe, verifiable coding (like SPARK Ada, MISRA C or OCaml); developing spec and code in parallel to ease the correspondence proof; static analysis tools on implementation code; intensive unit and usage testing. So far, it seems these components alone could produce something that’s highly robust, but I’d only promise medium robustness. I also think several different modeling tools, specification styles, and languages should be used to prevent inaccurate proofs caused by assumptions of the tools.

Clive Robinson December 14, 2009 5:49 AM

@ Nick P,

“I’ve found certain techniques that can improve reliability or security, but I’ve also found that there is still no standard way to produce provably correct (e.g. EAL7) software.”

I’m not sure “provably correct” is possible 😉

The methodologies have two basic paths,

1, Apply a formal logic system.
2, Apply a mitigation strategy.

We know that the 1st can only take us so far, but not 100% of the way, due to the implications of Kurt Gödel’s incompleteness theorems.

The 2nd is a case of known-v-unknown. You can only prevent that which is a known problem (known-knowns) or in a broadly similar class of problem (known-unknowns). Which leaves what is yet to be discovered (unknown-unknowns)…

So there is no “provably correct” system that is capable of covering “all cases” 8(

So what to do?

Well, as far as I’m concerned, “borrow from other fields of endeavour” and use what is available in a conservative way, trying to mitigate any non-transferable axioms.

And this is the real problem: “fundamental assumptions” that are “stated” (axioms) or unstated due to our current “physical world” perception limitations.

We live in the physical or tangible world, where our observations have given rise to what we call “the laws of nature”. In reality they are just mathematical models that may or may not be correct (think of Newton’s laws of motion corrected by relativity, etc.).

Mathematics is not an “exact science”; it is not a science at all. It’s a (sometimes) self-consistent series of rules based on some axioms that have been known to change from time to time (and are expected to change further).

Which is why Stephen Wolfram’s book “A New Kind of Science” caused such a reaction. He believes that mathematics has its limits as a modeling tool and thinks we should be looking at “cellular automata” to describe the functioning of the universe, which kind of supports some of the views of Seth Lloyd (universal computing) and Roger Penrose (quantum mind).

What it boils down to is that everything we can see or touch is tangible and constrained by the basic laws of physics (of matter/energy/forces).

Information, however, is intangible and in its true state has no physical embodiment. What we perceive as physical is actually the mechanisms by which information is being stored (state of matter) or transmitted (movement of matter/energy under a force).

This gives rise to two problems. The first is that we are preconditioned in a very, very fundamental way to the axioms and unstated assumptions of the physical world and its implications, which colours our perspective.

The second is that, through our “coloured perspective”, our mathematical models nearly all have underlying assumptions that are based on sometimes hidden, fundamental physical attributes.

So we have few models to borrow that are not in some way “suspect” when applied to information.

To be as detached about it as possible: the only real handle we have on information is Claude Shannon’s “entropy”, which is a measure of “possibility”. An attribute of possibility is “chance”, which has “states” with attributes of “probability”. Another attribute of possibility is combinations, etc., of which there are some interesting bits of math.
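For concreteness, Shannon’s entropy of a distribution is H = −Σ p(x) log₂ p(x); a small sketch (my illustration) estimating it from byte frequencies:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per symbol, estimated from the byte frequencies."""
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy(b"aaaaaaaa"))        # 0.0 -- no surprise at all
print(shannon_entropy(bytes(range(256))))  # 8.0 -- every byte equally likely
```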

However, probability brings us around to statistical processes, which is where we have to be extremely cautious (i.e. there is no reason why the bell curve of physical processes should apply to information, etc.).

But it also gives us a clear warning that the science of information could be closely related (model-wise) to quantum events…

None of which appears to help until you consider that “Quality Assurance” is about controlling possibility. And likewise safety engineering is about managing risk.

Which is why my viewpoint is to borrow heavily from quality and safety engineering processes for doing security. BUT, importantly, sanity-check for unfounded assumptions.

That is a process that is also oddly similar to dealing with the design of “cellular automata”, which can be considered as “Programs Verifiable In Controlled Environments” (ProVInCE, to coin an acronym 😉 or state machines on steroids 8)

So having come up with a name, now we need to develop the methodology.

As I said, quality and safety engineering have closely related and useful frameworks and processes. And as you know, I’m in favour of segregated state machines with controlled interfaces as a security engineering design methodology.

However, as you noted in the past, some of the constraints I apply are a bit onerous. Which makes me think that “cellular automata” might resolve some of the issues.

That leaves the issue of how to get from the broad-brush-stroke design down to the cellular automata rule set.

As you have noted, there are “formal methods” etc. to do this.

Thus I think we have the basis of a better process for doing security engineering; we just need to think more on it 8)

Nick P December 14, 2009 1:26 PM

@ Clive

Yes, this seems to be what my research has shown. “Risk reduction” and reducing “possibilities.” That’s about all modern assurance methods cover. There are also critics who say that truly formalized requirements are impossible because of the complexity of defining both behavior and the environment in which it happens. Research has shown, though, that having the users formalize requirements in an easily-learned spec language helps on this front. Personally, I think we have the tools to make EAL5-7 software affordably today, even if they don’t 100% prove something. I would like to see more exemplar projects like NSA’s Tokeneer, but for high assurance software with modern tools. In the commercial world, we see tools like Perfect Developer, GNAT Safety Critical Ada, and Microsoft’s SLAM verification toolkit producing real results. There are open-source and free equivalents for most of these. Regular developers just need to know how to use them for best results and that’s where government-funded projects come in. Results must be published, though, or much work is lost (see LOCK).

Interesting that you brought up cellular automata. I was aware of them, having been an artificial intelligence buff in the past. I was fascinated by their emergent properties: a few local cells, following simple rules, produce complicated global behavior (including computational circuits). As far as I know, turning requirements into a CA rule set, or even proving correspondence, is a field in its infancy. Although, you drew an interesting parallel between FSMs and CAs. I bet we could reuse some of the math and tools that verify FSMs in whatever eventually verifies CAs. I like CAs because they are simple enough to support in hardware (e.g. FPGAs), inherently parallel, and quite resilient to failure. Maybe your “ProVInCE” method has a place in the future, but today I don’t see much of any practical use for CAs. Neural networks started out as an academic exercise that later gave excellent results in many practical fields, so maybe we shouldn’t give up on CAs. Btw, you implied that there is a mathematical way to verify (to some degree) CAs. Is this new research, or what were you referring to? (I haven’t looked into CAs in a few years now, so I may be behind.)
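For a flavor of those emergent properties, an elementary cellular automaton fits in a few lines. This sketch (my illustration) runs Rule 110, which is known to be Turing-complete despite its trivial local rule:

```python
RULE = 110  # the rule number encodes the next state for each 3-cell neighborhood

def step(cells, rule=RULE):
    """Advance one generation; cells is a list of 0/1 values, wrapped at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```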

On the topic of verified software, I’m looking at it from a few different angles. We’ve already discussed modularity and limiting inputs and outputs. I was thinking of an FSM model where a module can be in a few states: looking for (and validating) input; processing; output (maybe in a specific format). The processing step would be the real functionality and could be defined via functional programming, a rules-based approach, or something else. I could use whatever technique suited the module best, then use whatever model-checker or prover supported that technique best. There’s no reason I have to do the whole system in one language or tool. I figure I can model and implement components in many different languages as long as the interface is simple, predictable and consistent among all implementations. Aside from overcoming the limitations of using one tool, this also allows parallel implementations and proofs of different components. The system as a whole, usually some invariants, can then be proven using the component-level proofs. Since mathematics is functional, I keep thinking of doing the high level in a pure functional language, like seL4/L4.verified did. This makes proofs simpler.
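A minimal sketch of that validate/process/output module shape (my illustration in Python; the squaring step is a stand-in for the real processing function, and in practice a provable language would be used):

```python
from enum import Enum, auto

class State(Enum):
    AWAIT_INPUT = auto()
    PROCESS = auto()
    OUTPUT = auto()

def run_module(raw: str) -> str:
    state, data, result = State.AWAIT_INPUT, None, None
    while True:
        if state is State.AWAIT_INPUT:
            if not raw.isdigit():          # validate before doing anything else
                raise ValueError("rejected at input gate")
            data, state = int(raw), State.PROCESS
        elif state is State.PROCESS:
            result, state = data * data, State.OUTPUT  # stand-in "real" function
        elif state is State.OUTPUT:
            return f"result={result}"      # fixed, predictable output format

print(run_module("12"))  # result=144
```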

fung0 December 15, 2009 11:59 AM

Great post, as usual. However, you’re WAY wrong about enabling auto-updates. First, they’re increasingly used by publishers (e.g. Microsoft) to force anti-features (e.g. WGA) on users. Second, they open your system to untested and often buggy ‘1.0’ software… possibly with new and less-understood vulnerabilities of its own. Third, the update process itself carries a certain risk. Updating your OS is akin to brain surgery conducted in your home by not-necessarily well-meaning amateurs working via long-distance remote-control. Possibly worthwhile if you have a life-threatening tumor; to be avoided in all less-urgent cases.

Updates: maybe. AUTO-updates (in fact, ‘auto’-ANYTHING)… just say no.

David December 15, 2009 1:08 PM

@Jeffrey: I’m not claiming that one OS is inherently more secure than any other, but when I’m talking about what people can do to secure their own personal systems I will advocate what appears to work in the real world. In reality, MacOSX and Linux systems will not be as vulnerable to most of the malware going around. This is partly due to popularity, and partly because of cultural and backward-compatibility reasons.

Similarly, while I don’t like the basic idea of anti-virus software, and distrust automatic updates, I’d still suggest AV software (I make sure it’s installed and up-to-date on all the Windows boxes at home), and believe automatic updates are the best choice for the average user (my son’s computer is on auto-update).

Rob Lewis December 16, 2009 8:07 PM

@Nick P,

“Maybe it’s just me, but I think the world needs high assurance.”

It’s not just you, but so few security practitioners truly understand high assurance, and according to Spaf, new recruits seem to know less and less about its roots. High assurance is the right goal, all right, but it needs a framework that is so usable and operationally manageable that it becomes disruptive, not limited to just selective implementations. I think that MILS has utility, but it is a compensatory response to the inability to make MLS scale across domains. Do you see it ever being used with SMBs?

Now about sturdy foundations. When we get them, how many generations will it take to replace the status quo?

I’m not a techie, just a keen observer. Some work we do might challenge your assumptions about how to achieve sturdy foundations. How about an injectable technology that adds high assurance (separation kernel and self-protecting reference monitor capability) to existing systems? What is more, it uses existing DAC objects as keys for its rule set, so high assurance can be pushed across domains.

Here is a link to add to your collection:

http://www.trustifier.com/pdf/trustifier-architecture.pdf

@clive,

This applies to your reply as well. A trusted (high assurance) production server will resist that rootkit in the first place. Furthermore, since it enforces operational privilege and is extremely granular, it is an additional layer of defense against the enemy within.

Clive Robinson December 17, 2009 7:51 AM

@ Rob Lewis,

I’ll have to look at the PDF you link to when I’m out of hospital as this little mobile phone only renders “some” PDF text and little else.

Nick P and I have been having a chat for a while about various bits and bobs to do with various aspects of high assurance, high availability and high reliability.

And extra insight would be very welcome.

The problems that we see are not entirely technical as “acceptance in the market place” is going to be the big hurdle to jump.

The “commodity hardware/software market” has, for as long as I can remember, been driven by “specmanship” of the “bang for your buck” variety from the marketing departments, etc. It is probably the clearest indicator that we are still in a very immature market stage (a bit like the 1800s-era “patent remedies” market from which Bruce gets his snake oil).

The simple fact is we have a huge amount of CPU horsepower just to update the screen in less than the blink of an eye, etc.

This leads to a design methodology to “efficiently” support the CPU, for both the hardware and consequently the software.

Unfortunately, the side effect of the “efficiency” is “side channels”, which really are our current big threat to real (as opposed to theoretical) information security in an otherwise secure design (theoretical or physical ;).

Further, the “efficiency” also makes any given system “transparent” to side channels of a bandwidth lower than the system latency.

This makes eliminating or locking down side channels difficult at best in a “commodity hardware” architecture.

The conservative method of dealing with side channels is “segregation” of “state machines”, with all states known, that also fail safe. The segregation is a basic pipelining technique where the inputs and outputs are clocked; this limits the side-channel bandwidth to the usual Nyquist limit (1/2 clk). Further, the data should “be known” in some manner to limit the potential for “data in data”.
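As a toy illustration of the clocking idea (my sketch, not Clive’s design): results leave the box only on a fixed tick, so data-dependent timing inside cannot modulate when outputs appear, which bounds the timing channel’s bandwidth:

```python
import queue
import threading
import time

TICK = 0.1  # seconds; the released bandwidth is bounded by the tick rate

outbox = queue.Queue()

def worker():
    """Produce results with data-dependent (i.e. leaky) internal timing."""
    for job in range(5):
        time.sleep(0.01 * (job % 3))  # processing time varies with the data
        outbox.put(f"result {job}")

threading.Thread(target=worker, daemon=True).start()

# The clocked gate: at most one message leaves per tick, never in between.
deadline = time.monotonic()
for _ in range(5):
    deadline += TICK
    msg = outbox.get()                               # wait until a result exists
    time.sleep(max(0, deadline - time.monotonic()))  # hold it until the tick
    print(f"{time.monotonic():.2f}", msg)            # observable timing is uniform
```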

Obviously, although this can be done on commodity hardware, the software has to have all sorts of “extras” to limit side channels and data-in-data channels.

It is this area that formal methods are needed for, from the high-level design all the way down to the “gate level”.

We have tools to do some of the bits, but no end-to-end tools currently.

In reality what we need is a framework to plug methods of different types into. And it should also allow different methods to produce output that can be functionally compared (differences usually highlight problem areas).

But most of all it needs to be a proper “engineering” approach based on sound science, etc.

Too much software production is the ad hoc “experience knows best” work of the artist or artisan, as evidenced by the use of “patterns”, etc.

Rob Lewis December 17, 2009 3:46 PM

@Clive,

You will find the pdf interesting then.

As a teaser, we had a very interesting experience with a leading DOD Red Team this summer, using commodity hardware, and protecting against insider attacks.

Hope your recovery is fast.

Nick P December 17, 2009 11:21 PM

@ Rob Lewis

Finally come to our forum, huh? I’ve seen you on many forums talking about this product, but hadn’t analyzed it thoroughly. I can already say it will never be high assurance: too big to formally evaluate (nice size, though); runs fully in kernel mode; bolts onto low assurance code also in kernel mode; uses access control models that have rarely worked in the past. I can’t even be sure it offers a trusted path, but I’ll save that for later. I do like its design and think it has potential for medium assurance. I particularly like how it’s seamless and how most functionality is isolated into modules reusable across OSes. This allows us to analyze it in isolation and then look at the interactions. I promise to look at your documents without bias and try to give them an honest review. I’d also like to know more about the Red Team exercise and the DIACAP certification level, although pen tests are of limited use: “testing can only prove the presence of bugs.”

On the subject of MILS, it was partly created due to MLS shortcomings. Some of these will still be in your product as well: the access control models and how they work in the business setting were (and still are) the problem. Capability systems like EROS and DARPA Browser solve this problem with intuitive POLA application, but propagation must be controlled and revocation definite and that’s tricky. MILS is more useful than that, though. MILS doesn’t just bolt security onto an OS: it’s secure from the ground up. Everything about it is designed for this. It’s basically just an enforcement mechanism for whatever policy you want.

The separation kernel (SK) is the only thing in kernel mode. The rest (even drivers & security layers) are all usermode processes and their activities are carefully regulated by SK and hardware. This isolation and some basic middleware form a solid foundation for high assurance software. The current scheme is SK & middleware with legacy OS or API paravirtualized and security-critical apps in isolated partitions. You asked about usefulness to SMB’s. Your competitors have used MILS to provide seamless solutions to these problems: separate work & personal apps on business laptops; undefaceable web servers; security software that can’t be turned off by malware; secure web browsing; confidentiality of crypto stuff (like VPN); prevent keylogging w/ true trusted path. Does Trustifier even have a trusted path that prevents keylogging or spoofing? MILS does that already and the assurance level can equal that of the SK itself in embedded or server systems.

All of the above is accomplished in a way that only depends on a few components to achieve security goals. Tiny TCB. Some formally verified. One stamped EAL6+ by NSA (Integrity-DO178B), one in EAL6+ evaluation (VxWorks MILS), one selected by Navy for high assurance platform (LynxSecure), one in millions of phones w/out failure (OKL4), one in more and more critical places (PikeOS), one endorsed by European bodies (Turaya Security Kernel), and one verified down to the code (seL4). While your company talks a lot about Red Teams and recommendations, I see very little confirmation for most claims and not even in Common Criteria evaluation. The other products I’ve mentioned have been evaluated to extremely high standards (FIPS lower levels are weak) and used in many highly critical systems. One survived NSA pentest w/ source access and a few have proofs that have been checked by reputable sources or published results.

Most of the high assurance claims for Trustifier on the web site also seem a little vaporous. There are tons of buzzwords (“neuro-fuzzy” and almost every protection model, like Biba) and many high assurance claims. I confirmed Red and Blue Teams worked on it, but I still haven’t found all the specifics or confirmed that it passed all tests. Of course, DOD has certified plenty of weak or medium solutions, like Trusted Solaris, and I don’t entirely trust their review. Substantiation by NSA pentesters, independent security engineers, etc. would be nice. So far, the only source for most Trustifier claims is Trustifier’s own company, and it has no easily located public track record (that I’ve seen) in high assurance (high-value assets defended against sophisticated attackers with tons of time and money). Maybe you guys are just new or low profile, but this many marketing claims with so little data or peer review always bugs me. Again, I’ll gladly look into the technology, try to find ways to use it to improve systems, try to find flaws, etc. with an objective lens. My main concerns with using this product in medium assurance systems would be alleviated if you posted specific details and links to evaluation data and peer review from reputable outside sources, esp. all the DOD stuff.

Note: Please don’t take offense at any of my blunt statements. To be clear, they are not attacks on your product. They are merely the kind of extremely skeptical probing questions and concerns that any product must be able to survive to claim high assurance (or even medium). I’m always excited about innovation in the trusted systems space and hope your product meets muster. I even know of a very recent technology that could prevent attacks on your meta-kernel: it’s tiny (1 KB), uses Intel VT, is formally verified, requires very little modification to the OS, and is already Linux-compatible. Maybe if you can give me confidence in your software, I can help you create more. 😉

Rob Lewis December 18, 2009 12:17 PM

@Nick P,

I welcome blunt statements and probing questions in the spirit you mention, which I agree are necessary, if they lead to meaningful discussion, and no offense would be taken.

I think you will find that our product stands up pretty well to your concerns, but the conversation would be better carried on off-line. I can provide you more material and background as a start.

It would be worth your while, I think. Trustifier was designed with formal design elements for EAL 6 certification. As for the Red Team effort, our defense was complete, but there was a twist: we gave them root privileges, and they were still unable to open a target file with the crown jewels.

If you agree this is the way to go, email my first name at trustifier dot com

Nick P December 19, 2009 11:03 PM

You take the skepticism better than most. 😉 While the use of root privileges means nothing (all MAC mechanisms enforce on root), the design, testing and preparation for evaluation make the product worth considering. I intend to talk to you offline and have added your email to my contact list. Unfortunately, I’m so swamped with work over the next few weeks that I’ll have to postpone our talk. Since you’ve been doing this for nearly seven years, I don’t think a delayed conversation would be a problem for you. You’ll probably be there when I find the time, yes?

Rob Lewis December 22, 2009 1:11 PM

@Nick P,

I would relish peer review and engagement. My partner has simply shown little interest in “playing the game”. My belief is that he has created something of value, especially in terms of implementation. At your convenience then.
