Comments

Clive RobinsonJanuary 21, 2010 3:21 PM

From a skim read it covers the basics reasonably well.

It'll take a little longer read and a think to say what should be there that isn't there 8)

BF SkinnerJanuary 21, 2010 3:39 PM

Someone call the PM!

Clive is in trouble!

2 sentences? Clive. Blink twice if you're under duress.

Rich WilsonJanuary 21, 2010 6:07 PM

There's a nod to server languages besides php, but you really have to know your own environment, be it LAMP or J2EE or asp.net/IIS or anything else. The best web code can be undone by a poorly configured server. And a well configured server can be undone by poor network security. It really is a chain of onions.

And yes, that has to be the shortest comment I've ever seen from Clive! I think he was just in a hurry to get first post :-)

Carlo GrazianiJanuary 21, 2010 6:42 PM

That wasn't Clive. It was a cross-site request forgery attack on Bruce's blog, posing as Clive.

Who Mourns For Tor?January 21, 2010 8:18 PM

Sorry for offtopic, Bruce, I just thought you and others may want to know about this tidbit:

http://archives.seul.org/or/talk/Jan-2010/...

"If you use Tor, you're cautioned to update now due to a security breach. In a message on the Tor mailing list dated Jan 20, 2010, Tor developer Roger Dingledine outlines the issue and why you should upgrade to Tor 0.2.1.22 or 0.2.2.7-alpha now: 'In early January we discovered that two of the seven directory authorities were compromised (moria1 and gabelmoo), along with metrics.torproject.org, a new server we'd recently set up to serve metrics data and graphs. The three servers have since been reinstalled with service migrated to other servers.' Tor users should visit the download page and update ASAP!"

http://slashdot.org/submission/1156116/...

Ouch!

pdf23dsJanuary 22, 2010 7:04 AM

The only reason to upgrade Tor, from reading that e-mail, is to get the addresses of a couple of new directory servers. The Tor code and protocol themselves were not compromised in the least. In fact, even if the attackers had been attacking Tor (instead of just using the compromised servers for bandwidth), the use of git would *probably* have prevented any surreptitious changes to the Tor source. (Probably, because they're also using svn, which isn't as secure. I'm not sure how they're using them, so I couldn't say.) And as far as I can tell, malicious changes to the source would have been the only possible attack that could have affected the security of Tor users, short of breaking the Tor protocol. The compromised servers were, AFAIK, not Tor nodes of any sort.

pdf23dsJanuary 22, 2010 7:06 AM

Oops. Two of the machines were Tor directory servers. But the directory protocol is tolerant of a small number of compromised directory servers.

AppSecJanuary 22, 2010 7:46 AM

Pretty good article for an "intro". The focus on PHP would hopefully not be lost on those who do Java, C#, Ruby, Grails, or whatever language of choice.

One issue I take with most vulnerability documentation is the "classification" of issues. SQL injection, command injection, and XSS are essentially the same attack; the only difference is the destination of the data element. The concept is the same: unvalidated input is being sent to an interpreter/processing engine, which produces unintended results.

I think part of the problem with getting web security right is people's inability to use the "K.I.S.S." method. We want to make everything sound so complicated, when in reality it really isn't. It just requires effort -- but effort doesn't necessarily equate to complexity.
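AppSec's point, that unvalidated input reaching an interpreter is the single underlying flaw, can be shown in a few lines. Here is a minimal sketch in Python using sqlite3; the table, column names, and data are hypothetical, chosen only to illustrate the contrast between string concatenation and a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is concatenated into the SQL text, so the
# interpreter sees attacker-supplied *syntax*, not just data.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
# The WHERE clause became: name = '' OR '1'='1' -- every row matches.

# Safe: a parameterized query keeps the input out of the SQL grammar;
# the driver passes it purely as data.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
# The literal string "' OR '1'='1" matches no user, so rows is empty.
```

The same shape applies to command injection or XSS: only the interpreter at the far end changes.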

Clive RobinsonJanuary 22, 2010 10:05 AM

@ B.F.Skinner, Rich Wilson,

3 quick blinks, 3 slow blinks, 3 quack blinks.

(Yup, the 'a' is the hidden message 'a'uthenticator 8)

All done in time to the "Last Post".

The "Rider of Blinky" has been close for the past couple of days, but the anti-Bs have got a grip on it for now, and the fresh boiled lobster look is dulling.

@ AppSec

"Pretty good article for an "intro". The focus on PHP would hopefully not be lost on those who do Java, C#, Ruby, Grails, or whatever language of choice."

You indirectly highlight a problem I've seen more of recently, which is how to explain a software problem.

Once upon a time, when I was a not-so-wee man, we used pseudo code à la Knuth or whoever (even ASN.1 if we had to).

But to get at the people who are most likely to need the advice, "you have to keep it real", but you also "have to keep it short and sweet".

PHP is, let's be honest, a dog's dinner of bits of the banquets of older languages such as C.

So the author gets kudos not just for the article but for the way he gets it across.

What the article does highlight is another, altogether more troubling, security problem: a monoculture of applications, their interfaces, and code reuse in the resulting end-user applications dependent on them.

To understand the way the issue works we need to step back in time fifteen or so years to the mid 90's and see how it all played out.

On one hand, our problems with the web were just starting: the lack of "state" led to a series of kludges with major security implications, due to the "copying" of "example code" to get around the issue.

HTTP was not designed to handle state (nor can it still). The browsers supporting it were not designed to support applications either.
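The kludge that stuck for bolting state onto a stateless protocol is the session cookie. A minimal sketch in Python (all names here are hypothetical, and a real deployment also needs session expiry and the Secure/HttpOnly cookie flags): the server hands the client only an opaque, unguessable ID and keeps the actual state server-side, unlike the early cut-and-paste schemes that stuffed guessable or meaningful values into the cookie itself.

```python
import secrets

# Server-side session store: HTTP itself carries no state between
# requests, so the server keeps it here, keyed by an opaque ID.
sessions = {}

def handle_login(username):
    """Returns a Set-Cookie value; the state stays on the server."""
    sid = secrets.token_hex(16)  # cryptographically random, unguessable
    sessions[sid] = {"user": username}
    return f"session={sid}"

def handle_request(cookie):
    """Each (otherwise stateless) request looks its state up by cookie."""
    sid = cookie.removeprefix("session=")
    return sessions.get(sid)  # None means no valid session

cookie = handle_login("alice")
assert handle_request(cookie) == {"user": "alice"}
assert handle_request("session=forged") is None  # a guessed ID fails
```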

On the other hand, the majority OS of the time (MS-DOS) could not support multiple applications either.

I need to add a note here that I'm not bashing MS (much as I would like to ;) because this was, and still is, an industry-wide problem, and MS were the major problem at the time but are no longer.

MS tried to resolve the DOS woes with an "application on top" called "Windows" to provide multi-application support. After two very woeful previous attempts (anyone remember using Windows 2?), the third version started to get traction, due in the main to the hardware actually being able to support the idea.

However, it is important to remember that MS saw the "shining path" as the isolated (except for data) desktop, each running MS apps.

MS then, under competition from the likes of Ray Noorda at Novell, moved into the network application world (Win 3.11, NT 3.51/4.0), but only for business users to access data. However, that irritating "internet thingy" was beginning to become very much used by home users, and MS played catchup/spoiler with IE.

However, the Internet, and especially HTTP, kept moving from strength to strength.

MS, now under anti-competition threats, moved IE into Windows.

Which was (and still is) a major, major security issue. IE was effectively the new "multi-application desktop". But unlike the underlying NT desktop, which had some working security measures to isolate applications' memory space and threads, it had none.

A number of people, myself included, pointed out that OS security was fairly irrelevant when the apps were all running under IE, which had become the "new OS, without security".

You can imagine what a surprise it was to wake up and find in the breakfast bowl not the old, tired, tasteless and stodgy MS chaff but a nice new lean shiny Chrome, which architecturally addressed many of the IE issues (and can solve many of the Internet app issues as well, just via segregation).

So historically, security has risen behind the applications, not with them. And many of the multitasking environments that apps run in have no security of any worth.

The apps have moved from being relatively benign objects to being outright attack objects, getting into the heart of where the users keep the all-important data they are trying to use (and optionally protect).

But the next stage is afoot, and that is common apps with poor APIs, from the likes of Adobe, MS and many others, some as helpers (plugins) and some as presentation extensions (applications).

For instance, a minor change in, say, Flash can have a critical security effect not just on the user but on the web apps as well.

So call it my "no brainer" prediction for 2010: the industry will conclude that the (so far) cloud non-event is stalled due to application security issues.

And where are all these issues going to come from?

1, Poorly designed interfaces.
2, Insecure example code.
3, Out of date code being re-used.

But ultimately they all derive from "market imperative". That is, the Internet has removed geospatial limitations; it is now all about time to market to get the founder's market share.

@.@ I hope I have made sufficient effort to show I'm not under durance vile (or medical as it was) today 8)

MarkJanuary 22, 2010 11:48 AM

This was a good intro-level overview. I found it interesting that the author seems to be addressing web developers on the newbie end of the spectrum, and he seems to take for granted that they don't know a thing about the HTTP protocol or how browsers actually work.

I think he's right; I've met plenty of web developers who lacked any knowledge of, or even interest in, the basic technologies their applications rely on. I work with a number of folks who develop for a major application suite that has a sort of pseudo-code, from which server-side Java and client-side Javascript are built. The software being a web application, I assumed the developers would at least have a passing knowledge of Javascript, but that's not the case. Many of them do not even make a conceptual distinction between code that runs on the server vs. the client.

I think this phenomenon probably has a lot to do with the sorry state of affairs we're in.

BrianJanuary 22, 2010 4:49 PM

Honestly, that page seems too remedial to me, yet I know that people who don't understand any of it are deploying web sites... It kind of scares me, really...

Nick PJanuary 23, 2010 1:19 PM

@ Rob Lewis

Yes, I remember you. I believe, though, that your DoD claims are somewhat misleading. They probably refer to the "Sai" file server, which is a particular application of Trustifier and one that traditional MLS already handles quite well. Two things: Trustifier surviving in such a restricted, well-understood setup doesn't imply it would protect a web server as well, and evaluations of the Trustifier meta-kernel do not mean the "ryu" web security solution is just as secure.

In order to truly assure Trustifier, the ryu solution would have to defend against DoD Red Teams, and taking down a web application is usually easier than a cross-domain solution; cross-domain solutions are easy to get right in comparison. I still don't see that DoD report on Trustifier's web site, which is covered in marketing stuff. If you would post it here, I'd love to see the results, since the file server alone may be of use to many companies.

Eric MOctober 29, 2010 7:14 PM

On Trustifier, I must point out that you are in error.

Red Team testing and reporting was done with the original Trustifier, many months BEFORE ryu and sai were even conceived as separate product offerings targeted to the specific interests of various markets.

I strongly suggest that you at least try to perform due diligence and verify facts before venturing misleading info ... and YES, the Red Team did NOT break through Trustifier protection, even WITH superuser privileges, because the role-centric security rules did not let them, period!!! The Trustifier-protected system was NOT subverted or corrupted. Think on that! A top-dog Red Team, stopped dead within their own penetration timeline targets! Sounds like something that should be looked at closely, doesn't it?

There is NOTHING else like it out there.

Hence the exclusivity among those selected for the recent SINET presentations to security specialists.

If they are looking at it that closely, maybe you should look closer too.

Nick PJanuary 25, 2013 11:35 AM

IN *MY* DOGHOUSE (again): Trustifier Inc.

@ Eric M

I've given your company 3 years to prove itself out. Let's see what you've accomplished. I just took a look at the current state of Trustifier. The web site is as strange and marketing-speak loaded as ever. The supposedly top notch, cost-effective sai cross-domain solution is gone. Ryu remains, although in the cloud. Hero is a new social media firewall or some crap.

"Red Team testing and reporting was done with original Trustifier, many months BEFORE ryu and sai were even conceived as separate product offerings, targeted to specific interests of various markets. "

I just did plenty of Googling. The source of all information about surviving the Red Team is Trustifier itself. All PDFs, imagery, claims, etc. are from Trustifier. In contrast, evaluators typically have pages or press releases on their own sites for products passing CC, S C&A, TS C&A, CIA DCID, and certain independent evaluations. I can't find jack on Trustifier from any reputable 3rd party.

One trustifier PDF references an application firewall evaluation featuring a bunch of products. It provides an interesting slideshare link [1]. The PDF says the guy evaluated Trustifier, it outperformed the others, and he just didn't include it in the presentation. Right...

"I strongly suggest that you try at least perform due diligence and verify facts before venturing with misleading info ... and YES, the Red Team did NOT break thru Trustifier protection, even WITH superuser privileges, because the role-centric security rules did not let them, period!!!"

A Red Team running code in a usermode process, with no kernel-mode vulnerabilities [2], couldn't circumvent a MAC policy. That's been common in MAC systems for decades, especially with simple security policies. But that was a "first time" failure of a Red Team, hmm? Quite a few products have passed CC and DOD evaluation without any known flaws from pentesting, especially firewalls; some of these later had vulnerabilities. Many are rated below EAL4, the "certified insecure" (Shapiro) rating.

Orange Book A1-class products survived NSA pentests, covert channel analysis, and showed no kernel (or other) vulnerabilities. These had much more assurance than Trustifier. Additionally, I got the reports from government sources. Let Trustifier do their due diligence and get their "independent" evaluations published by evaluators so we can believe them.

"Hence the exclusivity among those selected for the recent SINET presentations to security specialists."

That's quite a spin. SINET's activities are a combination of fundraising, salesmanship, exploration of innovative ideas, and networking within the industry. They pick 16 possibles every year. Getting picked only means you had the potential to do something, and maybe a good marketing team. That your competition was invited again the next year and Trustifier wasn't might mean something. That I've heard almost nothing about Trustifier in Defense, INFOSEC circles, academia, SC Magazine, etc. says it hasn't gained the respect of the industry in three years.

Trustifier had potential. This was recognized. The company made snake-oil-like security claims. The company talks of independent evaluations that evaluators don't claim happened. It overhypes a security approach that failed repeatedly in the past. It refuses to address criticisms of the security community. Its solutions lack longevity or specific assurance arguments for their claims. All in all, Trustifier MetaKernel had potential, but currently should be treated as crapware. QED.

[1]
http://www.slideshare.net/lbsuto/...

[2] Argus Pitbull makes similar claims with their product. It turns a Solaris (maybe Linux now) system into a CMW. Their product was beaten by a kernel vulnerability one time. Supports my old claim that big kernels undermine "metakernel" and system call hook approaches' security.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..