Jonny November 1, 2011 3:25 PM

Based on the description, I would be a member of the target audience. However, I can’t think of a reason why I should attend.

Nick P November 1, 2011 5:38 PM

Of all the best minds in INFOSEC, I see not one. Additionally, we already have the technology; it's just not being applied. Sounds like another waste of effort, not handling the real problem: incentives.

BF Skinner November 2, 2011 7:55 AM

Actually it might be worth it to hear Clarke and Potter. I’ve heard both speak before.

I don’t know Gosler but it would be interesting to be lectured to on security by the National Labs.

CE November 2, 2011 8:15 AM

It’s part of a twice-a-year get-together of DARPA project managers and researchers to present work and to compare and coordinate efforts. The Colloquium is just the visible part; most of the actual activity happens in project-specific groups in the days before and after.

grymoire November 2, 2011 10:02 AM

The topic is “future directions in cyber security.” It’s an invitation-only event. First you needed to submit a three-page “white paper” that answered three questions (one page per question). The three questions:

  • At present, attackers in cyberspace seem to have the initiative and hence the advantage. What specific technologies should DARPA develop to address the imbalance?
  • Attacks on embedded computing systems have received much attention. What specific technologies should DARPA develop to secure embedded computing systems?
  • If DARPA could only invest in one cyber-security research area, what should that be and why?
Nick P November 2, 2011 12:17 PM


    “I’d hardly call Mudge or Bruce Potter lightweights…”

Neither would I. They’re good at what they do. However, if you want real security, you have to call on the people who have tried to actually secure systems up to and beyond the limits of available technology. People like Schell, Irvine, Bell, Saltzer/Schroeder, Schaefer, Saydjari, Kemmerer, and Karger (deceased), who practically invented the principles of building secure systems. These people also participated in projects up to A1/EAL7, and at least one is still working on these things. Recent notables include Zhong, Zhang, Chhabra et al., the Sandia SSP team, Praxis, the TU Dresden team, and Heiser/NICTA.

    The guys you mentioned are largely part of the “penetrate and patch” game. The other people I listed worked on “correct by construction” type systems. Their assurance arguments & technologies are getting better every year. These people know how to build trustworthy systems. The others are best at finding flaws in inherently insecure systems. It’s a critical difference.

    And I’d further note that the whole idea of “cyberattack” and “cyberdefense” is still sidetracking from the real issue: computer security. Reframe the problem and you immediately have a stack of solutions that are simply not being implemented. The government wants all the benefits of the COTS market with the kind of security assurance of the medium to high assurance market. It’s impossible. They must be willing to make tradeoffs in cost, usability or features if they want increased security and/or immunity to serious online threats. What they’ve chosen remains to be seen.



    Nick P November 2, 2011 12:21 PM

    @ BF Skinner

    All three should have interesting presentations. Gosler, in particular, has a strong focus on intelligence & counterintelligence. Anyone doing counterintelligence almost always has something interesting to say. 😉

    Andrew November 2, 2011 7:05 PM

    Is 75 years experience relevant in the cyber domain?

    What we need is fresh people with cyber domain experience and innovative ideas.

    These people cannot offer that.

    askme233 November 2, 2011 7:40 PM

    I thought this was great: “Attendees with cellphones will be able to text questions to the speakers.”

    Combine that with the last post on cellphone intercept technology and you could have some real fun.

    Why do I see Anon pranking the heck out of that?

    Nick P November 3, 2011 1:15 AM

    @ Andrew

    “Is 75 years experience relevant in the cyber domain? What we need is fresh people with cyber domain experience and innovative ideas. These people cannot offer that.”

It’s a good point. Most of the improvements on the attack side, like P2P C&C and black-market stratification, have come from newcomers and younger people. I’d say what they all have in common is that they’re untainted by both organizational and “old school” thinking. This lets them come up with fresh ideas.

    Clive Robinson November 3, 2011 5:05 AM

    @ Woofle,

    “It appears that someone else has come to a similar conclusion to that of our favourite security guru..”

The El Reg article is basically about insurance underwriting risk.

Unfortunately, that boat sailed long ago, even before Bruce started shifting his position from legislation to insurance.

I thought insurance was a better idea than legislation back in the 1990s, along with having an “Underwriters Lab” (UL) for software and systems.

Well, firstly, I’ve matured a bit and realise that insurance won’t work (I could give the details, but people would complain about the length of the post).

Secondly, in the US, an insurance company got its fingers burnt up to the armpits in a court case. As a result, you can no longer get comprehensive policies there; there are currently well over 100 different types of insurance for ICT risk, all with impenetrable exclusions that make them little more than expensive pieces of artwork (if you choose to hang the certificate on your wall, as you are supposed to do in some parts of the world).

Thirdly, the idea as given hangs on two things:

1. Audit of customer systems.
2. Actuarial data.

The first is going to be prohibitively expensive for the insurance company to undertake, so, as with PCI, auditors will be approved and the customer will pay. This creates a conflict of interest, which has been shown repeatedly (think the banking crisis, the Euro crisis, the crisis prior to SarbOx, PCI, et al.).

The second, however, is the real cause of the problem. Actuarial data is based on the effects on physical items, which assumes the risk is spread in time and geography in a consistent way that makes it modelable linearly. The stop-gap for when this does not happen and the risk goes exponential, as with wildfires, earthquakes, riots, etc., is to externalise the risk, either to “the insurer of last resort”, which is the government, or to “God”.

However, as we know from zero-day attacks, the behaviour is (due to the issues I’ve not mentioned) very much akin to wildfires, earthquakes, riots, etc.

So until the insurance industry develops very, very deep pockets, well beyond the GDP of most nations, insurance is not going to be a realistic proposal except on a very expensive, highly constrained basis.
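That linear-versus-correlated distinction can be shown with a toy simulation (all numbers below are hypothetical, chosen only for illustration): independent failures average out across a book of policies, while a zero-day-style shared event hits every insured system in the same year.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

N_POLICIES = 10_000   # insured systems (hypothetical)
P_LOSS = 0.01         # annual probability of a loss event
LOSS = 100_000        # payout per breached system
TRIALS = 200          # simulated years

def worst_year(correlated: bool) -> int:
    """Return the largest single-year payout across TRIALS simulated years."""
    worst = 0
    for _ in range(TRIALS):
        if correlated:
            # Zero-day model: one shared event breaches every exposed system at once.
            year = N_POLICIES * LOSS if random.random() < P_LOSS else 0
        else:
            # Classical actuarial model: each system fails independently, so
            # yearly totals cluster near the mean (law of large numbers).
            year = sum(LOSS for _ in range(N_POLICIES) if random.random() < P_LOSS)
        worst = max(worst, year)
    return worst
```

Both models have the same expected annual payout, but the correlated model’s worst year is the entire book at once; that tail, not the average, is what the premiums would have to cover.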

    Clive Robinson November 3, 2011 5:50 AM

    @ grymoire,

    “First you needed to submit a 3-page “white paper” that answered 3 questions”

The answers to the questions are actually incredibly brief and are, as Nick P noted, reliant on taking the correct outlook.

    For the first question the answer should be “Do not develop any technologies”. We already have more technology than we know how to use properly.

For the third question the answer should be “Do what is required to answer question 2”. This is because the two greatest threats we actually face are “fallback” and overly specific “standards”; what we should have instead are standards of methodologies forming frameworks.

For the second question the big problem is “persistence of implementation errors”. If you look at something like an electricity meter, its expected lifetime is currently 25–50 years; we don’t have security standards that old. Worse, most of the standards have side effects that render the practical implementations all but insecure compared to their theoretical security (think AES and loop unrolling / cache attacks / timing side channels).
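The cache-attack point can be sketched as follows. This is an illustrative toy, not real AES, and Python cannot actually guarantee constant-time execution; the point being demonstrated is the memory access pattern: a table lookup indexed by a secret byte touches a secret-dependent location, whereas a masked scan touches the whole table regardless of the secret.

```python
# Stand-in 256-entry table; a real AES S-box is a fixed permutation of 0..255.
SBOX = [(i * 7 + 3) % 256 for i in range(256)]

def lookup_leaky(secret: int) -> int:
    # Which cache line this load touches depends on the secret index --
    # exactly what cache-timing attacks on table-based AES measure.
    return SBOX[secret]

def lookup_masked(secret: int) -> int:
    # Read every table entry and mask out all but the wanted one, so the
    # sequence of memory accesses is independent of the secret.
    result = 0
    for i, v in enumerate(SBOX):
        mask = -(i == secret) & 0xFF  # 0xFF when i == secret, else 0x00
        result |= v & mask
    return result
```

The masked version is slower, which is precisely the cost/security trade-off that optimisations like loop unrolling quietly undo.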

Now, currently, most embedded solutions either cannot be upgraded or cannot be upgraded securely. This has a knock-on effect in that other systems still have to work with these set-in-stone systems.

Thus when an implementation, protocol or specification error that can be exploited is discovered, the only options are to “live with it” or “rip them all out”. Neither is a secure solution in either the short term or the long term.

As we know, the “industry” concerned will first go into denial, take legal action against anybody who raises security issues against its products, and pretend it has the solution in hand whilst hoping it will all go away, as it cannot afford to address the issue.

This lack of “upgradability” is due to device purchase price; essentially, embedded systems are extremely price-sensitive and have small profit margins. However, the most expensive part of embedded systems is usually the “installation costs”, which can easily be twenty or thirty times the price of any given unit.

So a mass replacement of embedded systems such as electricity meters is not going to happen…

Thus the solution to the issues is “mandatory standards compliance”, which you can see working with the European Union compliance “CE” and exception “(!)” marks, and, in a more limited form, with the standards required for automobiles in the US that derived from the “Lemon Laws”. This means that all electricity meters would have to meet, as a minimum, the mandated standards, which would stop “the race to the bottom” in the quest to produce something at ever-decreasing cost.

However, the issue then moves to the “standard”. One of the problems NIST and others have in the ICT security area is producing very specific standards rather than frameworks. If you look at the EU CE standards, they are hierarchical, with a broad framework at the top and a less broad framework at the next level, moving down through various “replaceable” specifications that become more specific as you go down the stack.

Thus a standard for the security of an electricity meter (and many other devices, such as insulin pumps, etc.) would have as a requirement the ability to upgrade, in place, the modules containing the base standards in a recognised way.

Importantly, the standard must include a method by which insecure modules are removed permanently, to prevent “fallback attacks”.

The standards should also contain a provision that the systems that connect to the embedded systems likewise remove any module connected with protocols, etc., that have been mandated as insecure or out of date.
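A minimal sketch of what such a provision could look like in code (module names here are made up for illustration): a negotiator that refuses outright any peer still offering a revoked module, which is what blocks the fallback/downgrade attack.

```python
# Hypothetical module registry for an upgradable embedded device.
SUPPORTED = ["crypto-v3", "crypto-v2"]  # our preference order, strongest first
REVOKED = {"crypto-v1"}                 # mandated insecure: removed permanently

def negotiate(peer_offers: list[str]) -> str:
    """Select the strongest mutually supported module, refusing revoked ones."""
    offers = set(peer_offers)
    if offers & REVOKED:
        # A peer still carrying a revoked module is itself suspect; refusing
        # outright, rather than just skipping the weak option, is what stops
        # an attacker steering the negotiation down to it.
        raise ValueError("peer offers a revoked module")
    for module in SUPPORTED:  # walk our list strongest-first
        if module in offers:
            return module
    raise ValueError("no acceptable module in common")
```

The design choice is that revocation is a hard failure, not a preference: once a standard mandates a module insecure, no negotiation path may reach it.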

So, to answer the second question, DARPA should actually get their heads and efforts around developing frameworks of standards that effectively remove the broad categories or classes of attack known to afflict low-cost embedded systems.

    However I suspect that my take on the three questions would not be popular with the powers that be within DARPA so I doubt I’d get an invitation 😉

    CE November 3, 2011 8:17 AM


    I’m attending, as a researcher in one of the DARPA programs.

    While it’s not a called-out topic of the Colloquium speakers, ‘Correct by design’/’Correct by construction’, and formal verification mechanisms are very much part of some of the associated DARPA projects.

    Nick P November 3, 2011 7:53 PM

    “While it’s not a called-out topic of the Colloquium speakers, ‘Correct by design’/’Correct by construction’, and formal verification mechanisms are very much part of some of the associated DARPA projects. ”

I could be wrong, but I’m not expecting any great publicly available results. The best programs of the past and present haven’t produced anything for “us,” just for government. The latest incarnation of the LOCK program (whatever it’s called) and the TCX vaporware kernel are examples. They really just need to hand the “building secure computers” responsibility over to a foundation of private employees, and we’ll probably get better results.

The best results currently come from private companies and academic institutions under private or government sponsorship. SecureCore, SecureMe, TU Dresden’s work, the Perseus Security Framework, NICTA’s L4.verified, many INRIA efforts (esp. CompCert), and the various “verified software repositories” are good examples. Most of what the government has built was shelved, insecure crap labeled “secure”, of limited availability to the public, or unavailable to the public. The public sector just isn’t producing the security ROI the taxpayers deserve.

    That said, good luck on your research. 😉

    Pat Cahalan November 5, 2011 1:17 AM

Nick P sez, “The government wants all the benefits of the COTS market with the kind of security assurance of the medium to high assurance market. It’s impossible.”

    Not impossible, but highly improbable.

    You can secure COTS stuff reasonably, you just have to be pretty draconian about it down to the wire.

    I guess if you’re assuming “user expectations as regards to behavior” and “Internet ubiquity” as a benefit of the COTS market, then I’ll agree with the “impossible” bit 🙂
