The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM • 43 Comments

Comments

Timm Murray May 14, 2008 11:56 AM

Anybody have a link to a version of the counterpoint that doesn’t require registration?

Bruce Schneier May 14, 2008 12:10 PM

“Anybody have a link to a version of the counterpoint that doesn’t require registration?”

Registration is free. And you can lie.

Marcus generally puts his half up on his website. I’ll change the link when I get one from him.

DV Henkel-Wallace May 14, 2008 12:13 PM

Since the word “virus” is so common in the security world I suggest using it for an analogy: it’s not unethical to do medical research that uncovers or even explores weaknesses in the body’s systems (often one doesn’t know which is which). Sometimes this is directed work (“how the hell does HIV get into those darned T cells anyway?”) and sometimes it’s pure research.

Software is simpler, being a smaller human artifact, but large emergent systems have unexpected vulnerabilities. Even if they haven’t been exploited yet, they are cautionary tales for new developers (“Gosh, sprintf() is dangerous, eh?”).

I’m all in favor of it.
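A minimal C sketch of the sprintf() point above; the buffer size and the attacker-controlled input are hypothetical, chosen only to show why the unbounded call is dangerous and why snprintf() is the usual bounded replacement:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical greeting routine with a fixed-size stack buffer. */
void greet(const char *name)
{
    char buf[32];

    /* DANGEROUS: sprintf() writes as many bytes as the format expands to.
       A 'name' longer than ~24 characters overflows 'buf' and can clobber
       the saved return address -- the classic stack buffer overflow. */
    /* sprintf(buf, "Hello, %s!", name); */

    /* Bounded alternative: snprintf() never writes more than sizeof(buf)
       bytes, truncating the output instead of smashing the stack. */
    snprintf(buf, sizeof(buf), "Hello, %s!", name);
    puts(buf);
}

int main(void)
{
    /* Simulated attacker-supplied string, much longer than the buffer. */
    char long_name[128];
    memset(long_name, 'A', sizeof(long_name) - 1);
    long_name[sizeof(long_name) - 1] = '\0';

    greet(long_name);
    return 0;
}
```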

Sparafucile May 14, 2008 12:21 PM

“Anybody have a link to a version of the counterpoint that doesn’t require registration?”

“Registration is free. And you can lie.”

Lie! For an article on Ethics!

Tut. Tut. Whatever next? 🙂

I’m shocked, or my name isn’t Sparafucile!

Philip

Not a Horse May 14, 2008 1:02 PM

Lie! For an article on Ethics!

If lying is unethical, I’ll never be able to sleep with a clear conscience.

Mike May 14, 2008 2:01 PM

What would a really secure system look like? Would it have a strict separation of code versus data, like in a “Harvard” architecture machine? Would it disallow ever storing unencrypted data to main memory or disk?

From my point of view (I’m a programmer/physicist), if the tools allow you to make egregious mistakes, you should change the tools.

I’m really curious if anyone has ever seen a serious, or even back-of-the-envelope, design for a truly secure system, from the hardware up.

If such a thing exists, I’d love to learn!

Spider May 14, 2008 2:04 PM

“Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent–or protect against–those failures.”

You just hit my BS meter. Engineers of all stripes are obviously also required to figure out how things can fail. Everyone likes to think they are special and that their niche causes them to think differently than everyone else in the world, but it’s not necessarily true. People in many other disciplines must have this same failure-based mindset in order to do their jobs. They are not all security engineers. Of course, to be a successful security engineer, you must think that way.

Pat Cahalan May 14, 2008 2:10 PM

A chunk of Marcus’s response, because I think it’s germane:

“One place where Bruce and I agree is on the theory that you need to think in terms of failure modes in order to build something failure-resistant. Or, as Bruce puts it, “think like an attacker.” But, really, it’s just a matter of understanding failure modes–whether it’s an error from a hacking attempt or just a fumble-fingered user, software needs to be able to do the correct thing. That’s Programming 101: check inputs, fail safely, don’t expect the user to read the manual, etc.

But we don’t need thousands of people who know how to think like bad guys–we need dozens of them at most. New categories of errors don’t come along very often–the last big one I remember was Paul Kocher’s paper on CPU/timing attacks against public-key exponents. Once he published that, in 1996, the cryptography community added that category of problem to its list of things to worry about and moved on. Why is it that software development doesn’t react similarly?”

On this particular point, I agree with Marcus more than Bruce: “Show me what you’ve broken” is only one possible metric for evaluating whether or not someone can build something reasonably securely. Nobody can build a secure system. If someone comes up and says, “This is my idea of a secure system” my first question is going to be, “How does it break?” not “What have you broken?” A real security-mindset person will have a long list of ways that their system can be broken, and then a list of reasons why these are regarded as acceptable risks. An idiot will say, “It can’t be broken!”

And someone who says, “I’ve broken this and this and this and this” may still not be able to build something that is reasonably secure. Breaking into 20 different things by identifying 20 buffer overflows shows you know how to find buffer overflows, not that you know how to sanitize your inputs.

kurt wismer May 14, 2008 2:11 PM

if the security mindset looks at how things fail, what looks at how the security mindset fails?

“and it’s still how most modern malware works.”

IF this is true (and i can’t stress that ‘if’ enough, considering how popular social engineering is) it is purely coincidental… the capability of software to do things you don’t want it to is not a vulnerability, nor does it depend on one… the notion that there is an unbreakable link between malware and vulnerabilities is both pervasive and wrong…

Todd Knarr May 14, 2008 2:45 PM

Pat: I’d disagree with you and Marcus. Especially Marcus. Every software engineer needs to think like an attacker. It’s easy to write something that works when presented with valid inputs, but that doesn’t produce secure code. It’s not even enough to think about invalid inputs and write code to handle them. To make it secure you have to look at the code and devise invalid inputs specifically designed to break it, and redesign the code to resist that process. It’s the philosophical difference between default-allow and default-deny, or between “filter this set of known invalid characters and accept anything not invalid” vs. “accept this set of known valid characters and reject anything not valid”.
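As an illustration of the default-deny philosophy described above (a sketch only; the username policy and length limit are assumptions made up for the example), the whitelist version accepts a small set of known-good characters and rejects everything else:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Default-deny input validation: accept only characters known to be valid
   for this (assumed) username field and reject everything else, instead of
   trying to enumerate and strip every character an attacker might abuse. */
static bool is_valid_username(const char *s)
{
    static const char allowed[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789_-";
    size_t len = strlen(s);

    if (len == 0 || len > 32)            /* length limits are part of "valid" */
        return false;
    return strspn(s, allowed) == len;    /* every character must be on the list */
}

int main(void)
{
    printf("%d\n", is_valid_username("alice_42"));             /* 1: accepted */
    printf("%d\n", is_valid_username("x'; DROP TABLE users"));  /* 0: rejected */
    return 0;
}
```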

I’m reminded of a rule NASA had way back when for designing equipment. Design it so it can’t possibly fail. Then assume it will fail and design it so the crew can always recover from the failure. Then assume they can’t recover and design it so they can disable the failed parts and continue the mission. Then assume they won’t be able to continue and design it so they can abort safely.

Andre Gironda May 14, 2008 2:54 PM

“If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research”

What if someone has the skill to do vuln research but chooses to be a programmer for systems with high levels of risk? What if a vuln researcher decides to become an educator for software security? What if he/she decides to become an executive at a security company? Are these unethical paths for someone with this knowledge?

dumbfounded May 14, 2008 3:15 PM

“Given that, is it ethical to research new vulnerabilities?”

If you obtain something legally and take it apart to see how it works (or can be made to fail), then more power to you. I fail to see how, by any stretch of the imagination, this activity per se could be considered unethical. What you do (or perhaps don’t do) with the results of said research is an entirely different matter that may or may not be ethical.

That being said, I think your arguments really miss the point entirely. The quality of information obtained in vulnerability research will never be greater than the technical skill and tenacity of the researcher, i.e. the researcher may well find some holes, but if the attacker has a much higher level of expertise, then what have you really accomplished? How can you ever really know if you have accomplished anything at all? The other problem is that there are much better (faster, cheaper, and more accurate) methods to achieve secure code, such as the use of very strongly typed programming languages (Ada), parameter bounds testing, etc. Before anyone yells about how old/outdated Ada is, keep in mind that it is still used today in extremely critical systems (aviation control and the like) precisely because it can produce provably secure code. Languages such as C can’t come close simply because of their basic design. The final problem with your argument is that knowing how to hack has absolutely nothing at all to do with security or creating a secure environment.

Chris S May 14, 2008 4:23 PM

@Spider: Engineers of all stripes are obviously also required to figure out how things can fail.

I’ve suggested that software testing is to provide assurances that 100% of the required capability is present, and that security testing is to provide assurances that ONLY 100% of the required capability is present.

The engineer building the 10-ton bridge looks for failure modes to ensure that the bridge can carry from zero to 10 tons, under normal environmental conditions.

The security mindset knows that the bridge won’t carry 100,000 tons, but tries to check that nothing bad other than the failure happens when you try and carry that load. Or – to check what happens under a normal 5 ton load if the temperature is -100C.

Grahame May 14, 2008 5:05 PM

I find Marcus quite persuasive that we aren’t making any progress. On the other hand, he’s not persuasive that checking inputs etc will make you secure. Of course you should do that, but it won’t discover security faults in inputs deemed acceptable by design.

Analogy Guy May 14, 2008 5:09 PM

“the bridge won’t carry 100,000 tons, but tries to check that nothing bad other than the failure happens when you try and carry that load”

To extend that to the software world, a security researcher should ensure the software has no buffer overruns if the CPU is hit with a grenade.

Grahame May 14, 2008 5:14 PM

But if you hit a CPU with a grenade, you improve its security rating greatly.

Pat Cahalan May 14, 2008 5:14 PM

@ Todd Knarr

I think you’re missing my point.

“To make it secure you have to look at the code and devise invalid inputs specifically designed to break it, and redesign the code to resist that process.”

Not exactly.

If you’ve designed your code properly, “sanitizing inputs” is done everywhere. “Sanitize your inputs” is a design philosophy. It already assumes “default deny”: in order for the input to be accepted, the input has to look like what you expect it to be, within an acceptable range.

Certainly, you need to go beyond saying, “I wrote code to sanitize my inputs”, because you have to test it, which means you have to sit back and make sure that you’ve considered the types of unsavory input you may be receiving. But that doesn’t mean that you have to go out and show that you can perform an SQL injection attack in order to show that you know how to prevent one.
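For what it’s worth, a small sketch of the “prevent an SQL injection without ever performing one” idea, using SQLite’s prepared statements as an assumed example API; the table and queries are made up for illustration:

```c
#include <stdio.h>
#include <sqlite3.h>

/* Look up a user by name. The user-supplied string is passed as a bound
   parameter, so it is treated purely as data: even an input such as
   "x' OR '1'='1" cannot change the structure of the query. */
static int find_user_id(sqlite3 *db, const char *name)
{
    sqlite3_stmt *stmt;
    int id = -1;

    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    sqlite3_exec(db,
        "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
        "INSERT INTO users(name) VALUES('alice');",
        NULL, NULL, NULL);

    printf("alice -> %d\n", find_user_id(db, "alice"));          /* found    */
    printf("attack -> %d\n", find_user_id(db, "x' OR '1'='1"));  /* no match */

    sqlite3_close(db);
    return 0;
}
```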

What I’m arguing is that you don’t have to actually practice taking “general” things apart to prove that you know how to put things together. Failure analysis can be done while you’re constructing something.

Bruce seems to be saying that in order to know how to build a solid building, you need to show that you know how to blow one up (although I don’t actually believe he means it).

You have to show that you know how a building can fail, and that you’ve compensated for these things. Like Marcus said, fundamentally interesting ways to break things aren’t really that plentiful. If you understand conceptually what those fundamental ways are, that’s more useful than having a demonstrated ability to find a ton of buffer overflows.

Clive Robinson May 14, 2008 5:25 PM

Software would be one hell of a lot more secure if “code cutting” programmers learnt how to deal with exceptions properly.
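Read in C terms, “dealing with exceptions properly” mostly means checking and handling every error return; a rough sketch (the file-copy task and file names are invented for illustration):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Copy one file to another, handling every failure path explicitly.
   Skipping any of these checks is exactly the kind of unhandled
   "exception" that turns a rare failure into silent data loss. */
static int copy_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    if (in == NULL) {
        fprintf(stderr, "open %s: %s\n", src, strerror(errno));
        return -1;
    }

    FILE *out = fopen(dst, "wb");
    if (out == NULL) {
        fprintf(stderr, "open %s: %s\n", dst, strerror(errno));
        fclose(in);
        return -1;
    }

    char buf[4096];
    size_t n;
    int rc = 0;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {   /* a partial write is an error */
            fprintf(stderr, "write %s: %s\n", dst, strerror(errno));
            rc = -1;
            break;
        }
    }
    if (ferror(in))                          /* distinguish read error from EOF */
        rc = -1;

    fclose(in);
    if (fclose(out) != 0)                    /* buffered data may fail to flush here */
        rc = -1;
    return rc;
}

int main(void)
{
    return copy_file("settings.conf", "settings.conf.bak") == 0 ? 0 : 1;
}
```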

But that will not happen as long as the manager paying the code cutter sees no bankable value in dealing with exceptions properly.

Then even if the manager does appreciate the value of addressing exception handling, it is almost guaranteed that the marketing and accounts departments won’t.

Therefore the key stakeholders have to very much understand why you need good exception handling in order to override the M&A departments.

The problem with the key stakeholders is usually that they are dependent on the shareholders, who invariably are only interested in short-term gain, not long-term liability…

Much as people hate legislation, HIPAA, SOX, etc. have done far more than education to change shareholders’ viewpoints.

charles decker May 14, 2008 5:28 PM

I think that’s exactly what he said. If we are going to state that the security mindset “knows that the bridge won’t carry 100,000 tons, but tries to check that nothing bad other than the failure happens when you try and carry that load”

Then the same security mindset should also consider, assuming the grenade scenario, how to make sure that immediate interruption of the process/CPU/functionality can be secured. No?

Anonymous May 14, 2008 8:28 PM

Engineers who deal with physical things also have another tool: margin of safety for materials. Software engineers don’t have that tool, because they don’t have materials.

Chris S May 14, 2008 11:24 PM

@charles decker: how to make sure that immediate interruption of the process/CPU/functionality can be secured.

Quoting myself — “nothing bad other than the failure happens”.

If your CPU gets interrupted (by an interrupt routine?) then your code won’t run. That much is known. But in a well-secured system, that shouldn’t be a security issue.

That said – you might decide that mitigating that risk is too expensive, so you decide to accept the risk instead. But if the vulnerability has been studied – and perhaps an exploit has been designed – then you are going to be better informed about the risk that you are accepting.

Much of this comes back to the economics of security. I don’t actually need to spend much more than enough to make the attacks too expensive to be worthwhile to the attacker. And for that, I do need to think like the attacker. I need to understand how much the successful attack is worth, and how much it costs to carry it out. That is an essential factor in determining where and how to spend on my defence.

Spider May 14, 2008 11:50 PM

@Chris S

Let me repeat myself: engineers of all stripes look at ways things can fail and the consequences of their failure. If you can think about it, they have already thought about it. I am informed by my civil engineer friends that the effect of cold upon load bearing capacity of bridges has been studied, because civil engineers have that mindset.

Think about lawyers. Do you really think they don’t think about the implications of various rulings beyond the most obvious? Do they not find legal loopholes for their clients that wouldn’t have occurred to the authors of the laws themselves? They are also researching the stated rules of the system and finding ways that it can fail. Is the mindset of security engineers really that different?

Sejanus May 15, 2008 1:09 AM

“If lying is unethical, I’ll never be able to sleep with a clear conscience.”

If lying is unethical, most of us guys will never be able to sleep with girls, too 😉

tuomoks May 15, 2008 3:45 AM

A small disagreement – “If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.” – working in and with several vendors and systems, I know many ways to break those systems – not all just software or hardware – and there is a big difference between doing it and not doing it. I personally don’t trust anyone who brags about breaking systems in the old days – you did it once, you may do it again! I’ve seen some of that kind: maybe skilled, but too immature for my taste to trust them.

Now, of course, you have to think like one who does it and it really is not so much technology but a mindset. Technology is easy, manipulating systems (infrastructure, including persons) needs skills many have but never use to break the “precious” IT.

Security is and has always been much more than just IT or technology. IT security is part of a much bigger problem, important today, but it will fail if it is handled outside of corporate security.

Michael Campbell May 15, 2008 5:28 AM

Every “Security Engineer” I’ve ever encountered views the world in terms of what they can shut down, cut off, or otherwise restrict, getting in the way of the people who actually get work done.

KnoNoth May 15, 2008 5:38 AM

I think that Marcus’s text is a bit off-topic. The discussion should have been about the ethics of vulnerability research, but he talks about problems in resolving vulnerabilities. He points out that vulnerability research has become quite pointless and expensive, but he doesn’t talk about ethics. He actually agreed that some research is useful (regarding Paul Kocher’s paper) and the rest of the research is useless, but he didn’t say anything about ethics. Actually, I understood that he doesn’t find vulnerability research to be unethical. He simply thinks that it is useless and in the long run doesn’t change anything.
In a debate competition, 9 out of 10 judges would have declared Bruce the certain winner.

Poobah May 15, 2008 8:12 AM

Security Engineers (good ones, anyway) realize that there’s no such thing as a foolproof, completely secure system. We deal with risk and how to minimize and mitigate it. That’s all.

kurt wismer May 15, 2008 9:08 AM

@antonomasia
“> the capability of software to do things you don’t want it to is not a vulnerability

How would you convince me of that?”

the most straightforward way is to point out that sometimes you want your disk to get formatted and sometimes you don’t… software whose behaviour deviates from your desires (does something you don’t want it to do) is possible simply because it’s impossible to programmatically determine what those desires are to an arbitrary degree of precision…

Carbon14 May 15, 2008 12:07 PM

I really get a laugh when something happens like Sony spending millions and a year to create a few lines of copy protection for music CDs, and a week later it’s beaten by a 79-cent black marker.
Low tech will still work when all the electronic infrastructure is toast.

Antonomasia May 15, 2008 12:31 PM

@kurt

sometimes you want your disk to get formatted and sometimes you don’t

Sometimes the formatting program gets to run and sometimes it doesn’t.

determine what those desires are

So part of the UI will be in the TCB and able to hand out authority to the programs you call.

Not all software is the same. A large part of what’s wrong with current systems is that all programs get a chance to misbehave with resources that are none of their business. The fact that a media player can find word processed files and mail them to half the world is a design flaw (in the entire execution environment) that I call a vulnerability.

Current systems have to determine the user’s desires too. With some (perhaps sizeable) changes in the detail to provide programs with the authority you want them to have (even after they’ve started running) it should be possible to improve drastically on what we have now. When was the last time you used a media player to write anything beside video and audio output?

Davi Ottenheimer May 15, 2008 5:29 PM

@ Clive Robinson

I agree 100%. I do not think there is much point in discussing this topic unless the environment is included.

Humans adapt to the situation they are in and most follow a simple incentive system.

Reward people for thinking about and anticipating fault and the likelihood for secure systems will increase.

This should not be a discussion about “special people” but rather what is wrong with the incentive system most engineers have to work in.

Everyone has the innate ability to break things and history illustrates this nicely.

The difference in one person’s ability is mainly due to things like experience, creativity, and an aptitude for analysis, rather than anything unique to security. You might find, for example, that people who study music are likely to make excellent security analysts.

On the flip side, if you reward people for raw output and employ management theories like “no whining” and “do not bring up problems unless you have solutions”…engineers will focus on quantity rather than quality and secure systems will be the exception.

Vulnerability research shouldn’t need to be a specialty, but unfortunately the culture of many work environments means there is hardly any other way to introduce it.

moo May 15, 2008 7:28 PM

Security engineering is not like building-a-bridge engineering. Building a bridge is about: this bridge should always hold 10 tonnes under normal conditions, so we’ll design it so it really can hold 13 tonnes, or 11.5 tonnes if there’s a huge storm going on, etc.

Normal engineers consider failure modes in terms of things that aren’t supposed to happen, but they could happen, maybe, under some conditions. And the bridge would have to be able to withstand that. And if one part of the system fails, will the entire bridge (or the entire electrical grid, or …) all come crashing down? Or will just that one part fail, and the rest manage to deal with it?

In contrast, security engineering is about dealing with a malicious adversary, who can and will construct a targeted attack just to put 100,000 tonnes of weight on your bridge. It’s about dealing with an adversary who will hit both ends of the bridge with smart bombs at the exact same moment, to make the whole thing collapse.

A better analogy might be locksmithing (or safe-cracking). If it’s your job to design a huge safe, or a bank vault, you want to secure that thing every possible way. You want to make it as difficult as you possibly can for unauthorized users to break in and steal stuff. When they drill through a solid inch of iron to get at the locking mechanism, you want it to shatter a glass plate and another locking mechanism to slam down, cutting them off. You want defense in depth.

I think security engineering is like that. Think of your company’s servers, or the physical site you’re trying to protect, as a bank vault (or set of bank vaults). You have to assess what the risks are, what things are more or less critical to protect, you need layers of different kinds of protection, you need mechanisms in place to detect tampering and set off alarms, and you need a plan to respond rapidly and effectively when the alarms go off. I’m sure banks have procedures they have to follow after a break-in or a hold-up or something…. after their security has been compromised, how do they restore things to a trustworthy state? Security engineering is about that, too–design things so they can’t possibly fail, design them so that when they fail they can’t possibly wreck other things, design them so that when everything gets wrecked you have a realistic plan for recovery.

Anonymous May 15, 2008 7:55 PM

@ moo,

I think that you and a number of others commenting on this post are having a little difficulty distinguishing between a “conventional engineer” and a “research engineer”, which is what some “security engineers” and “forensic engineers” tend to be.

A “conventional engineer” deals with “known risks” or “failure modes”, which are usually well categorised – “hundred and thousand year storms” being an example. They might also consider some likely scenarios from known risks in other engineering domains. As “design engineers” they also “dare to dream of what can be” on the creative side, tempered by practical considerations.

A “research engineer” considers not just known and likely risks but also uses their viewpoint to consider unknown risk potential, and usually goes further and investigates it (Newton’s observe/theorise/experiment definition of a scientist).

In essence your “research engineer” is close to an ideal (as opposed to academic) scientist with a breadth of experience covering many, many fields of endeavour. In a way they are like “Renaissance men” and are very scarce resources in that they “do dare to dream” not just on the light side but “on the dark side” as well 8)

It would also be interesting to find out what percentage are left handed (which is what the words “sinister”, “gauche”, etc. really refer to).

Clive Robinson May 15, 2008 8:18 PM

Oops, done it again 8(

The above anon post (@moo) was mine.

To be a “bad workman” I shall blame the Motorola Sidekick Slide I’m using, as its browser is not “auto filling” known fields (as other more feature-rich browsers do), much to my annoyance 8)

TheDoctor May 16, 2008 2:49 AM

The funny thing is that the German government, in all its glory, decided to ban access to almost all tools that “can” be used to crack software systems.

Basically the law says: “If we think you are a good guy, you can use these tools, but if we think you are one of the evil ones, you are f**ked.”

It is really that indeterminate, and the definition of good or evil is left to the government/the judge.

So if you are a software professional and have all those tools like Wireshark, Cain, etc., you are halfway to prison.

What a great environment in which to learn security evaluation.

kurt wismer May 16, 2008 9:16 AM

@antonomasia
“Sometimes the formatting program gets to run and sometimes it doesn’t.”

easier said than done… how is the system supposed to accurately disambiguate the context – and if it does so by asking the user, how is the user supposed to know that something should or shouldn’t be allowed to do what it’s trying to do…

formatting the disk is one of the easiest examples for a person to figure out, there are plenty that are far more difficult…

“So part of the UI will be in the TCB and able to hand out authority to the programs you call.”

and how is it supposed to know when something should be granted authority?

“Not all software is the same. A large part of what’s wrong with current systems is that all programs get a chance to misbehave with resources that are none of their business. The fact that a media player can find word processed files and mail them to half the world is a design flaw (in the entire execution environment) that I call a vulnerability.”

and i call it a consequence of the generality of interpretation… if we were going to try and design a system where this couldn’t happen we’d not only have to stop the media player from accessing the file directly, we’d also have to stop the media player from communicating with a process that does have authorization to access the file for emailing in some manner (like your browser with the aid of webmail)…

and that would only cover the very narrowly defined act of emailing, not all the other ways it could leak data, such as encoded in the URIs it connects to or encoding the data it’s leaking in pauses in its downloading of the media stream it’s serving…

“Current systems have to determine the user’s desires too. With some (perhaps sizeable) changes in the detail to provide programs with the authority you want them to have (even after they’ve started running) it should be possible to improve drastically on what we have now.”

i don’t doubt that there is room for improvement if you define what acts a program can and cannot perform, however there is a limit to the granularity at which you can reasonably expect those limits to be defined… end users have a hard enough time knowing which programs are safe to run, never mind what those programs should be allowed to do… centralized bodies have a hard enough time adding safe programs to whitelists due to the sheer volume… organizations that digitally sign applications for locked down environments have been unable to avoid signing malware due to lack of expertise… and the software publishers themselves can’t be trusted to define a safe list of permissions for their software – even the non-malicious ones will invariably suggest greater privileges than are needed by any particular user, both because the publisher is lazy and also because the user is unlikely to use every single feature in the software (making the permissions required for those features irrelevant)…

“When was the last time you used a media player to write anything beside video and audio output?”

when was the last time you used a media player or any other program that didn’t need to engage in at least some interprocess communication some of the time…

WD Milner May 19, 2008 12:27 PM

In thinking “in terms of failure modes” I recently asked a crypto product firm if they used the original PRNG in their AES implementation. Given the time it took to get a response, and the fact that I had to ask multiple times, you’d think I’d asked for the keys to the kingdom.

The reply, “We use AES in CBC mode (FIPS cert #655). The DRNG (FIPS cert #380) is based on X9.31 and is seeded by a TRNG. Currently, the asymmetric keypairs are generated on-board with the above TRNG and the private keys are generated read only to the device (Much like an HSM.) The symmetric (AES) keys are generated on the device (with the above DRNG)”, was detailed enough to send me rummaging for specs, details etc. and I’m still not sure of the precise answer.

What was wrong with “yes” or “no”? To someone just a bit paranoid, it looks like “I don’t know.”

What a world.
