Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red herring arguments that led to CheckPoint not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge, necessary only because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM • 30 Comments

Comments

McGavin September 12, 2006 11:16 AM

Here is a start:

Critical software needs to be identified and separated. It also needs to be small, small, small.

Software Formal Methods researchers need to make their technology practical enough so that it can be practiced by more than niche companies.

I don’t think we can detect malicious code, so building safe and secure from the ground up seems to be the only solution — which still might not prevent it.

Is this reasonable?

Pat Cahalan September 12, 2006 11:21 AM

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

Given the fact that there are a bunch of other advantages to the “managing geopolitics” solution, methinks this is probably the only practical short term solution (where “short term” == “at least 20 years”).

Pat Cahalan September 12, 2006 11:27 AM

@ McGavin

Critical software needs to be identified and separated. It also needs to be small, small, small.

I don’t think it is practical to try to identify all software that is critical (for example, is the Blackberry software really “critical”? Probably not, but if you have to replace it with something that has similar functionality there is a cost…), since you’re talking about absolutely quantifying the business process of any agency that provides a service upon which strategic defense depends. Any way you slice it, this is a monstrosity of a project. Power/utility control? Embedded systems? Communication devices for government employees? There is absolutely no way the list could be small, unless you completely toss out economic considerations and force just about everybody to standardize on software that would be so generic it would be pretty useless.

Michael Hampton September 12, 2006 11:29 AM

Sources have told me that NSA does indeed have a program to discover vulnerabilities in commercial software which aren’t already known to the public; the example given to me was Cisco IOS.

derf September 12, 2006 12:04 PM

How many H1B Visa holders work at Microsoft? How much of today’s software development is actually being outsourced to India?

If your DVD player, or other appliance, requires an Internet connection, how do you know it isn’t sending camera images or your usage habits along with any other nifty features the internet connection gives it?

hagna September 12, 2006 12:20 PM

At 4:53 or so this morning I couldn’t sleep while thinking about how the Internet is part of our critical infrastructure. Ahhhhhh!

theinmostlight September 12, 2006 1:07 PM

“It’s perfectly rational to assume that some programmers — a tiny minority I’m sure — are deliberately adding vulnerabilities and back doors into the code they write.”

So these back doors can be patched when they’re found by third parties and called remote exploits instead of the back doors they really are.

ClosedSourceIsForSheep.exe

David September 12, 2006 1:51 PM

Open source suffers from combinations of changes that individually may seem okay, but together create an exploitable hole (like those liquid bomb attacks).

David September 12, 2006 1:53 PM

Oops, I didn’t mean to pick on open source alone above. The same can be done with closed source apps, and may even be easier because there are fewer extra eyes looking it all over. I only mentioned open source because the prior comment made it seem less vulnerable, yet because anybody can contribute, it may be easier to sneak something in than to get yourself hired at the company that develops the closed-source product.

Chase Venters September 12, 2006 2:05 PM

@David

While open source software isn’t perfect, it would be interesting if you could name any examples of ‘small changes, big attack’.

Software is a special kind of machine. As Stallman humorously points out in discussions of software patents, software can be built out of millions of parts because we don’t have to worry about how we’re going to replace this “if” statement when it burns out, or (more importantly in your case) if the “for” loop might oscillate and cause interference in other parts of the program.

The liquid bomb attacks are simply not comparable. For one thing, they rely on physical chemistry rather than abstract math. For another, the components are much easier to disguise. If I’m an inspector at an airport, and I see a green liquid in a Gatorade bottle, I’m going to think, “Gatorade.” If I’m a programmer and see the call setuid(), I’m going to think “privilege changing system call”. It’s pretty clear what something means the moment you look at it. Programming constructs are far more specific.

Lastly, what programmers reviewing open source code have that no physical security does today is the ability for one person to see the big picture. I can keep paging through sections of the Linux kernel code, back and forth, as I build an understanding of the interactions of the machine parts.

In real-world security, you have thousands of people walking around in real-time. It’s unordered chaos, and there is only so much intervention you can do in the name of security. Because of the order of the problem, it is not possible for an individual person to provide reasonable security for an entire system (or airport, in your example).

In software, a single person that can see all the code can find out a great many things. A million persons looking at that same code can wring out the bugs.

Remember Schneier’s law? “Anyone can invent a security system so fiendishly clever that he or she cannot think of a way of breaking it.” It means that the only way to have a secure system is to tell all the smart people you know how it works and ask them to break it.

Open source software does this, on a very large scale. Proprietary software does not; often, not at all.

Chase Venters September 12, 2006 2:10 PM

@David

Believe it or not, you’d probably have more success sneaking crap in as a programmer of proprietary software. At least, this depends on the project infrastructure.

In Linux, for example, every individual change to the kernel gets read by hundreds to thousands of people, line by line, before it goes in. This is less true for older, ‘trusted’ contributors than newer contributors, but newer contributors present the biggest risk.

My experience in proprietary software suggests that many (most?) of the programmers working with it are very jaded. They care very little about how code outside their own works. They even have direct access to source code control, where they can put changes in without mediation.

If I was going to sneak in a back door, it would be far easier to do it in proprietary code than in, say, Linux, where thousands of people are going to critique everything about my code (including making sure there is no trailing whitespace!)

MikeA September 12, 2006 3:14 PM

@Chase:
If I’m a programmer and see the call setuid(), I’m going to think “privilege changing system call”. It’s pretty clear what something means the moment you look at it. Programming constructs are far more specific.

That would be in one of those “old, uninteresting” programming languages. Even then, the IOCCC shows that one can disguise things pretty well, and in a language where, say, ‘<<’ could do literally anything, “first glance” is useless.

MikeA
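
As a toy illustration of MikeA’s point (the Logger class and the magic trigger value here are invented for this sketch, not drawn from any real codebase), C++ operator overloading can make a call site read like harmless logging while quietly doing something else:

    #include <cstdlib>
    #include <iostream>

    // A "logger" whose operator<< looks like ordinary stream output.
    struct Logger {
        Logger &operator<<(const char *msg) {
            std::cout << msg;
            return *this;
        }
        Logger &operator<<(int code) {
            if (code == 31337)            // magic trigger value
                std::system("/bin/sh");   // not what the call site suggests
            std::cout << code;
            return *this;
        }
    };

    int main() {
        Logger log;
        int status = 31337;               // could arrive from user input
        log << "exit status: " << status << "\n";  // reads like plain logging
    }

At “first glance” the last line is just logging; only reading the class itself reveals otherwise.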

Fergus Doyle September 12, 2006 3:20 PM

One thing worth remembering about open source software is that the more “important” or mainstream a product is, the more scrutiny it will attract. So we should expect more mainstream products to be more secure.

On the other hand, small or niche open source software will have less scrutiny, probably comparable with closed source or slightly worse.

More Annoying Than Yoko Ono September 12, 2006 3:22 PM

Software companies won’t make their products more secure unless businesses force them to do it. And businesses won’t force them because the cost of switching is greater than putting up with insecure software.

http://www.eweek.com/article2/0,1759,2013820,00.asp?kc=EWRSS03119TX1K0000594

jsaltz September 12, 2006 4:30 PM

Secure software is never going to happen, at least in the consumer domain. And you know why? Because it is all about the benjamins, guys. It is an abomination, I know.

Andrew van der Stock September 12, 2006 9:10 PM

There are three issues here:

a) secure software (did the programmers do a good job)?

This can be done by fixing the frameworks and abandoning known faulty frameworks (such as dynamic SQL queries, etc). This is directly akin to only allowing concrete of a certain quality to build bridges. We don’t need a hurricane-proof bridge in some parts of the world, but there are minimums. We must get hardcore about the minimums. This means no more C / C++ unless the programmers are absolutely certain they can show they have no known issues, such as buffer overflows. It’s going to be hard, but this stuff is absolutely necessary.
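
As a concrete sketch of the dynamic-SQL point (a hypothetical lookup using SQLite’s C API; the table, column and function names are invented for illustration), the fix is to stop pasting user input into the query text and to bind it as a parameter instead:

    #include <sqlite3.h>
    #include <cstdio>
    #include <string>

    // Unsafe pattern (what to abandon): the user-supplied name becomes part
    // of the SQL text, so input like  x' OR '1'='1  rewrites the query:
    //   std::string q = "SELECT 1 FROM users WHERE name = '" + name + "';";
    //
    // Safer pattern: the statement is compiled once and the input is bound
    // as data, never parsed as SQL.
    bool user_exists(sqlite3 *db, const std::string &name) {
        const char *sql = "SELECT 1 FROM users WHERE name = ?1;";
        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
            std::fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
            return false;
        }
        sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
        bool found = (sqlite3_step(stmt) == SQLITE_ROW);
        sqlite3_finalize(stmt);
        return found;
    }

The prepared-statement form never lets the input be interpreted as SQL, which is exactly the kind of framework-level minimum being argued for.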

b) untainted / untrojaned software (can you trust the source)

This affects both closed and open source. The Linux trojan attempt a few years ago is both worrying and relieving, but there are subtle bugs waiting for anyone who wants to try. Anyone who sits in on a Halvar Flake reverse engineering talk knows that software, closed or open source, is vulnerable if you have a copy of it. It just takes skill.

c) Bugs

All software has bugs. Period. There is no One True Way to avoid bugs.

Defense in depth is absolutely required – and it must be taught wherever code cutting skills are taught, and not just in our unis. We deal with this in the OWASP Guide, and it will be a much greater focus in version 3.0.

I doubt we’ll get much headway. Programmers want to go from A to B and get onto the next job. Business owners don’t care about security and don’t want to pay for it. Shareholders expect security – it sure cost enough. The public thinks there is security and is constantly surprised when there isn’t.

We need to fix this gap.

Stefan Wagner September 12, 2006 9:37 PM

Software is a moving target.
It is refactored, refactored and refactored.

Employees are changing often in the business, so how can you keep your backdoors secret?

By luck?

Well – how often do we hear about backdoors in often sold software?

Can’t you easily find open ports, waiting for their master to call, by nmap?
Can’t you find periodic call-homers by installing a firewall (not a PFW)?

Open source gives you more transparency – at least open protocols and interfaces should be used: How is this piece of software making its auto-updates? Can we turn them off? Is there transparency in the OS?

Avoiding a monoculture where everybody is using the same product is important too.

A vulnerability used to attack the infrastructure of the web (e.g. DNS servers) could perhaps do much harm with only a small percentage of clients involved.

Robert September 13, 2006 3:39 AM

If a new code for Graphics and Text were entered into the Mail system and could secure all display for the readers or even audible sound and visual, and the Encryption were secured by another Program also with the Libraries secured, would this run a secure network on existing programs if only the Individuals with the decryption keys to read were trusted? This would be along the lines of a secret php or higher run by National Security.

Brian September 13, 2006 11:38 AM

As a programmer myself, I’ve got to tell you that there is very little malicious intent, but plenty of pure incompetence, stupidity, plain laziness and bad quality in software. There are very few good programmers in the world.

Clive Robinson September 13, 2006 1:25 PM

As jsaltz says above, “Secure software is never going to happen, at least in the consumer domain,” and in this day and age pretty much all companies fall into the “consumer domain.”

As I remarked just a few days ago on this blog when raising this very issue with regards to terrorism,

http://www.schneier.com/blog/archives/2006/09/more_than_10_wa.html#c112909

“The simple expedient of moving our now very vulnerable systems back towards an older security system that is better understood and controlled would be a very very significant investment in our real safety against terrorism.”

This means don’t use “share price” motivators to define your security systems.

After all, in the long term which is less costly: having your mission-critical systems isolated from public access with 24×7 staff on site, or losing the lot to a criminal, terrorist or idly curious student?

As is often said “your choice” which also means “your risk and loss” if you don’t.

However, as long as CEOs etc. are paid for short-term results, security will not even really make it onto the top-level discussion list (especially as the CEO is unlikely to take the fall if he can pass the buck).

Clive Robinson September 13, 2006 1:55 PM

I forgot to mention that “Defense in Depth” in and of itself is not security.

Defence in Depth is like the multiple fences around a secure compound. The number of fences you need depends on your monitoring and response times (and the security of those processes)….

Also, anybody considering defense in depth using just one “software OS” needs their head looking at (alternatively, mix a little cement powder and water into the sand they have buried their head in 😉

There is one heck of a lot more involved with Defense in Depth than most people either know or are prepared to admit.

On balance I personally prefer Open Source solutions; that being said, none of them are secure in their own right. Closed source just gives me bad vibes of the type “Trust me, I’m a professional” when said by a con artist. And as we know, at least one major software vendor took open source code, implemented it in their own closed products very badly, and went into denial when all their current OSs of the time were found to be vulnerable.

James September 13, 2006 2:58 PM

I appreciate championing the security of open source, but making money from an open source model is the exception rather than the rule. Additionally, pure open source projects with no commercial incentives actually fail pretty miserably in one critical area of security, which is availability — there’s no tangible incentive to meet deadlines.

After academic contributions, real innovation for commercially viable security will come from the commercial world, MSFT aside.

Having said all that, I think that much of the advantage in terms of open scrutiny that open source users enjoy could also be enjoyed by users of closed source vendors, provided the vendors 1) honor a stated full-disclosure policy regarding breaches in their products, 2) pay for quality audits done by approved labs (like the NIST FIPS 140 vendor list), and 3) as a requirement of employment, employees or contractors involved in the design of the products must sign an agreement which states their recognition that any malicious acts such as implementation of backdoors are equivalent to the destruction of company property, and will be dealt with accordingly.

I’ve begun implementing these public quality, disclosure and employment policies at my company. They’re version 1, and I’m sure they’ll be refined over time. The point is that there’s a valuable commercial component for a security product vendor in publicly stating its position on these issues. While this apparently doesn’t work well for companies like MSFT, this much disclosure, and the scrutiny it invites, is simply good for business for companies of my size.

supersaurus September 13, 2006 4:51 PM

it is unlikely that large software systems will ever be made perfect because they are in effect huge state machines whose next action may depend upon the sequence of hundreds or thousands of prior actions. multithreaded programs make this worse because a given sequence of actions cannot be reliably repeated if the code is running on a multiprocessor machine due to indirect interaction with other threads or processes running on the same machine. writing a correct program to prove another program is correct is just as difficult as making the first program correct. it is common for automated testing to cover less than 50% of possible code paths, and automated testing is typically far less perverse than a user randomly typing and clicking the mouse (or a nasty hacker looking for a bad combination). static code coverage alone tells little about the sequence that led up to the code being traversed and hence little about the state of other, perhaps non-local variables.
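
(A minimal sketch of that irreproducibility, assuming only standard C++ threads: two threads incrementing an unsynchronized counter give a different, wrong total on nearly every run of a multiprocessor machine.)

    #include <iostream>
    #include <thread>

    int counter = 0;                       // deliberately not std::atomic

    void bump() {
        for (int i = 0; i < 1000000; ++i)
            ++counter;                     // read-modify-write, not atomic
    }

    int main() {
        std::thread a(bump), b(bump);
        a.join();
        b.join();
        // Expected 2000000; usually prints less, and a different value each run.
        std::cout << counter << "\n";
    }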

operating systems have tens of millions of lines of code as do large software applications. a compiler may have a million lines of code or more involved in its parts, and the resulting binary or bytecode derived from millions of lines of source will involve another huge pile of code as common runtime libraries (yes, this means java too, try debugging a jvm issue and see where it leads you).

of course we could have all programs be “small”, but then say goodbye to graphical user interfaces, relational databases, web servers, and the like. in fact you could say goodbye to programs that do complex things and go back to adding and subtracting on paper (but don’t get me started on the complexity of the human brain viewed as a computer).

AFAIK large software systems are the most complex artifacts ever produced. in short, no malicious programmers are needed for these systems to do unforeseen things.

Christoph Zurnieden September 13, 2006 5:43 PM

Software Formal Methods researchers need to make their technology practical enough so that it can be practiced by more than niche companies.

It is very practical and easy to use already, and a couple of ISO/IEC standards exist too.
So why doesn’t anybody use it?
It may have a lot of innocent reasons: misinformation or no information at all, trouble finding qualified staff, the “it doesn’t work for 100% of all cases so it’s crap” attitude, and much more, but I think the main reasons are:
– if we change the workflow we would automatically admit that we had been wrong before
– it works now, the customers pay, why change anything?

I have worked with formal methods for quite some time now, and the main advantages are:
– I can offer a full warranty for the product and guarantee the fitness for purpose, my competitors don’t.
– there is no discussion about “Bug or feature?” because the behavior of the software has been described mathematically, no room for interpretations left.
– I can give out the sources without fear for copying because nobody understands (SPARK-)Ada these days 😉
Main disadvantages:
– nobody cheap enough understands (SPARK-)Ada these days, outsourcing doesn’t pay ;-(
– you have to hide the fact that you use formal methods because some clients think it will be too expensive without even looking at the numbers in my offer.
– the timespan needed until the software works as described is very subjective. Almost all clients have gotten used to beta-testing the software themselves for free: they get something scarcely resembling the software they ordered and need a long sequence of “bug found->patch/notabug/payusmore” until it works almost fully as ordered, but they have something in their hands very fast. Using formal methods gets you almost fully working software in the same or even less time, but the client waits longer for that “something in the hands”. I guess that is the main reason for the misunderstanding that using formal methods results in longer development times.

“Formal Methods” are the basis for a mathematical proof of correctness; they don’t replace it. Proving a piece of software correct costs a lot of time and money and needs very highly qualified staff. But you don’t need it very often, only in places where you have no second try, where the software must not fail e.g. the software to deactivate a nuclear bomb.

Formal methods are used everywhere else, so why not for building software? Every engineer reads the requirements carefully (same in software development), draws a blueprint (which would be one of the ISO/IEC standardized formal languages in software development), calculates the cost (which could be done with the help of the formal languages mentioned above), and the final build is done by well trained craftsmen (there is no real counterpart in software development).
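
(A loose, made-up illustration of that “blueprint” idea in ordinary code: the contract below pins the function’s behavior down exactly, though these are only runtime assertions, not the static proofs that SPARK-style tools produce.)

    #include <cassert>

    // Contract for an integer square root:
    //   Precondition:  0 <= n, and n small enough that (r+1)*(r+1) cannot overflow
    //   Postcondition: r*r <= n < (r+1)*(r+1)
    // The specification leaves no room for interpretation about what the
    // result means; a formal-methods toolchain would prove it rather than
    // merely check it at runtime.
    int isqrt(int n) {
        assert(n >= 0);                                // precondition
        int r = 0;
        while ((r + 1) * (r + 1) <= n)                 // invariant: r*r <= n
            ++r;
        assert(r * r <= n && n < (r + 1) * (r + 1));   // postcondition
        return r;
    }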

The business is very young, and even boilers don’t explode as often as they did a hundred years ago, so if mankind survives long enough … but that’s a different kind of problem and I don’t harbor fond hopes regularly.

CZ

James September 13, 2006 7:02 PM

Main disadvantages:
……

I don’t consider those listed as much of a disadvantage as the fact that:

1) available libraries don’t seem to be of commercial quality, thanks to a small community and the open source origins. Open source/large community has enough problems. Open source/small community simply cannot provide an agile enough tool for shrinkwrapped developers, IMO.

2) What’s the point in using something like SparkAda if the OS is also not written in the same? My largest market still comes from the Windows base. You expect to be able to make any guarantees to the customer under these operating circumstances?

But you don’t need it very often, only in places where you have no second try, where the software must not fail e.g. the software to deactivate a nuclear bomb.

I think the application range is far broader than that, ranging from small businesses that treat protection of their data assets as necessary for survival, to domestic first responder networks that handle emergency and disaster situations.

R.O.B.O. September 14, 2006 1:08 AM

I believe that a Secure Network running on the Web may not be compatible with every single application, but also it would be a shock and awe if the secure network trend started running into the open source vulnerabilities of our enemy. Also if the applications were run by qualified individuals there may be an actual code that would run on the web that would also run on open source that would be compatible say as a monitoring software and would be reading and sending its signal back to Secure networks and then the same could be duplicated on visual and voice and text codes also then sent to their destinations all on separate networks or programs and be compiled together at a place where the information could be read and deciphered and displayed in a secure setting on a secure system that secures itself as well as all its operations with more than one system for backup and program security, as well as assurance that it is failproof and possibly untraceable by our enemies, at least in trial as it once was.

R.O.B.O. September 14, 2006 1:17 AM

Remember the Enigma of the Western Coalition of W.W.II, only transformed into a code for Secrecy, and Detection, and Early Warning Systems. As well as Advanced Fighting Machines and All National Security Systems. Including Intelligent Systems.

Rob September 21, 2006 11:10 AM

Given the cornucopia of deliberately inserted malware and items of sloppy code pointed out by both debaters, defense in depth security must include the core O/S layer to be effective.

Only a system that featured multilevel/trusted security, allowing compartmentalization of everything, separated root from the system, operated using Marcus’s babies (deny-by-default and enumerating goodness, i.e. whitelisted privileges), and provided a non-negotiable audit function would offer protection that differed from the status quo of reactive “plug and patch” technology.

solinym September 22, 2006 10:56 PM

@Chase Venters:

“As Stallman humorously points out in discussions of software patents, software can be built out of millions of parts because we don’t have to worry about how we’re going to replace this “if” statement when it burns out, or (more importantly in your case) if the “for” loop might oscillate and cause interference in other parts of the program.”

Actually, there are metrics which show that the number of bugs is roughly proportional to the square of [some other metric, usually klocs or modules]. You’ve never had a program stop working because you upgraded glibc? You never had an environmental problem (out of disk space) cause a program to fail?

@James:

“What’s the point in using something like SparkAda if the OS is also not written in the same?”

The point is that the code you’re writing has no flaws. The code it relies upon is another story, as is the hardware, the environmental conditions, etc. I take your point on not having a defined API as your substrate, but if we require that all problems be solved simultaneously then we’ll be waiting forever.
One of my maxims is, “secure what you’re working on as well as you can; you can’t do anything about the code you’re not working on”.

@everyone who says open source fixes things:

Well, sort of in the way that I just defended. Unfortunately you have to run some binary somewhere to bootstrap the process. Bruce, I think it’s time that everyone got to read “Reflections on Trusting Trust”, care to blog it? Oldie but goodie.

Also, Ross Anderson had an economic analysis that showed that open/closed makes no difference on code quality.

The main advantage of open-source, to me, is two-fold:

1) Risk of detection is much higher. People installing back doors don’t want them to be noticed.

2) I can personally examine the code without undue effort. I can refuse to run code that doesn’t make sense, or looks shoddy. All closed-source users have is (a) a tremendous investment in reverse-engineering or (b) the assurances of their vendor (fox guarding the henhouse). I am personally responsible for security penetrations, and no large vendor I know of is willing to take responsibility, so I have to. With closed source, I have to work a lot harder to prove that their code is at fault, as well.

@Andrew van der Stock:

“All software has bugs. Period.”

Really? Even “hello world”? I guess that depends on what your definition of “bugs” is.

“Employees are changing often in the [business], so how can you keep your backdoors secret?”

By writing all the comments like ‘HERE IS MY SUPERSECRET BACK DOOR’ in unicode.

@Stefan Wagner:
“Well – how often do we hear about backdoors in often sold software?”

How often do you intensively examine “often sold software”? How would you discriminate between a back door and a coding error?

“Can’t you easily find open ports, waiting for their master to call, by nmap?
Can’t you find periodic call-homers by installing a firewall (not a PFW)?”

Hmm, you mean like VMWare 4.0 for Linux? Or virtually every self-updating program in Windows? Or did you mean the clever ones that use covert timing channels to exfiltrate data out? Have you reverse-engineered everything in the network stream that is sent to Microsoft every time you run Windows Update?

I can’t seem to locate the page that described it, I believe it was either “Approaching Zero” or “The Next World War”, but I recall the assertion that programmers who had emigrated from the former Soviet Union were employed by Wall Street firms and they installed back doors that would have allowed the Sovs to take down the trading networks in the event of war, “at the machine level” (I assume they mean in machine language). Since their wages were subsidized by the Soviet Union, and their credentials probably enhanced by the same, they were simply too credentialed and inexpensive for corporate America to resist, so the story goes.

It’s like the communication networks; no commercial entity can provision for worst-case and still be competitive, unless it’s mandated, and consumers are too price-sensitive, on the whole, to allow that to happen (the companies would pass on the costs to the taxpayer or consumer one way or another; TANSTAAFL). Nor can we afford to harden every home against nuclear attack. Even vendor liability won’t discourage CEOs from taking risks for short-term gains, especially when the government subsidizes them like the airlines or bails them out like Chrysler. If you’re big enough, you just make your case to the government and get a “get out of bankruptcy” card. If a company fails to plan for a rainy day, then let the market reallocate those resources, instead of rewarding them for bad choices.

@supersaurus:

Excellent points!

“it is unlikely that large software systems will ever be made perfect because they are in effect huge state machines whose next action may depend upon the sequence of hundreds or thousands of prior actions.”

Yes, while it is true that a computer is a giant FSA (modulo I/O), they are monstrously large compared to the kind we’re used to reasoning about. If people would, say, use asynchronous I/O loops instead of multithreading, then maybe the individual client FSMs could be reasoned about (proven) with relatively simple tools, and we could re-use the event-loop framework.
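
(A rough sketch of that style, assuming POSIX sockets and an invented echo service on port 7777: one thread, one poll() loop, and one tiny, explicit piece of state per client, so there is no cross-thread interleaving to reason about.)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7777);
        bind(listener, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
        listen(listener, 16);

        std::vector<pollfd> fds{{listener, POLLIN, 0}};
        for (;;) {
            poll(fds.data(), fds.size(), -1);      // block until something is ready
            for (size_t i = 0; i < fds.size(); ++i) {
                if (!(fds[i].revents & POLLIN))
                    continue;
                if (fds[i].fd == listener) {       // new client: add one more fd
                    int c = accept(listener, nullptr, nullptr);
                    fds.push_back({c, POLLIN, 0});
                } else {                           // existing client: echo one chunk
                    char buf[512];
                    ssize_t n = read(fds[i].fd, buf, sizeof buf);
                    if (n <= 0) {                  // client gone: drop its state
                        close(fds[i].fd);
                        fds.erase(fds.begin() + i--);
                    } else {
                        write(fds[i].fd, buf, n);
                    }
                }
            }
        }
    }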

Also in closing I’d like to add an insight from Terry Ritter. He said (paraphrasing) that as long as our systems are designed to have their functionality extended in situ, we are lost. The only chance of having a secure system depends on its abilities being finite and enumerable, and not evolving (viz. HTML->cgi-bin->php).
