Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization's network. The bomb would have "detonated" on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they're known by the system. They know how the system and its security works, and its weak points. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana's attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they're trusted. They have access because they're supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They're already inside the security system, making them much harder to defend against.

It's not possible to design a system without trusted people. They're everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company's name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn't operate without trusted people.

Replacing trusted people with computers doesn't make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven't changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA's no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as "need to know" and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It's why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It's the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It's why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It's why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud.

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
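
The overlapping-spheres idea in technique 4 can be sketched in code. Here is a minimal, hypothetical illustration (the function and approver names are invented, not taken from any real system): a high-value action goes through only when two distinct trusted people have signed off, so no single insider can act alone.

```python
# Hypothetical sketch of two-person control (the "four eyes" principle).
# A high-value transfer executes only after two *different* trusted
# people have approved it; one person alone cannot complete it.

class DualControlError(Exception):
    pass

def execute_transfer(amount, approvals, threshold=10_000):
    """Run a transfer; amounts over `threshold` need two distinct approvers."""
    if amount > threshold and len(set(approvals)) < 2:
        raise DualControlError("two distinct approvers required")
    return f"transferred {amount}"
```

Small transactions pass with one approver, while a large transfer approved twice by the same person is rejected, which is exactly the property the two-key missile launch and the dual-signature corporate check rely on.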

These security techniques don't only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren't perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn't have an audit process by which a change one person makes on the servers is checked by someone else; I'm sure that would be prohibitively expensive. Certainly the company's IT department should have terminated Makwana's network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It's important to keep in mind that incidents like this don't happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM • 53 Comments


Nomadic Logic • February 16, 2009 1:52 PM

I will never understand why Makwana was fired in the late morning and allowed to work out the rest of his day. It's the silliest thing I've ever heard, and FM almost paid the price for their stupidity.

Carlo Graziani • February 16, 2009 1:54 PM

You may want to edit out the reference to "double-entry bookkeeping". That has nothing to do with security, or with trust. It's an error-detection scheme, like parity checking.

Nice article, otherwise.

Tangerine Blue • February 16, 2009 2:30 PM

Hi Carlo,

Double-entry, with supporting debit/credit paperwork, makes a natural backup system. Do you not consider backups and error-detection part of security?

Carlo Graziani • February 16, 2009 2:42 PM

@Tangerine Blue:

Sure, in a broad sense error-detection and backups are "security". However, if I'd been editing this article, I would still have struck the reference to "double-entry bookkeeping" from the list of trust-limitation examples in which it is embedded, as it is not in fact a trust-limitation measure, but rather an error-detection measure.

Matthew Carrick • February 16, 2009 2:50 PM

Hey Nomadic Logic: Is it possible Management was hoping he would do something to implicate himself further?

christopher • February 16, 2009 3:04 PM

Disagree with #5. The severity of punishment has no correlated effect on deterrence. DoJ crime statistics overwhelmingly show that the only factor correlated with deterrence is the perceived chance of getting caught.

Impossibly Stupid • February 16, 2009 3:13 PM

Backups are *not* part of security. On the contrary, they usually represent an additional vulnerability. This is somewhat related to recent stories on hard drive encryption; the weakest link isn't always where you think it is. It's also interesting how ticket taking is a two-way street, too, since just a few days ago it was discussed as a potential weakness.

Another editing tweak that should be considered is the phrase "so their replacements have a change to uncover any fraud". It reads awkwardly like a typo of "chance". Maybe it even is but, if not, I'd reword it to be something like "the change allows their replacements to uncover any fraud".

Damages? • February 16, 2009 3:26 PM

"Court papers claim the damage would have been in the millions of dollars, a number that seems low." Compared to what? The billions that were squandered buying/writing bad mortgages? Where is the #5 for that travesty?

Chris • February 16, 2009 3:27 PM

Another measure that seems obvious: make people care. If the breach is my personal loss, I will not only be more cautious about making errors and motivated to find and fix existing ones, but I will also be more likely to investigate when I find something that looks fishy. People assuming responsibility can go a long way in protecting an asset.

Bruce Schneier • February 16, 2009 3:53 PM

"You may want to edit out the reference to 'double-entry bookkeeping.' That has nothing to do with security, or with trust. It's an error-detection scheme, like parity checking."

It's designed to detect errors, and make it harder to deliberately cook the books. Or, at least, it was when originally designed. Now that both halves of the ledger are created from the same database populated by the same people, this bit of security is lost.
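
The property under discussion can be shown with a toy sketch (this is an illustration of the invariant, not real accounting software; the account names are invented). In double-entry bookkeeping every transaction is recorded twice, so total debits must equal total credits; a one-sided "cooked" entry breaks the balance and is detectable.

```python
# Toy sketch of the double-entry invariant: each transaction appears
# twice (a debit in one account, a credit in another), so the ledger's
# total debits must equal its total credits.

def is_balanced(entries):
    """entries: list of (account, debit, credit) rows."""
    total_debits = sum(debit for _account, debit, _credit in entries)
    total_credits = sum(credit for _account, _debit, credit in entries)
    return total_debits == total_credits

# A legitimate purchase recorded twice: cash is credited, equipment debited.
honest = [("cash", 0, 500), ("equipment", 500, 0)]

# A one-sided entry "cooking" the books breaks the invariant.
cooked = honest + [("cash", 0, 250)]
```

As Bruce notes, this check only catches fraud while the two halves of the ledger are maintained independently; when both are generated from the same database, the invariant holds trivially and the security value is lost.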

Gordon Messmer • February 16, 2009 3:59 PM

Bruce, there's an error in your "a" tag around the word "any". There's no "href=".

Roy • February 16, 2009 4:44 PM

When I worked as configuration management lead on a software project, all software, including the scripts I wrote and maintained for running builds, was itself under configuration control. Correcting even a simple spelling error required checking in the corrected script, and the change went through only if the build was successful. Thus no change could be made without a full audit trail.

pdf23ds • February 16, 2009 6:01 PM

Roy: was deployment to production servers a limited right, and was going through configuration control the only way the average employee could get software deployed? Was this just a single customer, or a lot of customers? If yes and the latter, my hat's off to you.

Roy • February 16, 2009 6:15 PM

pdf23ds: There were more than a dozen developers, almost all of them with a Ph.D. (all in the hard disciplines). At meetings, I was the dumbest guy in the room.

Pj • February 16, 2009 9:47 PM

Roy: And more importantly, who controlled the version control? And who had access to the hardware? And...

Not saying that it is not a good step - it allows you to track mistakes back and correct them efficiently - but it is definitely no protection against an insider who means harm.

I say have few or lots of trusted people, but identify them well and throw them to the crocodiles after they're finished ;)

Diogenes • February 17, 2009 1:04 AM

"It's not possible to design a system without trusted people."

Of course, part of the current economic crisis is that more and more people are becoming alienated.

And when CEOs and bankers continue to reap bonuses while governmental torturers and violators of civil rights go free, while pensions are stripped and concessions demanded, serious demoralization results.

AC2 • February 17, 2009 2:05 AM

He must be a bloody genius to figure it all out and implement it within a few hours of being fired... Even while filling out exit interview forms, returning his canteen pass and what not!!

Unless of course he was already into this and it was the reason why he was fired in the first place.. Which makes the so-called 'lucky' discovery either part of a planned review or the most incredible piece of dumb luck ever...

Insider Treat • February 17, 2009 2:32 AM

Obviously, this is not the only example of Insider Threats.
The statistics suggest that the majority of attacks come from outside an organisation, but the much larger value of loss comes from insiders. From my experience, very few organisations deploy sufficient controls or detective measures to reduce this risk. There is a trade-off: the cost of countermeasures often outweighs the cost of a breach. As the cost of breaches increases, hopefully organisations will start to invest in the tools, procedures and personnel required to identify and stop malicious insiders.

ck • February 17, 2009 3:40 AM

Well, you missed another important "technique" concerning trusted employees:

Pay them well.

An unhappy trusted employee will soon not be trustworthy.

Calum • February 17, 2009 3:57 AM


Actually, one of the main purposes of double-entry book-keeping was to prevent fraud. Traditionally, separate people would carry out each entry, and therefore to slip through a false entry they would have to connive, multiplying the risk to them. Error-correction is just a useful side effect.

Sauron • February 17, 2009 5:49 AM

ck is completely right. Bruce, you forgot:

6. People are the most important aspect of your security. Pay and manage your employees in sensitive positions well, care that they are happy and remain loyal. Avoid making them disgruntled. Have a sensible human resources policy.

Most administrators I know are not very well paid, and not well regarded. But most of them do a very good job of keeping their company's systems working. And I can understand they may be unhappy, when one sees the bonuses in the financial sector.

And from what I've read, the error that got him fired was not that terrible: complaint #13 states "[He] erroneously created a computer script that changed the settings on the Unix server without the proper authority of his supervisor. This computer script was not maliciously created".

It would have been better management not to fire him, but first to give him a formal warning, and some training so that he would avoid making the same error in the future.

From what I've understood, he was a contractor. I think that's a mistake too: sensitive operations should never be outsourced. Contractors know they are the first to be fired at the boss's will, or laid off in difficult times: how can they be loyal?

And how can your regularly employed administrators remain loyal too? When they know that the company will lay off 20% or 30% of the people, and first the ones which are considered "superfluous", because they don't directly bring revenues or added value...

Randy • February 17, 2009 6:41 AM


It must be a slow day...

*Everyone* is finding all sorts of corrections and disagreements.

Although, I do agree with the proposed #6.


mirroreyes • February 17, 2009 6:42 AM

ck and Sauron have likely hit the nail on the head with lowering the probability of a Makwana. I would only add to Sauron's last words ("the company will lay off 20% or 30% of the people, and first the ones which are considered 'superfluous', because they don't directly bring revenues or added value ..."): whilst seeing their companies' big cats pull in billions in bonuses.

novatech • February 17, 2009 7:19 AM


“And how can your regularly employed administrators remain loyal too ? When they know that the company will lay off 20% or 30% of the people, and first the ones which are considered "superfluous", because they don't directly bring revenues or added value”

That may be true enough in large companies, but in smaller companies, even though the IT support staff are generally considered mere ‘overhead’, they can be essential to keep the company’s systems (and hence the company itself) up and running. Sometimes, just one key IT person can become a ‘linchpin’ that a small company can become dependent on.

It would be a foolish small company that gets into trading difficulties and decides to dispense with IT staff before considering other alternatives.

gbromage • February 17, 2009 7:53 AM

I'm confused.... didn't Fannie Mae's management do a pretty good job of trying to shut it down, without needing any help from the IT people? :-D

Clive Robinson • February 17, 2009 9:11 AM

@ ck, Sauron, novatech,

One point about IT staff and management.

IT staff don't speak business, and in general they expect management to understand what they do.

Management cuts your checks; have the courtesy to speak their language, and you might be in a better position to communicate what you do in a way that does not put "the man" to sleep, or allow your competition to "put one over on you" to your detriment.

Also, stop trying to sell security as a service etc. and trying to make up "ROI" figures; anybody "in the know" knows it's pure "weapons grade bullonium".

You want to get security into the workplace? Sell it as a "quality issue", which in fact is what it is to all intents and purposes.

Good management understands "quality" and why it's essential...

Albatross • February 17, 2009 10:14 AM

I was once approached by a large national nonprofit, who asked me to sit in on a disciplinary meeting with their senior network administrator. My purpose? To explain how the evidence left on the file server indicated that the senior network admin had been browsing personnel files.

They were extremely distressed when I explained that if they had lost trust in their senior network administrator, disciplining him and leaving him on the job was absolutely the worst thing they could do. They were completely dependent upon him and were intimidated by his attitude and behavior. I explained that their obligation to the privacy of their customers demanded that the untrusted net admin go. I walked them through the steps necessary to disable his access and walk him out of the building.

In so doing we discovered that he had a T1 link between the data center and his apartment, where all this organization's data and servers were mirrored.

I also learned a lesson when complaints started coming in regarding e-mail and web problems. I had missed the step of contacting business partners to inform them that the net admin was gone. After being let go, the net admin had called the local ISP and throttled the network bandwidth down to about 56K. Not drastic enough to be instantly detected.

Clive Robinson • February 17, 2009 10:48 AM

@ Bruce,

"It obviously doesn't have an audit process by which a change one person makes on the servers is checked by someone else; I'm sure that would be prohibitively expensive."

Don't confuse "audit" with "multipath security".

First off, I'm not sure audit is the way to go...

After Enron, the US government gave a blank sheet of paper to the audit industry and said "tell us what needs to be done to stop this happening". The resulting legislation (Sarbanes-Oxley) should have been able to spot the toxic trading etc. that has brought the likes of Fannie Mae to their knees.

But importantly, it did not...

The fact that the auditors did not pick up on it is very, very telling about the audit process in general.

That is, audit people are not as "smart" as some of those they are auditing, and don't have the depth of experience to spot new tricks (only known ones)...

So audit of other people's code is not going to stop some smart people slipping stuff past the code auditors, either in one go or, more likely, in little bits in different tools etc.

Also, audit is a slow and expensive process which is invariably talent-wasting and does not attract the talent (as there are effectively no rewards).

Therefore another way has to be considered and taken.

One way to do multipath security is as is done in safety-critical systems.

Different non-communicating teams produce code based on the same specification (which has undergone some level of formal qualification).

The different sets of code are run in parallel with the same input. Therefore their outputs should be the same under all conditions (if not, there is a serious issue).

Generally an automatic "voting system" picks up on differences and raises an alarm when one path differs from the others.

Usually in safety-critical systems they automatically "vote" on the correct action by selecting the majority choice. To "vote" usually requires three or more independent paths.

However, to just raise an alarm only two paths are required.

Now in mission-critical systems there are usually two or more systems running the same code and receiving the same input. If one fails, the other continues.

Therefore implementing a system using two sets of different code is, after development, actually not going to be that much more expensive than the systems already in place.

The advantage of multipath security is that, whichever way is chosen (alarm/vote/both), unlike audit it checks the results as they happen, and will pick up all sorts of things as and when they occur, not some considerable time later (or not at all).

As such it is a much finer-mesh net than audit and will catch much more, usually at much less cost than audit...
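
The voting scheme described above can be sketched in a few lines (a hypothetical toy, with invented "team" functions, not code from any real safety-critical system). Independently written implementations process the same input; a supervisor takes the majority answer and raises an alarm naming any path that disagrees.

```python
# Sketch of N-version majority voting, as used in safety-critical systems:
# run independently written implementations of the same spec on the same
# input, select the majority output, and alarm on any dissenting path.

from collections import Counter

def vote(implementations, value):
    results = [impl(value) for impl in implementations]
    majority, count = Counter(results).most_common(1)[0]
    if count < len(results):  # at least one path disagreed
        dissenters = [i for i, r in enumerate(results) if r != majority]
        print(f"alarm: paths {dissenters} disagree")
    return majority

# Three "teams" implement the same spec (double the input); team C has
# a planted fault that only triggers on negative numbers.
team_a = lambda x: x * 2
team_b = lambda x: x + x
team_c = lambda x: x * 2 if x >= 0 else 0
```

On well-behaved input all three paths agree; the first negative input exposes team C's divergence at run time, which is exactly the "finer mesh" advantage over after-the-fact audit.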

Malvolio • February 17, 2009 11:16 AM

"6. People are the most important aspect of your security. Pay and manage your employees in sensitive positions well, care that they are happy and remain loyal. Avoid making them disgruntled. Have a sensible human resources policy."

Without this one, all the others are worthless. In a rapidly evolving system, even a junior developer or administrator can destroy your business if he is both clever enough and pissed off enough. The only choices are (1) trust to luck that all your employees are stupid or happy or both; (2) slow down your growth so that you are only reliant on a tiny priesthood of senior, well-trusted admins and developers and hope that they are stupid or happy or both; or (3) establish a culture where employees don't feel like they are in an all-out war with their own employers.

The last seems like the obvious wise choice to me, but most companies make a different decision.

Robert In San Diego • February 17, 2009 11:17 AM

Albatross, you're scaring me. Reminds me of how I stopped volunteering for one non-profit's youth program because every time I came in they wanted me to fill out the four page volunteer enrollment form. "Don't you have this on the computer?" "It had a virus." The second time this routine went down, I stopped volunteering.

kangaroo • February 17, 2009 2:08 PM

And Bruce misses the key point. "Trusted" people should be deeply invested in the organization, at whatever level they are. That's the only real protection. If you give the key to the building to the janitor, then treat him as a "contractor" with no job security, constant harassment and no future -- well, what do you expect?

Convergent interest. It's particularly easy to do with low-level employees who have limited opportunity to create massive frauds, and much to gain with stable and interesting work.

But that would require dismissing the entire concept of "Human Resources" upon which we've built our house of cards.

Randy • February 17, 2009 2:38 PM


It seems to me that two-person teams would suffice. One (or the other) does the work, then the other *reviews* the code. Now *both* reputations are on the line.

I'm not sure that implementing two identical code bases will be rewarding enough for many developers as well. I understand that some mission critical code can be done this way, but *not* because of security or auditing.

OTOH, I may not understand what you were driving at.

Regardless, this topic sure struck a nerve. I wonder why? Is someone here trying to hide something?


Stu Henderson • February 17, 2009 2:45 PM

Mr. Schneier, thanks for your Wall St. Journal article on how to prevent insider hacks, and for describing what happened at Fannie Mae. Your comments were brief and practical. They gave valuable advice to people who need it.

I was struck by your comment that it would be prohibitively expensive to have a second person review any changes made to the servers. In many shops I've seen, such Change Control and Quality Assurance or Production Turnover pays for itself. There are fewer system problems in shops that have such a function, and fewer negative side effects from someone making a change without being aware of the effect it might have on other parts of the system.

Formal Change Control makes it possible to roll a change back, if it turns out to have problems. Change Control also provides a single chokepoint if you want to be sure that every relevant party gets to review any change before it is implemented.

Formal Change Control also means that you can have only one userid with update access to system libraries: the change control id. This reduces the need to have several administrators, all of whom have write access to critical datasets.

Of course, the expense of formal Change Control and QA can be more or less depending upon how it's implemented.

I'm not disagreeing with you here; just suggesting that it's worth a second look if change control seems prohibitively expensive at first glance.

And while I have your attention, thanks for what you've done, and are doing, to bring practical common sense and more complete knowledge to the IT security community. We'd all be better off if we made better use of the sensible advice you give us so freely. Thanks.

Best regards, Stu

Stu Henderson (301) 229-7187 SITE= fax=301-229-3958

neill • February 17, 2009 3:07 PM

If they had good backup hardware and datasets, they would lose only a few minutes switching over.

backup, backup, backup ...

Davi Ottenheimer • February 17, 2009 3:32 PM

"It obviously doesn't have an audit process by which a change one person makes on the servers is checked by someone else; I'm sure that would be prohibitively expensive."

On the contrary, two weeks prior to his termination he had his ability to push code to production restricted. Thus his level of trust was reduced and his code was meant to be checked by someone else.

I also noted you do not mention behavioral analysis and surveillance controls.

How about in addition to "Detect breaches of trust after the fact" we also try to detect changes in behavior or identify patterns and red flags before trust is breached?

For example, how would you treat a series of email messages to India informing his relatives to stay out of the US? This was discovered in an investigation after his termination, but it just as easily could have been flagged before and helped prevent the script from being installed.

Clive Robinson • February 17, 2009 6:33 PM

@ Randy,

"OTOH, I may not understand what you were driving at."

The principle is quite simple, like that of using two separate ledgers. The results should balance; if they don't, there is a problem.

Two or more software teams are given identical functional specifications, and ideally they are not aware of the other teams, so should write independent code. For non-trivial code the teams will go about doing things in different ways.

If the specifications are correct and sufficiently detailed, then the code the independent teams write will be different but have similar if not identical responses to the same input.

If one team's code does something different, then there is something wrong, which should raise a red flag: either it is accidental (a bug) or deliberate. If deliberate, then it boils down to either malicious or innocent (a specification/interpretation issue).

The problem with code audits is I know that they simply do not work...

Software developers are more like craftsmen than engineers, and they don't take kindly to criticism at the best of times. And frequently they have a higher view of their abilities than others do.

By and large software developers are very bad at reviewing their own code, let alone starting in on the thankless task of reviewing their peers' code.

Further, there are very, very few software developers who have the ability to spot subtle problems in code, especially when obfuscated by someone who has a trick or two up their sleeve.

For instance, how many would spot the passing of data in a buffer that has been de-allocated and then re-allocated but not initialised?

Or other more subtle tricks?

Auditing is also a very time-consuming and mind-numbing task, and is frequently given to the person (probably) least qualified to do it.

I have on one or two occasions deliberately put code in with subtle features just to see if the code audit picked it up (they never did). Which has made me very wary of "code audits" as anything other than "check box" exercises.

However, no matter how subtle a coding trick, it will at some point behave differently from code from another team, at which point the red flag goes up and the "issue" will be found before too much harm can be done.

Sambaiah Kilaru • February 18, 2009 12:07 AM

As you mentioned, insiders are a big threat, and system and network administrators pose an even bigger one. The Unix System Administration Handbook made the point long ago: don't give system administrators a notice period; terminate all their access immediately.

MysticKnightoftheSea • February 18, 2009 3:25 AM

As logic bombs go, I think what happened after my wife left (was 'encouraged to leave' because she knew too much...) tops the list. My wife had the job of maintaining the database, making sure accurate information was contained within (good employee). She and her immediate superior had the only clearances to access the DB 'officially'. Due to workload, my wife was told to provide access under her bona fides to a co-worker. Somewhere along the line the upper management decided to get a little shady and started misquoting statistics for their own ends.

*Bing* my wife's superior gets the axe. When it was discovered that my wife had integrity, she was railroaded out the door as well. Remember, these were the only two with security clearances, i.e. UN/PWs set up for the database. My wife's boss never shared her credentials, and my wife called down to the data center and told them she was leaving the company and could you please cancel my UN/PW? The IT folks did so.

What has this got to do with the price of rice?

About a week later it gets back to my wife that the powers that be were asking her co-worker to get into the database to log/retrieve some data, and she couldn't (remember, she was using my wife's credentials). The powers that be made noises about my wife sabotaging the system, when all she did was do the right thing. It didn't occur to them to call IT and find out what happened, at least not immediately.

Apparently, they got it figured out, because they never brought any action against her (perhaps worried about a wrongful dismissal counter-suit?). Besides, as I recall they got their comeuppance.

This was some twenty or so years ago, and the use of computers was still unclear to (at least this branch of) management.


Randy • February 18, 2009 7:59 AM


I see a few issues that may gum up the works of your idea.

Talk about expensive! Hiring *two* whole teams. I don't know *any* company that could afford that, except for mission-critical and safety-critical devices like the Space Shuttle or X-ray machines. And even then, I believe these have redundant *hardware* to prevent catastrophes.

The odds of their output matching 100% would *never* happen, so there will also have to be a team that reconciles the differences and always forces the two teams to change their output so they match. This will kill their creativity since they will have no control.

And they will eventually find out about each other, which would kill both team's ambition to work hard.

Randy -- unlessimstillnotgettingit

Jake • February 18, 2009 1:42 PM

"Certainly the company's IT department should have terminated Makwana's network access as soon as he was fired, and not at the end of the day."

I would set up a dead-man's switch, then. Unless I log in at least once every N hours, the logic bomb is activated.

Elvis • February 18, 2009 4:25 PM


Having the teams know about each other isn't a problem - set it up as a competition.

You compare the two teams by running their output through the same set of tests. Assuming that both pass the tests (i.e. produce the same output under many different situations), you can pick the one that performs better. It's a way of adding redundancy to the design process, finding faults easily, and selecting better designs.
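What Elvis describes is essentially differential testing: feed the same inputs to two independently written implementations and flag any case where they disagree. A minimal sketch, where the two `impl_team_*` functions and the input set are hypothetical stand-ins for the two teams' code:

```python
# Differential testing: run two independent implementations of the same
# spec over identical inputs and report every input where they diverge.

def impl_team_a(x: int) -> int:
    # Stand-in for team A's implementation of the spec (absolute value).
    return abs(x)

def impl_team_b(x: int) -> int:
    # Stand-in for team B's independently written implementation.
    return x if x >= 0 else -x

def differential_test(inputs):
    """Return the list of inputs on which the two implementations disagree."""
    return [x for x in inputs if impl_team_a(x) != impl_team_b(x)]

mismatches = differential_test(range(-100, 101))
print(f"{len(mismatches)} divergent inputs")  # prints "0 divergent inputs"
```

An empty mismatch list doesn't prove both are correct, only that they agree on the tested inputs; a non-empty list pinpoints exactly where one team diverged from the spec.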

Clive Robinson • February 19, 2009 1:13 PM

@ Randy,

"The odds of their output matching 100% would *never* happen,"

That depends on what you mean by 100%.

If the specification says the data entered in this format goes in this field of the DB entry...

Then I hope that both bits of code do exactly that or one of the teams needs to take up another occupation.

IF (and it's a very big if) you produce the specification correctly, then there will be "way points" which can be checked by the "supervising process". Irrespective of the direction the individual parts of the code take, they should go through these way points in the same way each and every time. If they don't, or one bit of code goes through a way point the other code does not, then there most definitely is a problem that needs to be resolved.

Provided the way points are picked to be at the security-sensitive points, any "extra code activity" has to be explained.

As for the cost, it is probably cheaper to employ two sets of reasonable development teams than it is to employ one adequate development team and one "good code audit" team.

As for the hardware redundancy, I have already referred to this in my previous posting.

The important point is that running both teams' code at the same time provides a dynamic way of finding faults as well, especially with "maintenance" changes.

For instance, if the data input format is changed and one team's code behaves differently, then this will become apparent the first time "problem data" is used. The supervisory code picks it up at a way point and raises a flag.

In essence the way points and supervisory code checking them acts in the same way as a quality control system on a production line.

Obviously not all programs are amenable to this type of supervisory process, but most "back end" and "mission critical" code is.

At the end of the day "you pay your money and you make your choice".

But having been through several code audits, I can assure you they really only pick up "the known suspects" when it comes to security.

And worse, it is a "static" process that often only happens as part of the initial code-release QA, and not for subsequent "maintenance" updates.

Sometimes you have to accept that what appears "worse" can often be a considerable improvement.

As I have said before many times, "software engineer" is not the correct term to use for your average code cutter; "software artist/artisan" would be better.

Likewise, security is not something you bolt on as an afterthought; like quality, it needs to be part of the entire production process.

Which gives most code bases and development processes a significant problem. How do you get from bad to good without re-doing everything that has gone before?

The answer is by a process of instillation, not installation, and by supervising what has gone before until it is replaced.

Peter E. Retep • February 20, 2009 6:05 PM

re: "It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 [X] servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. [X] would have been shut down for at least a week."

What you describe has been fielded before in many dynamic forms as an anti-security virus over the past three years. I was one of the first to trap a variant myself.

What's new is that now it can be lodged on top of the e-mail notifying system. Today I caught trace of one doing a DoS on Federal agency portals, and discovered a second Federal system bug:

Since it rode in on an "FBI"-Nigeria scam e-mail, the ISP wasn't even interested in the fact that a super-parasite virus was exploiting a vulnerability on their servers even at the WebMail level - i.e. their focus was distracted completely by the content.

Conversely, going to CERT/FBI, the portals require info only a believing VICTIM could give after being e-mugged, so there was no way to notify any agency, it being a System Bug of their Portals' input threshold design.

So there was no immediate way to give any of them any early notification BEFORE it propagates a disaster.

Kurt Vonnegut could have made a fine novel out of this. Nap's Cradle?

jose • February 21, 2009 5:54 PM

Good for Rajendrasinh Makwana, because some dirty capitalist companies deserve a lesson in respect from their employees. Good luck to Rajendrasinh Makwana; he has taught these bad people a good lesson, because they fired him without reason.

Stew Baby • March 15, 2009 4:49 PM

Another technique (NOT!) is to send the following email to your system administrators on a Friday afternoon...

To EDS Business Unit Employees in the United States and Puerto Rico:
Since becoming part of HP last year, we have accomplished a great deal and should be proud of what we have delivered together. Our service excellence remains high, and we have closed a number of significant deals. Our execution during the transition phase has been outstanding. As we move from the integration phase into the transformation phase, we know from experience with our own client projects this will be the most difficult part of our journey.
Our goal is to transform the business into a future state, which will grow faster than the market and enable us to take share from our competitors. We will then be able to deliver above-industry benchmark returns to our shareholders and price deals that win more business while providing flexibility to invest in innovation, delivering greater value to our clients.
The gap between where we are today and accomplishing our goals is widened by the current economic climate. As a result, we need to take temporary actions to get us through this difficult period. Our customers expect EDS to be a financially strong partner and, as employees, we expect a healthy company as well. With this in mind, we announced specific actions on March 9 to reduce our cost structure and enable the business to improve operating profit and grow as we enter fiscal year 2010.
Unfortunately, we need to take additional action. Specifically, we have decided to make a temporary, additional reduction in base salary affecting EDS business unit employees in the United States and Puerto Rico.
Base salaries for all United States and Puerto Rico employees in the EDS business unit will be temporarily reduced beyond those reductions previously announced by HP on February 18, as follows:
An additional, temporary reduction of 10 percent in base salary effective for April 2009
Base salary will not be reduced for employees below an annualized, full-time equivalent income of $40,000 by this additional, temporary action
In May 2009, base salary for United States and Puerto Rico employees in the EDS business unit will be reinstated to the levels of base salary effective on March 16. This includes reductions previously outlined in HP’s February 18 announcement. While we have no plans for an additional base salary reduction, we will continue to closely monitor the performance of our business and make further adjustments as required in the coming months.
We recognize these are tough actions, and you can be assured we made this decision after much thought and assessment. We ask for your support and understanding as we work through these very difficult times. We are confident we will strengthen our position in a consolidating market. We will be one of the industry’s strongest and safest pairs of hands, trusted by our clients to solve their technology challenges.
The EDS Senior Leadership Team

