Entries Tagged "insiders"


$100 to Put a Bomb on an Airplane

An undercover TSA agent successfully bribed a JetBlue ticket agent to check a suitcase under a random passenger’s name and put it on an airplane.

As with a lot of these tests, I’m not that worried because it’s not a reliable enough tactic to build a plot around. But untrustworthy airline personnel—or easily bribeable airline personnel—could be used in a smarter and less risky plot.

Posted on January 28, 2011 at 1:40 PM

Gift Cards and Employee Retail Theft

Retail theft by employees has always been a problem, but gift cards make it easier:

At the Saks flagship store in Manhattan, a 23-year-old sales clerk was caught recently ringing up $130,000 in false merchandise returns and siphoning the money onto a gift card.

[…]

Many of the gift card crimes are straightforward, frequently involving young sales clerks and smaller amounts than the Saks theft. Among the variations of such crimes, cashiers often do fake refunds of merchandise and then, with the amount refunded, use their registers to electronically fill gift cards, which they take. Or sometimes when shoppers buy gift cards, cashiers give them blank cards and then divert the shoppers’ money onto cards for themselves.

That last tactic is particularly Grinch-like.

Posted on January 7, 2010 at 5:46 AM

Don't Let Hacker Inmates Reprogram Prison Computers

You’d think this would be obvious:

Douglas Havard, 27, serving six years for stealing up to £6.5million using forged credit cards over the internet, was approached after governors wanted to create an internal TV station but needed a special computer program written.

He was left unguarded and hacked into the system’s hard drive at Ranby Prison, near Retford, Notts. Then he set up a series of passwords so no one else could get into the system.

And you shouldn’t give a prisoner who is a lockpicking expert access to the prison’s keys, either. No, wait:

The blunder emerged a week after the Sunday Mirror revealed how an inmate at the same jail managed to get a key cut that opened every door.

Next week: inmate sharpshooters in charge of the prison’s gun locker.

Posted on October 6, 2009 at 2:32 PM

Subpoenas as a Security Threat

Blog post from Ed Felten:

Usually when the threat model mentions subpoenas, the bigger threats in reality come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp’s servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.

So why talk about subpoenas rather than intruders or insiders? Perhaps this kind of talk is more diplomatic than the alternative. If I’m talking about the risks of Gmail, I might prefer not to point out that my friends at Google could hire someone who is less than diligent, or less than honest. If I talk about subpoenas as the threat, nobody in the room is offended, and the security measures I recommend might still be useful against intruders and insiders. It’s more polite to talk about data losses that are compelled by a mysterious, powerful Other—in this case an Anonymous Lawyer.

Politeness aside, overemphasizing subpoena threats can be harmful in at least two ways. First, we can easily forget that enforcement of subpoenas is often, though not always, in society’s interest. Our legal system works better when fact-finders have access to a broader range of truthful evidence. That’s why we have subpoenas in the first place. Not all subpoenas are good—and in some places with corrupt or evil legal systems, subpoenas deserve no legitimacy at all—but we mustn’t lose sight of society’s desire to balance the very real cost imposed on the subpoena’s target and affected third parties, against the usefulness of the resulting evidence in administering justice.

The second harm is to security. To the extent that we focus on the subpoena threat, rather than the larger threats of intruders and insiders, we risk finding “solutions” that fail to solve our biggest problems. We might get lucky and end up with a solution that happens to address the bigger threats too. We might even design a solution for the bigger threats, and simply use subpoenas as a rhetorical device in explaining our solution—though it seems risky to mislead our audience about our motivations. If our solution flows from our threat model, as it should, then we need to be very careful to get our threat model right.

Posted on September 4, 2009 at 6:18 AM

John Walker and the Fleet Broadcasting System

Ph.D. thesis from 2001:

An Analysis of the Systemic Security Weaknesses of the U.S. Navy Fleet Broadcasting System, 1967-1974, as exploited by CWO John Walker, by MAJ Laura J. Heath

Abstract: CWO John Walker led one of the most devastating spy rings ever unmasked in the US. Along with his brother, son, and friend, he compromised US Navy cryptographic systems and classified information from 1967 to 1985. This research focuses on just one of the systems compromised by John Walker himself: the Fleet Broadcasting System (FBS) during the period 1967-1975, which was used to transmit all US Navy operational orders to ships at sea. Why was the communications security (COMSEC) system so completely defenseless against one rogue sailor, acting alone? The evidence shows that FBS was designed in such a way that it was effectively impossible to detect or prevent rogue insiders from compromising the system. Personnel investigations were cursory, frequently delayed, and based more on hunches than hard scientific criteria. Far too many people had access to the keys and sensitive materials, and the auditing methods were incapable, even in theory, of detecting illicit copying of classified materials. Responsibility for the security of the system was distributed between many different organizations, allowing numerous security gaps to develop. This has immediate implications for the design of future classified communications systems.

EDITED TO ADD (9/23): I blogged about this in 2005. Apologies; I forgot.

Posted on June 23, 2009 at 1:30 PM

Industry Differences in Types of Security Breaches

Interhack has been working on a taxonomy of security breaches, and has an interesting conclusion:

The Health Care and Social Assistance sector reported a larger than average proportion of lost and stolen computing hardware, but reported an unusually low proportion of compromised hosts. Educational Services reported a disproportionally large number of compromised hosts, while insider conduct and lost and stolen hardware were well below the proportion common to the set as a whole. Public Administration’s proportion of compromised host reports was below average, but their proportion of processing errors was well above the norm. The Finance and Insurance sector experienced the smallest overall proportion of processing errors, but the highest proportion of insider misconduct. Other sectors showed no statistically significant difference from the average, either due to a true lack of variance, or due to an insignificant number of samples for the statistical tests being used.

Full study is here.

Posted on June 10, 2009 at 6:18 AM

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security works, and its weak points. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud.

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the previous four techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrence effect and increase the overall level of security in society. This is why audit is so vital.
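To make techniques 3 through 5 concrete, here is a minimal sketch in Python of how a high-value action might require a second, independent approver and leave an audit trail behind it. The function names, the dollar threshold, and the log format are hypothetical illustrations, not any particular organization’s controls.

import json
import time
from typing import Optional

AUDIT_LOG = "approvals.log"        # append-only in any real deployment
DUAL_CONTROL_LIMIT = 10_000        # hypothetical: above this, two people must agree

def audit(event: dict) -> None:
    # Technique 5: record who did what, and when, for later review.
    event["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def authorize_transfer(amount: float, requester: str,
                       approver: Optional[str] = None) -> bool:
    # Technique 3: the requester alone can only act within a small limit.
    # Technique 4: larger amounts need a second, distinct person to sign off.
    if amount <= DUAL_CONTROL_LIMIT:
        approved = True
    else:
        approved = approver is not None and approver != requester
    audit({"requester": requester, "approver": approver,
           "amount": amount, "approved": approved})
    return approved

# A large transfer fails without a distinct second approver.
assert authorize_transfer(50_000, "alice") is False
assert authorize_transfer(50_000, "alice", approver="bob") is True

None of this stops two colluding insiders, which is why the audit trail in technique 5, reviewed by someone who is not a party to the transaction, still matters.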

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM

Audit

As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt’s immigration status. And in November, Verizon employees peeked at his cell phone records.

What these three incidents illustrate is not that computerized databases are vulnerable to hacking—we already knew that, and anyway the perpetrators all had legitimate access to the systems they used—but how important audit is as a security measure.

When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.

Most security against crime comes from audit. Of course we use locks and alarms, but we don’t wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that’s audit.

Audit helps ensure that people don’t abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store’s money. To ensure they don’t skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.
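In code, that end-of-day reconciliation might look something like the following sketch; the $200 opening float, the data structures, and the numbers are made up for illustration.

# Each transaction the register rings up becomes an audit record.
transactions = []   # entries: (clerk, kind, amount)

def ring_up(clerk: str, kind: str, amount: float) -> None:
    transactions.append((clerk, kind, amount))

def expected_in_till(opening_float: float = 200.00) -> float:
    # What the audit trail says should be in the drawer at closing.
    total = opening_float
    for _clerk, kind, amount in transactions:
        total += amount if kind == "sale" else -amount
    return total

ring_up("alice", "sale", 40.00)
ring_up("alice", "refund", 15.00)

counted_cash = 210.00   # what the owner actually finds at the end of the day
shortfall = expected_in_till() - counted_cash
if shortfall > 0:
    print(f"Till is short ${shortfall:.2f}; review the logged transactions.")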

The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don’t abuse this power, we put in place systems of audit like the warrant process.

The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.

That would be an incredibly bad idea. Law enforcement systems that don’t have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power—because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.

For computerized database systems like that—systems entrusted with other people’s information—audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don’t abuse that trust. It’s the same with IRS records, credit reports, police databases, telephone records—anything personal that someone might want to peek at during the course of his job.
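Stripped to its essentials, such an audit record is just an access log that can be queried after the fact. Here is a minimal sketch, with hypothetical table and column names, using Python’s built-in sqlite3 module.

import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE access_log ("
           "staff_id TEXT, patient_id TEXT, accessed_at TEXT, reason TEXT)")

def log_access(staff_id: str, patient_id: str, reason: str) -> None:
    # Record who looked at which record, when, and why.
    db.execute("INSERT INTO access_log VALUES (?, ?, ?, ?)",
               (staff_id, patient_id,
                datetime.now(timezone.utc).isoformat(), reason))

log_access("nurse-17", "patient-42", "medication check")
log_access("clerk-03", "patient-42", "")   # no stated reason: worth a second look

# The question audit exists to answer: who peeked at this record, and when?
for staff_id, accessed_at, reason in db.execute(
        "SELECT staff_id, accessed_at, reason FROM access_log "
        "WHERE patient_id = ?", ("patient-42",)):
    print(staff_id, accessed_at, reason or "(no reason given)")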

Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The differences in how they played out are due to differences in audit. The State Department’s audit worked best; they had alarm systems in place that alerted superiors when Obama’s passport files were accessed, and by whom. Verizon’s audit mechanisms worked less well; they discovered the inappropriate account access and have narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don’t know who accessed the information.

Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don’t abuse the power they’re entrusted with is through audit. Without it, we will simply never know who’s peeking at what.

This essay first appeared on the Wall Street Journal website.

Posted on December 10, 2008 at 2:21 PM
