Entries Tagged "physical security"


Risks of Keyloggers on Public Computers

Brian Krebs is reporting that:

The U.S. Secret Service is advising the hospitality industry to inspect computers made available to guests in hotel business centers, warning that crooks have been compromising hotel business center PCs with keystroke-logging malware in a bid to steal personal and financial data from guests.

It’s actually a very hard problem to solve. The adversary can have unrestricted access to the computer, especially hotel business center computers that are often tucked away where no one else is looking. I assume that if someone has physical access to my computer, he can own it. This is doubly true if he has hardware access.

Posted on July 15, 2014 at 2:30 PM

1971 Social Engineering Attack

From Betty Medsger’s book on the 1971 FBI burglary (page 22):

As burglars, they used some unusual techniques, ones Davidon enjoyed recalling years later, such as what some of them did in 1970 at a draft board office in Delaware. During their casing, they had noticed that the interior door that opened to the draft board office was always locked. There was no padlock to replace, as they had done at a draft board raid in Philadelphia a few months earlier, and no one in the group was able to pick the lock. The break-in technique they settled on at that office must be unique in the annals of burglary. Several hours before the burglary was to take place, one of them wrote a note and tacked it to the door they wanted to enter: “Please don’t lock this door tonight.” Sure enough, when the burglars arrived that night, someone had obediently left the door unlocked. The burglars entered the office with ease, stole the Selective Service records, and left. They were so pleased with themselves that one of them proposed leaving a thank-you note on the door. More cautious minds prevailed. Miss Manners be damned, they did not leave a note.

Posted on February 5, 2014 at 6:02 AM

1971 FBI Burglary

Interesting story:

…burglars took a lock pick and a crowbar and broke into a Federal Bureau of Investigation office in a suburb of Philadelphia, making off with nearly every document inside.

They were never caught, and the stolen documents that they mailed anonymously to newspaper reporters were the first trickle of what would become a flood of revelations about extensive spying and dirty-tricks operations by the F.B.I. against dissident groups.

Video article. And the book.

Interesting precursor to Edward Snowden.

Posted on January 10, 2014 at 6:45 AM

"Military Style" Raid on California Power Station

I don’t know what to think about this:

Around 1:00 AM on April 16, at least one individual (possibly two) entered two different manholes at the PG&E Metcalf power substation, southeast of San Jose, and cut fiber cables in the area around the substation. That knocked out some local 911 services, landline service to the substation, and cell phone service in the area, a senior U.S. intelligence official told Foreign Policy. The intruder(s) then fired more than 100 rounds from what two officials described as a high-powered rifle at several transformers in the facility. Ten transformers were damaged in one area of the facility, and three transformer banks—or groups of transformers—were hit in another, according to a PG&E spokesman.

The article worries that this might be a dry run for some cyberwar-like attack, but that doesn’t make sense. Still, it’s too complicated and weird to be a prank.

Anyone have any ideas?

Posted on January 2, 2014 at 6:40 AM

Dry Ice Bombs at LAX

The news story about the guy who left dry ice bombs in restricted areas of LAX is really weird.

I can’t get worked up over it, though. Dry ice bombs are a harmless prank. I set off a bunch of them when I was in college, though I used liquid nitrogen because I was impatient, and they’re harmless. I know of someone who set a few off over the summer, just for fun. They do make a very satisfying boom.

Having them set off in a secure airport area doesn’t illustrate any new vulnerabilities. We already know that trusted people can subvert security systems. So what?

I’ve done a bunch of press interviews on this. One radio announcer really didn’t like my nonchalance. He really wanted me to complain about the lack of cameras at LAX, and was unhappy when I pointed out that we didn’t need cameras to catch this guy.

I like my kicker quote in this article:

Various people, including former Los Angeles Police Chief William Bratton, have called LAX the No. 1 terrorist target on the West Coast. But while an Algerian man discovered with a bomb at the Canadian border in 1999 was sentenced to 37 years in prison in connection with a plot to cause damage at LAX, Schneier said that assessment by Bratton is probably not true.

“Where can you possibly get that data?” he said. “I don’t think terrorists respond to opinion polls about how juicy targets are.”

Posted on October 23, 2013 at 5:35 AM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong and the machine portion doesn’t communicate well, the human portion trusts it a little less.
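The point about a machine communicating its state can be sketched as a toy badge tracker, loosely modeled on the story above. Every name here is a hypothetical illustration, not a real access-control API:

```python
# Toy sketch of a hybrid security system that explains its own state.
# When an expected transition (badge dropped in the return slot) doesn't
# happen, it records a human-readable alert for the guard instead of
# silently refusing. Hypothetical names throughout.

class BadgeTracker:
    """Tracks issued visitor badges and flags unexpected transitions."""

    def __init__(self):
        self.issued = set()   # badges currently checked out
        self.alerts = []      # messages the guard can actually act on

    def check_in(self, badge_id):
        self.issued.add(badge_id)

    def check_out(self, badge_id, badge_returned):
        if badge_id not in self.issued:
            # Unknown badge: name the anomaly rather than just balking.
            self.alerts.append(f"badge {badge_id} was never issued")
            return False
        if not badge_returned:
            # Expected state transition did not happen.
            self.alerts.append(f"badge {badge_id} left the building unreturned")
        self.issued.discard(badge_id)
        return badge_returned

tracker = BadgeTracker()
tracker.check_in("V-17")
tracker.check_out("V-17", badge_returned=False)
print(tracker.alerts)  # the guard sees *why* the gate complained
```

The design choice is the one the essay argues for: the machine’s internal checking can be as complicated as needed, but what it hands the human is a plain statement of which expected transition failed.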

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Scientists Banned from Revealing Details of Car-Security Hack

The UK has banned researchers from revealing details of security vulnerabilities in car locks. In 2008, Philips brought a similar suit against researchers who broke the Mifare chip. That time, the company lost. This time, Volkswagen sued and won.

This is bad news for security researchers. (Remember back in 2001, when security researcher Ed Felten sued the RIAA in the US to be able to publish his research results?) We’re not going to improve security unless we’re allowed to publish our results. And we can’t start suppressing scientific results just because a big corporation doesn’t like what they do to its reputation.

EDITED TO ADD (8/14): Here’s the ruling.

Posted on August 1, 2013 at 6:37 AM

NSA Implements Two-Man Control for Sysadmins

In an effort to lock the barn door after the horse has escaped, the NSA is implementing two-man control for sysadmins:

NSA chief Keith Alexander said his agency had implemented a “two-man rule,” under which any system administrator like Snowden could only access or move key information with another administrator present. With some 15,000 sites to fix, Alexander said, it would take time to spread across the whole agency.

[…]

Alexander said that server rooms where such data is stored are now locked and require a two-man team to access them—safeguards that he said would be implemented at the Pentagon and intelligence agencies after a pilot at the NSA.

This kind of thing has happened before. After USN Chief Warrant Officer John Walker sold encryption keys to the Soviets, the Navy implemented two-man control for key material.

It’s an effective, if expensive, security measure—and an easy one for the NSA to implement while it figures out what it really has to do to secure information from IT insiders.
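As a rough sketch, the software analogue of a two-man rule releases a privileged action only after approvals from two distinct administrators. The interface below is a hypothetical illustration, not the NSA’s actual mechanism:

```python
# Minimal sketch of a software two-person rule: a privileged action is
# authorized only after two *distinct* administrators approve it.
# Hypothetical illustration only.

class TwoManControl:
    def __init__(self, required=2):
        self.required = required
        self.approvers = set()   # a set, so duplicate approvals count once

    def approve(self, admin_id):
        self.approvers.add(admin_id)

    def authorized(self):
        return len(self.approvers) >= self.required

gate = TwoManControl()
gate.approve("admin-alice")
gate.approve("admin-alice")   # same person twice: still not enough
assert not gate.authorized()
gate.approve("admin-bob")     # a second, distinct administrator
assert gate.authorized()
```

The expensive part, as the post notes, isn’t the software check; it’s paying two people to be present for every privileged operation.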

Posted on July 24, 2013 at 6:18 AM

The Japanese Response to Terrorism

Lessons from Japan’s response to Aum Shinrikyo:

Yet what’s as remarkable as Aum’s potential for mayhem is how little of it, on balance, they actually caused. Don’t misunderstand me: Aum’s crimes were horrific, not merely the terrible subway gassing but their long history of murder, intimidation, extortion, fraud, and exploitation. What they did was unforgivable, and the human cost, devastating. But at no point did Aum Shinrikyo represent an existential threat to Japan or its people. The death toll of Aum was several dozen; again, a terrible human cost, but not an existential threat. At no time was the territorial integrity of Japan threatened. At no time was the operational integrity of the Japanese government threatened. At no time was the day-to-day operation of the Japanese economy meaningfully threatened. The threat to the average Japanese citizen was effectively nil.

Just as important was what the Japanese government and people did not do. They didn’t panic. They didn’t make sweeping changes to their way of life. They didn’t implement a vast system of domestic surveillance. They didn’t suspend basic civil rights. They didn’t begin to capture, torture, and kill without due process. They didn’t, in other words, allow themselves to be terrorized. Instead, they addressed the threat. They investigated and arrested the cult’s leadership. They tried them in civilian courts and earned convictions through due process. They buried their dead. They mourned. And they moved on. In every sense, it was a rational, adult, mature response to a terrible terrorist act, one that remained largely in keeping with liberal democratic ideals.

Posted on June 21, 2013 at 6:25 AM
