Entries Tagged "physical security"

Security Lessons from a Power Saw

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:

By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice that in all three of these, the human does not have to do anything special, just use the device. This is how we need to think from a security perspective.

  • Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is exposed is when you lower the saw to cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means that to hurt yourself, you have to manually lower the blade with one hand and then insert your other hand into the cutting zone.
  • Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, you activate it by depressing a lever. Let the lever go and the saw stops. This means that if you fall, slip, black out, have a heart attack, or have any other kind of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.
  • Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.

Safety is like security: you cannot eliminate risk. But I feel this is a great example of how security can learn from other fields about how to take people into account.
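The lever is what engineers call a dead-man's switch: the dangerous state requires continuous, affirmative input, and any interruption returns the system to the safe state. Here is a minimal sketch of the same fail-closed pattern in software (the class and method names are illustrative, not from any particular product):

```python
import time

class FailClosedActuator:
    """A dangerous operation that runs only while it is being
    actively re-authorized, and stops the moment it isn't --
    the software analogue of the saw's hold-to-run lever."""

    LEASE_SECONDS = 0.5  # how long a single authorization lasts

    def __init__(self):
        self._lease_expires = 0.0  # starts in the safe (off) state

    def hold(self):
        """Called continuously by the operator -- the 'lever'."""
        self._lease_expires = time.monotonic() + self.LEASE_SECONDS

    def is_active(self) -> bool:
        """True only while a recent hold() keeps the lease alive."""
        return time.monotonic() < self._lease_expires

actuator = FailClosedActuator()
actuator.hold()
assert actuator.is_active()      # runs while the lever is held
time.sleep(0.6)                  # operator lets go, slips, or crashes
assert not actuator.is_active()  # ...and the system stops on its own
```

Note that the design never asks "did something go wrong?"; it asks "is someone still affirmatively saying go?" The absence of an answer is itself the stop signal.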

Posted on October 19, 2016 at 6:45 AM

Self-Destructing Computer Chip

The chip is built on glass:

Shattering the glass is straightforward. When the proper circuit is toggled, a small resistor within the substrate heats up until the glass shatters. According to Corning, it will continue shattering even after the initial break, rendering the entire chip unusable. The demo chip's resistor was triggered by a photodiode that switched the circuit when a laser shone on it. The glass plate quickly shattered into fragments once the laser touched it.

Posted on September 17, 2015 at 7:17 AM

Vulnerabilities in Brink's Smart Safe

Brink’s sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it’s wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smart safe logs how much money is inside and who accessed it.
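Tamper-evident logging is a solved problem. Even a simple hash chain, where each entry's hash covers the previous entry's hash, makes silent after-the-fact edits detectable. A minimal sketch of the idea (the field names are mine, not Brink's actual schema):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, entry):
    """Append an entry whose hash covers both its own contents
    and the previous entry's hash, chaining the log together."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": digest})

def verify(log):
    """Recompute every hash; any edited, deleted, or reordered
    record breaks the chain from that point onward."""
    prev_hash = GENESIS
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"who": "clerk-7", "action": "deposit", "amount": 500})
append_entry(log, {"who": "tech-2", "action": "open-door"})
assert verify(log)

log[0]["entry"]["amount"] = 0  # the thief edits the database...
assert not verify(log)         # ...and verification catches it
```

A hash chain only detects tampering; an attacker with full control of the device can still rewrite the whole chain unless the latest hash is regularly anchored somewhere off the safe.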

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entry point for thieves to take complete administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable the driver software associated with the USB port to prevent someone from controlling the safes this way, or lock down the system and database so that the software isn't running in administrative mode and the database can't be changed, but so far the company appears to have done none of these things.

Again, this all sounds familiar. The computer industry learned its lesson over a decade ago. Before then, companies ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

Posted on August 3, 2015 at 1:27 PM

Human and Technology Failures in Nuclear Facilities

This is interesting:

We can learn a lot about the potential for safety failures at US nuclear plants from the July 29, 2012, incident in which three religious activists broke into the supposedly impregnable Y-12 facility at Oak Ridge, Tennessee, the Fort Knox of uranium. Once there, they spilled blood and spray painted “work for peace not war” on the walls of a building housing enough uranium to build thousands of nuclear weapons. They began hammering on the building with a sledgehammer, and waited half an hour to be arrested. If an 82-year-old nun with a heart condition and two confederates old enough to be AARP members could do this, imagine what a team of determined terrorists could do.

[…]

Where some other countries often rely more on guards with guns, the United States likes to protect its nuclear facilities with a high-tech web of cameras and sensors. Under the Nunn-Lugar program, Washington has insisted that Russia adopt a similar approach to security at its own nuclear sites, claiming that an American cultural preference is objectively superior. The Y-12 incident shows the problem with the American approach of automating security. At the Y-12 facility, in addition to the three fences the protestors had to cut through with wire-cutters, there were cameras and motion detectors. But we too easily forget that technology has to be maintained and watched to be effective. According to Munger, 20 percent of the Y-12 cameras were not working on the night the activists broke in. Cameras and motion detectors that had been broken for months had gone unrepaired. A security guard was chatting rather than watching the feed from a camera that did work. And guards ignored the motion detectors, which were so often set off by local wildlife that they assumed all alarms were false positives….

Instead of having government forces guard the site, the Department of Energy had hired two contractors: Wackenhut and Babcock and Wilcox. Wackenhut is now owned by the British company G4S, which also botched security for the 2012 London Olympics, forcing the British government to send 3,500 troops to provide security that the company had promised but proved unable to deliver. Private companies are, of course, driven primarily by the need to make a profit, but there are surely some operations for which profit should not be the primary consideration.

Babcock and Wilcox was supposed to maintain the security equipment at the Y-12 site, while Wackenhut provided the guards. Poor communication between the two companies was one reason sensors and cameras were not repaired. Furthermore, Babcock and Wilcox had changed the design of the plant’s Highly Enriched Uranium Materials Facility, making it a more vulnerable aboveground building, in order to cut costs. And Wackenhut was planning to lay off 70 guards at Y-12, also to cut costs.

There’s an important lesson here. Security is a combination of people, process, and technology. All three have to be working in order for security to work.

Slashdot thread.

Posted on July 14, 2015 at 5:53 AM

Easily Cracking a Master Combination Lock

Impressive.

Kamkar told Ars his Master Lock exploit started with a well-known vulnerability that allows Master Lock combinations to be cracked in 100 or fewer tries. He then physically broke open a combination lock and noticed that the resistance he observed was caused by two lock parts that touched in a way that revealed important clues about the combination. (He likened the Master Lock design to a side channel in cryptographic devices that can be exploited to obtain the secret key.) Kamkar then made a third observation that was instrumental to his Master Lock exploit: the first and third digits of the combination, when divided by four, always leave the same remainder. By combining the insights from all three weaknesses, he devised the attack laid out in the video.
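For a sense of scale: assuming a standard 40-position dial (my assumption; the article doesn't say), that mod-4 relationship alone cuts the raw keyspace by a factor of four. A quick sketch of the arithmetic:

```python
DIAL_POSITIONS = 40  # standard Master Lock dial -- an assumption here

# Naive keyspace: every possible (first, second, third) combination.
naive = DIAL_POSITIONS ** 3

# Kamkar's observation: the first and third digits leave the same
# remainder when divided by four.
constrained = sum(
    1
    for a in range(DIAL_POSITIONS)
    for b in range(DIAL_POSITIONS)
    for c in range(DIAL_POSITIONS)
    if a % 4 == c % 4
)

print(f"naive keyspace:        {naive}")        # 64000
print(f"with mod-4 constraint: {constrained}")  # 16000
```

On its own that is still far too many tries; the power of the attack comes from stacking this constraint on top of the 100-try vulnerability and the mechanical side channel.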

Posted on May 5, 2015 at 6:59 AM

Measuring the Expertise of Burglars

New research paper: “New methods for examining expertise in burglars in natural and simulated environments: preliminary findings”:

Expertise literature in mainstream cognitive psychology is rarely applied to criminal behaviour. Yet, if closely scrutinised, examples of the characteristics of expertise can be identified in many studies examining the cognitive processes of offenders, especially regarding residential burglary. We evaluated two new methodologies that might improve our understanding of cognitive processing in offenders through empirically observing offending behaviour and decision-making in a free-responding environment. We tested hypotheses regarding expertise in burglars in a small, exploratory study observing the behaviour of ‘expert’ offenders (ex-burglars) and novices (students) in a real and in a simulated environment. Both samples undertook a mock burglary in a real house and in a simulated house on a computer. Both environments elicited notably different behaviours between the experts and the novices with experts demonstrating superior skill. This was seen in: more time spent in high value areas; fewer and more valuable items stolen; and more systematic routes taken around the environments. The findings are encouraging and provide support for the development of these observational methods to examine offender cognitive processing and behaviour.

The lead researcher calls this “dysfunctional expertise,” but I disagree. It’s expertise.

Claire Nee, a researcher at the University of Portsmouth in the U.K., has been studying burglary and other crime for over 20 years. Nee says that the low clearance rate means that burglars often remain active, and some will even gain expertise in the crime. As with any job, practice results in skills. “By interviewing burglars over a number of years we’ve discovered that their thought processes become like experts in any field, that is they learn to automatically pick up cues in the environment that signify a successful burglary without even being aware of it. We call it ‘dysfunctional expertise,'” explains Nee.

See also this paper.

Posted on April 30, 2015 at 2:22 PM
