Entries Tagged "physical security"


Jackpotting Attacks Against US ATMs

Brian Krebs is reporting sophisticated jackpotting attacks against US ATMs. The attacker gains physical access to the ATM, plants malware using specialized electronics, and then later returns and forces the machine to dispense all the cash it has inside.

The Secret Service alert explains that the attackers typically use an endoscope—a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body—to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.
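That dispense rate makes it easy to estimate how quickly a machine can be drained. A back-of-the-envelope sketch using the alert's figures (the cassette count and capacity below are illustrative assumptions, not numbers from the alert):

```python
# Rough estimate of how long a jackpotted ATM takes to empty.
# The dispense rate comes from the Secret Service alert; the
# cassette count and capacity are assumptions for illustration.
BILLS_PER_CYCLE = 40
SECONDS_PER_CYCLE = 23

def time_to_empty(total_bills: int) -> float:
    """Return the dispense time in minutes for a given bill count."""
    cycles = total_bills / BILLS_PER_CYCLE
    return cycles * SECONDS_PER_CYCLE / 60

# e.g. four cassettes of 2,000 bills each (an assumed configuration)
print(f"{time_to_empty(4 * 2000):.0f} minutes")  # prints "77 minutes"
```

Even a fully stocked machine, on those assumptions, empties in under an hour and a half.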

Lots of details in the article.

Posted on February 1, 2018 at 6:23 AM

Turning an Amazon Echo into an Eavesdropping Device

For once, the real story isn’t as bad as it seems. A researcher has figured out how to install malware onto an Echo that causes it to stream audio back to a remote controller, but:

The technique requires gaining physical access to the target Echo, and it works only on devices sold before 2017. But there’s no software fix for older units, Barnes warns, and the attack can be performed without leaving any sign of hardware intrusion.

The way to implement this attack is by intercepting the Echo before it arrives at the target location. But if you can do that, there are a lot of other things you can do. So while this is a vulnerability that needs to be fixed—and seems to have inadvertently been fixed—it’s not a cause for alarm.

Posted on August 10, 2017 at 1:54 PM

Clever Physical ATM Attack

This is an interesting combination of computer and physical attack:

Researchers from the Russian security firm Kaspersky on Monday detailed a new ATM-emptying attack, one that mixes digital savvy with a very precise form of physical penetration. Kaspersky’s team has even reverse engineered and demonstrated the attack, using only a portable power drill and a $15 homemade gadget that injects malicious commands to trigger the machine’s cash dispenser. And though they won’t name the ATM manufacturer or the banks affected, they warn that thieves have already used the drill attack across Russia and Europe, and that the technique could still leave ATMs around the world vulnerable to having their cash safes disemboweled in a matter of minutes.

“We wanted to know: To what extent can you control the internals of the ATM with one drilled hole and one connected wire? It turns out we can do anything with it,” says Kaspersky researcher Igor Soumenkov, who presented the research at the company’s annual Kaspersky Analyst Summit. “The dispenser will obey and dispense money, and it can all be done with a very simple microcomputer.”

Posted on April 5, 2017 at 6:29 AM

Security Lessons from a Power Saw

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:

By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice in all three of these the human does not have to do anything special, just use the device. This is how we need to think from a security perspective.

  • Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is actually exposed is when you lower the saw to actually cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means to hurt yourself you have to manually lower the blade with one hand then insert your hand into the cutting blade zone.
  • Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, to activate the saw you have to depress a lever. Let the lever go and the saw stops. This means if you fall, slip, black out, have a heart attack or any other type of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.
  • Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.

Safety is like security: you cannot eliminate risk. But this is a great example of how security can learn from other disciplines about taking people into account.
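The dead-man lever is a fail-closed design: without an affirmative, continuous signal, the system reverts to the safe state. The same principle applies to access-control code, where anything other than an explicit grant should deny. A minimal sketch of the idea (the function and ACL names here are illustrative, not from Spitzner's post):

```python
# Fail-closed access check: mirrors the saw that stops the moment
# the lever is released. Only an explicit True grants access; a
# missing entry, a None, or a malformed value all fall through to deny.
def is_allowed(user: str, acl: dict) -> bool:
    return acl.get(user) is True

acl = {"alice": True, "mallory": False}
print(is_allowed("alice", acl))    # prints "True"
print(is_allowed("unknown", acl))  # prints "False": deny by default
```

The point is that the safe outcome requires no extra action from the user or the programmer; denial is what happens when nothing else does.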

Posted on October 19, 2016 at 6:45 AM

Self-Destructing Computer Chip

The chip is built on glass:

Shattering the glass is straightforward. When the proper circuit is toggled, a small resistor within the substrate heats up until the glass shatters. According to Corning, it will continue shattering even after the initial break, rendering the entire chip unusable. The demo chip resistor was triggered by a photodiode that switched the circuit when a laser shone upon it. The glass plate quickly shattered into fragments once the laser touched it.

Posted on September 17, 2015 at 7:17 AM

Vulnerabilities in Brink's Smart Safe

Brink’s sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it’s wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it’s not running in administrative mode and the database can’t be changed, but so far the company appears to have done none of these things.


Again, this all sounds familiar. The computer industry learned its lessons over a decade ago. Before then, companies ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

Posted on August 3, 2015 at 1:27 PM

Human and Technology Failures in Nuclear Facilities

This is interesting:

We can learn a lot about the potential for safety failures at US nuclear plants from the July 29, 2012, incident in which three religious activists broke into the supposedly impregnable Y-12 facility at Oak Ridge, Tennessee, the Fort Knox of uranium. Once there, they spilled blood and spray painted “work for peace not war” on the walls of a building housing enough uranium to build thousands of nuclear weapons. They began hammering on the building with a sledgehammer, and waited half an hour to be arrested. If an 82-year-old nun with a heart condition and two confederates old enough to be AARP members could do this, imagine what a team of determined terrorists could do.

[…]

Where some other countries often rely more on guards with guns, the United States likes to protect its nuclear facilities with a high-tech web of cameras and sensors. Under the Nunn-Lugar program, Washington has insisted that Russia adopt a similar approach to security at its own nuclear sites—claiming that an American cultural preference is objectively superior. The Y-12 incident shows the problem with the American approach of automating security. At the Y-12 facility, in addition to the three fences the protestors had to cut through with wire-cutters, there were cameras and motion detectors. But we too easily forget that technology has to be maintained and watched to be effective. According to Munger, 20 percent of the Y-12 cameras were not working on the night the activists broke in. Cameras and motion detectors that had been broken for months had gone unrepaired. A security guard was chatting rather than watching the feed from a camera that did work. And guards ignored the motion detectors, which were so often set off by local wildlife that they assumed all alarms were false positives….

Instead of having government forces guard the site, the Department of Energy had hired two contractors: Wackenhut and Babcock and Wilcox. Wackenhut is now owned by the British company G4S, which also botched security for the 2012 London Olympics, forcing the British government to send 3,500 troops to provide security that the company had promised but proved unable to deliver. Private companies are, of course, driven primarily by the need to make a profit, but there are surely some operations for which profit should not be the primary consideration.

Babcock and Wilcox was supposed to maintain the security equipment at the Y-12 site, while Wackenhut provided the guards. Poor communication between the two companies was one reason sensors and cameras were not repaired. Furthermore, Babcock and Wilcox had changed the design of the plant’s Highly Enriched Uranium Materials Facility, making it a more vulnerable aboveground building, in order to cut costs. And Wackenhut was planning to lay off 70 guards at Y-12, also to cut costs.

There’s an important lesson here. Security is a combination of people, process, and technology. All three have to be working in order for security to work.

Slashdot thread.

Posted on July 14, 2015 at 5:53 AM
