Entries Tagged "smartphones"


Interesting Attack on the EMV Smartcard Payment Standard

It’s complicated, but it’s basically a man-in-the-middle attack that involves two smartphones. The first phone reads the actual smartcard and forwards the required information to a second phone, which conducts the transaction at the POS terminal and convinces it to accept the transaction without the normally required PIN.

From a news article:

The researchers were able to demonstrate that it is possible to exploit the vulnerability in practice, although it is a fairly complex process. They first developed an Android app and installed it on two NFC-enabled mobile phones. This allowed the two devices to read data from the credit card chip and exchange information with payment terminals. Incidentally, the researchers did not have to bypass any special security features in the Android operating system to install the app.

To obtain unauthorized funds from a third-party credit card, the first mobile phone is used to scan the necessary data from the credit card and transfer it to the second phone. The second phone is then used to simultaneously debit the amount at the checkout, as many cardholders do nowadays. As the app declares that the customer is the authorized user of the credit card, the vendor does not realize that the transaction is fraudulent. The crucial factor is that the app outsmarts the card’s security system. Although the amount is over the limit and requires PIN verification, no code is requested.
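To make the structure concrete, here is a minimal sketch of the two-phone relay, written in Python rather than as an Android app and emphatically not the researchers’ code. The NFC exchange and the response-tampering step are hypothetical placeholders; which bytes get modified depends on the card brand and the terminal’s EMV kernel, and those details are deliberately omitted.

```python
import socket

RELAY_PORT = 9000  # arbitrary port linking the two phones (assumption)


def read_card_response(command: bytes) -> bytes:
    """Stub for the card-facing phone's NFC exchange with the victim's card.
    A real app would use the platform's NFC APIs here."""
    raise NotImplementedError


def tamper_with_response(response: bytes) -> bytes:
    """Placeholder for the step the article describes: altering the card's
    reply so the terminal believes the cardholder was already verified on
    the consumer device and therefore never asks for a PIN."""
    return response  # details deliberately omitted


def card_side_relay() -> None:
    """Runs on the phone held near the victim's card; the terminal-side
    phone connects and forwards the POS terminal's commands."""
    with socket.socket() as srv:
        srv.bind(("0.0.0.0", RELAY_PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                command = conn.recv(4096)   # command relayed from the POS
                if not command:
                    break
                response = read_card_response(command)
                conn.sendall(tamper_with_response(response))
```

The point of the sketch is only the architecture: the relay doesn’t break any cryptography; it forwards traffic and edits fields the terminal doesn’t authenticate.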

The paper: “The EMV Standard: Break, Fix, Verify.”

Abstract: EMV is the international protocol standard for smartcard payment and is used in over 9 billion cards worldwide. Despite the standard’s advertised security, various issues have been previously uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, running over 2,000 pages.

We formalize a comprehensive symbolic model of EMV in Tamarin, a state-of-the-art protocol verifier. Our model is the first that supports a fine-grained analysis of all relevant security guarantees that EMV is intended to offer. We use our model to automatically identify flaws that lead to two critical attacks: one that defrauds the cardholder and another that defrauds the merchant. First, criminals can use a victim’s Visa contactless card for high-value purchases, without knowledge of the card’s PIN. We built a proof-of-concept Android application and successfully demonstrated this attack on real-world payment terminals. Second, criminals can trick the terminal into accepting an unauthentic offline transaction, which the issuing bank should later decline, after the criminal has walked away with the goods. This attack is possible for implementations following the standard, although we did not test it on actual terminals for ethical reasons. Finally, we propose and verify improvements to the standard that prevent these attacks, as well as any other attacks that violate the considered security properties. The proposed improvements can be easily implemented in the terminals and do not affect the cards in circulation.

Posted on September 14, 2020 at 6:21 AM

The NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always-connected world.

News articles.

Posted on August 6, 2020 at 12:15 PM

Hacking Voice Assistants with Ultrasonic Waves

I previously wrote about hacking voice assistants with lasers. Turns out you can do much the same thing with ultrasonic waves:

Voice assistants—the demo targeted Siri, Google Assistant, and Bixby—are designed to respond when they detect the owner’s voice after noticing a trigger phrase such as ‘Ok, Google’.

Ultimately, commands are just sound waves, which other researchers have already shown can be emulated using ultrasonic waves which humans can’t hear, providing an attacker has a line of sight on the device and the distance is short.

What SurfingAttack adds to this is the ability to send the ultrasonic commands through a solid glass or wood table on which the smartphone was sitting using a circular piezoelectric disc connected to its underside.

Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.
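For background on the general principle (this is not the SurfingAttack code): inaudible commands are typically produced by amplitude-modulating ordinary speech onto an ultrasonic carrier, and the nonlinearity of the target’s microphone demodulates it back into the audible band. Here is a minimal sketch of that modulation step, with the carrier frequency, sample rate, and input file name all chosen as illustrative assumptions:

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 192_000   # high enough to represent the ultrasonic carrier
CARRIER_HZ = 25_000     # illustrative ultrasonic carrier frequency


def modulate(command: np.ndarray) -> np.ndarray:
    """Amplitude-modulate a mono, already-resampled voice command onto an
    ultrasonic carrier (classic AM with modulation depth 0.8)."""
    t = np.arange(len(command)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    baseband = command / np.max(np.abs(command))    # normalize to [-1, 1]
    return (1 + 0.8 * baseband) * carrier / 1.8     # rescale for WAV output


# Hypothetical input: a mono recording of a trigger phrase at SAMPLE_RATE.
_rate, voice = wavfile.read("ok_google_192k.wav")
wavfile.write("ultrasonic_command.wav", SAMPLE_RATE, modulate(voice.astype(float)))
```

SurfingAttack’s contribution, per the article, is the delivery path: coupling that signal into the table through a piezoelectric disc rather than playing it through the air.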

Research paper. Demonstration video.

Posted on March 23, 2020 at 6:19 AM

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
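To give a sense of how low the bar is, the following sketch passively logs the Wi-Fi MAC addresses of nearby devices using Scapy. It is illustrative only: the monitor-mode interface name is an assumption, and modern phones randomize the MAC addresses in their probe requests precisely to blunt this kind of tracking.

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

seen = set()  # MAC addresses observed so far


def log_probe(pkt) -> None:
    """Record the transmitter address of each Wi-Fi probe request."""
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2
        if mac and mac not in seen:
            seen.add(mac)
            print(f"device seen: {mac}")


# "wlan0mon" is an assumed monitor-mode interface name.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```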

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are—using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.

There is a huge—and almost entirely unregulated—data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies—and governments—to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
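A toy illustration of that fragility, with entirely made-up data: every event recorded under a pseudonymous cookie becomes attributable the moment the cookie is joined against a single record that carries a real name.

```python
# Pseudonymous browsing events, keyed only by a tracking cookie.
browsing_log = [
    {"cookie_id": "c-7f3a", "page": "/running-shoes", "ts": "2020-01-02T10:14"},
    {"cookie_id": "c-7f3a", "page": "/store-locator", "ts": "2020-01-03T18:02"},
    {"cookie_id": "c-7f3a", "page": "/checkout",      "ts": "2020-01-05T10:21"},
]

# One credit-card purchase ties the cookie to a billing name...
purchases = {"c-7f3a": {"name": "A. Smith", "card_last4": "1234"}}

# ...and the entire history under that cookie is retroactively de-anonymized.
for event in browsing_log:
    who = purchases.get(event["cookie_id"], {}).get("name", "unknown")
    print(who, event["page"], event["ts"])
```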

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law—passed in Vermont in 2018—that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s usage of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. That’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM

Smartphone Election in Washington State

This year:

King County voters will be able to use their name and birthdate to log in to a Web portal through the Internet browser on their phones, says Bryan Finney, the CEO of Democracy Live, the Seattle-based voting company providing the technology.

Once voters have completed their ballots, they must verify their submissions and then submit a signature on the touch screen of their device.

Finney says election officials in Washington are adept at signature verification because the state votes entirely by mail. That will be the way people are caught if they log in to the system under false pretenses and try to vote as someone else.

The King County elections office plans to print out the ballots submitted electronically by voters whose signatures match and count the papers alongside the votes submitted through traditional routes.

While advocates say this creates an auditable paper trail, many security experts say that because the ballots cross the Internet before they are printed, any subsequent audits on them would be moot. If a cyberattack occurred, an audit could essentially require double-checking ballots that may already have been altered, says Buell.

Of course it’s not an auditable paper trail. There’s a reason why security experts use the phrase “voter-verifiable paper ballots.” A centralized printout of a received Internet message is not voter verifiable.

Another news article.

Posted on January 27, 2020 at 6:03 AM

Technical Report of the Bezos Phone Hack

Motherboard obtained and published the technical report on the hack of Jeff Bezos’s phone, which is being attributed to Saudi Arabia, specifically to Crown Prince Mohammed bin Salman.

…investigators set up a secure lab to examine the phone and its artifacts and spent two days poring over the device but were unable to find any malware on it. Instead, they only found a suspicious video file sent to Bezos on May 1, 2018 that “appears to be an Arabic language promotional film about telecommunications.”

That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted this delayed or further prevented “study of the code delivered along with the video.”

Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.

“The amount of data being transmitted out of Bezos’ phone changed dramatically after receiving the WhatsApp video file and never returned to baseline. Following execution of the encrypted downloader sent from MBS’ account, egress on the device immediately jumped by approximately 29,000 percent,” it notes. “Forensic artifacts show that in the six (6) months prior to receiving the WhatsApp video, Bezos’ phone had an average of 430KB of egress per day, fairly typical of an iPhone. Within hours of the WhatsApp video, egress jumped to 126MB. The phone maintained an unusually high average of 101MB of egress data per day for months thereafter, including many massive and highly atypical spikes of egress data.”
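For context, the percentage figure is consistent with the raw numbers; here is a quick back-of-the-envelope check (assuming decimal megabytes, since the report doesn’t say):

```python
# Baseline vs. post-video egress figures from the report, and the implied increase.
baseline_kb_per_day = 430
after_mb_per_day = 126

after_kb_per_day = after_mb_per_day * 1000   # decimal MB assumed
increase_pct = (after_kb_per_day - baseline_kb_per_day) / baseline_kb_per_day * 100
print(f"{increase_pct:,.0f}%")               # ~29,200%, in line with the ~29,000% quoted
```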

The Motherboard article also quotes forensic experts on the report:

A mobile forensic expert told Motherboard that the investigation as depicted in the report is significantly incomplete and would only have provided the investigators with about 50 percent of what they needed, especially if this is a nation-state attack. She says the iTunes backup and other extractions they did would get them only messages, photo files, contacts and other files that the user is interested in saving from their applications, but not the core files.

“They would need to use a tool like Graykey or Cellebrite Premium or do a jailbreak to get a look at the full file system. That’s where that state-sponsored malware is going to be found. Good state-sponsored malware should never show up in a backup,” said Sarah Edwards, an author and teacher of mobile forensics for the SANS Institute.

“The full file system is getting into the device and getting every single file on there­—the whole operating system, the application data, the databases that will not be backed up. So really the in-depth analysis should be done on that full file system, for this level of investigation anyway. I would have insisted on that right from the start.”

The investigators do note on the last page of their report that they need to jailbreak Bezos’s phone to examine the root file system. Edwards said this would indeed get them everything they would need to search for persistent spyware like the kind created and sold by the NSO Group. But the report doesn’t indicate if that did get done.

Posted on January 24, 2020 at 8:34 AM

SIM Hijacking

SIM hijacking—or SIM swapping—is an attack where a fraudster contacts your cell phone provider and convinces them to move your phone number to a SIM card they control. Since your phone number often serves as a second authentication factor or backup verification channel, this lets the fraudster take over your other accounts. Sometimes this involves people inside the phone companies.
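A minimal sketch of why controlling the number is enough, using a hypothetical service rather than any real carrier’s or website’s API: SMS-based recovery trusts whoever can receive a text at the number on file, so transferring the number effectively transfers the account.

```python
import secrets

# Hypothetical account store: recovery is keyed to a phone number on file.
accounts = {"alice@example.com": {"phone": "+15550100123"}}


def send_sms(number: str, text: str) -> None:
    """Stand-in for a carrier SMS gateway. After a SIM swap, whatever
    handset currently holds this number receives the message."""
    print(f"[SMS to {number}] {text}")


def start_recovery(email: str) -> str:
    """Send a one-time reset code to the number on file and return it."""
    code = f"{secrets.randbelow(10**6):06d}"
    send_sms(accounts[email]["phone"], f"Your reset code is {code}")
    return code


def finish_recovery(expected: str, submitted: str) -> bool:
    """Anyone who controls the phone number can submit the right code."""
    return secrets.compare_digest(expected, submitted)
```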

Phone companies have added security measures since this attack became popular and public, but a new study (news article) shows that the measures aren’t helping:

We examined the authentication procedures used by five pre-paid wireless carriers when a customer attempted to change their SIM card. These procedures are an important line of defense against attackers who seek to hijack victims’ phone numbers by posing as the victim and calling the carrier to request that service be transferred to a SIM card the attacker possesses. We found that all five carriers used insecure authentication challenges that could be easily subverted by attackers. We also found that attackers generally only needed to target the most vulnerable authentication challenges, because the rest could be bypassed.

It’s a classic security vs. usability trade-off. The phone companies want to provide easy customer service for their legitimate customers, and that system is what’s being exploited by the SIM hijackers. Companies could make the fraud harder, but it would necessarily also make it harder for legitimate customers to modify their accounts.

Posted on January 21, 2020 at 6:30 AM

ToTok Is an Emirati Spying Tool

The smartphone messaging app ToTok is actually an Emirati spying tool:

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

Apple and Google have removed it from their app stores. If you have it on your phone, delete it now.

Posted on December 24, 2019 at 1:13 PM

Security Vulnerabilities in Android Firmware

Researchers have discovered and revealed 146 vulnerabilities in various incarnations of Android smartphone firmware. The vulnerabilities were found by scanning the phones of 29 different Android makers, and each is unique to a particular phone or maker. They were found using automatic tools, and it is extremely likely that many of the vulnerabilities are not exploitable—making them bugs but not security concerns. There is no indication that any of these vulnerabilities were put there on purpose, although it is reasonable to assume that other organizations do this same sort of scanning and use the findings for attack. And since they’re firmware bugs, in many cases there is no ability to patch them.

I see this as yet another demonstration of how hard supply chain security is.

News article.

Posted on November 18, 2019 at 6:33 AM
