Securing Your Smartphone
This is part 3 of Sean Gallagher’s advice for “securing your digital life.”
Ars Technica’s Sean Gallagher has a multi-part series on “securing your digital life.”
It’s pretty good.
TorrentFreak surveyed nineteen VPN providers, asking them questions about their privacy practices: what data they keep, how they respond to court orders, what country they are incorporated in, and so on.
Most interesting to me is the home countries of these companies. ExpressVPN is incorporated in the British Virgin Islands. NordVPN is incorporated in Panama. There are VPNs from the Seychelles, Malaysia, and Bulgaria. There are VPNs from more Western and democratic countries like the US, Switzerland, Canada, and Sweden. Presumably all of those companies follow the laws of their home country.
And it matters. I’ve been thinking about this since Trojan Shield was made public. This is the joint US/Australia-run encrypted messaging service that lured criminals to use it, and then spied on everything they did. Or, at least, Australian law enforcement did; the FBI wasn’t able to because the US has better privacy laws.
We don’t talk about it a lot, but VPNs are entirely based on trust. As a consumer, you have no idea which company will best protect your privacy. You don’t know the data protection laws of the Seychelles or Panama. You don’t know which countries can put extra-legal pressure on companies operating within their jurisdiction. You don’t know who actually owns and runs the VPNs. You don’t even know which foreign companies the NSA has targeted for mass surveillance. All you can do is make your best guess, and hope you guessed well.
Microsoft researchers just released an open-source automation tool for security testing AI systems: “Counterfit.” Details on their blog.
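I haven’t dug into Counterfit’s actual interface, so what follows is only a generic Python sketch of the kind of automated robustness probe such tools run, not Counterfit’s API. The classify function is a hypothetical stand-in for whatever model is being tested:

```python
# A minimal sketch of an automated robustness probe, the sort of test
# tools like Counterfit run against deployed models. This is NOT
# Counterfit's API; `classify` stands in for any real classifier.
import numpy as np

def classify(image: np.ndarray) -> int:
    """Hypothetical target model: returns a class label for an image."""
    return int(image.mean() > 0.5)  # stand-in for a real model

def noise_probe(image: np.ndarray, trials: int = 100, eps: float = 0.05) -> float:
    """Fraction of small random perturbations that flip the model's label."""
    baseline = classify(image)
    rng = np.random.default_rng(0)
    flips = 0
    for _ in range(trials):
        perturbed = np.clip(image + rng.uniform(-eps, eps, image.shape), 0.0, 1.0)
        if classify(perturbed) != baseline:
            flips += 1
    return flips / trials

image = np.full((28, 28), 0.499)  # an input near the decision boundary
print(f"label flip rate under noise: {noise_probe(image):.0%}")
```

A real tool automates many such probes (random noise, adversarial optimization, extraction attempts) against a live model endpoint and reports the results.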
President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.
Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and—yes—exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.
Not everyone is so lucky, and the rest of us need something more structural.
US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones—and possibly the wireless Internet connection.
President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.
The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.
Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure—but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.
This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.
Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.
That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor—and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”
And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: there were no options to omit that in the class of car I wanted. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.
Internet security is national security—not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.
Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies accountable when they misrepresent the security of their products and services.
I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.
This essay previously appeared in the Washington Post.
Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”
From a summary:
Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.
Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.
Comprehensive, and well worth reading.
Here’s a Twitter thread by Kendra.
Interesting usability study: “More Than Just Good Passwords? A Study on Usability and Security Perceptions of Risk-based Authentication”:
Abstract: Risk-based Authentication (RBA) is an adaptive security measure to strengthen password-based authentication. RBA monitors additional features during login, and when observed feature values differ significantly from previously seen ones, users have to provide additional authentication factors such as a verification code. RBA has the potential to offer more usable authentication, but the usability and the security perceptions of RBA are not studied well.
We present the results of a between-group lab study (n=65) to evaluate usability and security perceptions of two RBA variants, one 2FA variant, and password-only authentication. Our study shows with significant results that RBA is considered to be more usable than the studied 2FA variant, while it is perceived as more secure than password-only authentication in general and comparably secure to 2FA in a variety of application types. We also observed RBA usability problems and provide recommendations for mitigation. Our contribution provides a first deeper understanding of the users’ perception of RBA and helps to improve RBA implementations for a broader user acceptance.
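The core mechanism is simple enough to sketch. Here’s a minimal, hypothetical Python version: the feature set, weights, and threshold are all invented for illustration, and real deployments score far richer signals (geolocation, login time, behavior) tuned against fraud data:

```python
# Minimal sketch of risk-based authentication as described in the paper:
# compare the features seen at login with the user's history and require
# a second factor only when they diverge. Features, weights, and the
# threshold are all invented for illustration.
from collections import defaultdict

history = defaultdict(set)  # user -> set of (feature, value) pairs seen before

WEIGHTS = {"ip": 0.6, "device": 0.3, "user_agent": 0.1}
THRESHOLD = 0.5

def login_risk(user, features):
    """Sum the weights of login features never seen before for this user."""
    return sum(weight for feature, weight in WEIGHTS.items()
               if (feature, features.get(feature)) not in history[user])

def authenticate(user, features):
    """Return the authentication requirement for this login attempt."""
    risk = login_risk(user, features)
    for feature in WEIGHTS:
        # A real system would record features only after the login succeeds.
        history[user].add((feature, features.get(feature)))
    return "password only" if risk < THRESHOLD else "password + verification code"

print(authenticate("alice", {"ip": "1.2.3.4", "device": "laptop", "user_agent": "Firefox"}))
print(authenticate("alice", {"ip": "1.2.3.4", "device": "laptop", "user_agent": "Firefox"}))
print(authenticate("alice", {"ip": "9.9.9.9", "device": "phone", "user_agent": "Safari"}))
```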
Paper’s website. I’ve blogged about risk-based authentication before.
Really interesting conversation with someone who negotiates with ransomware gangs:
For now, it seems that paying ransomware, while obviously risky and empowering/encouraging ransomware attackers, can perhaps be comported so as not to break any laws (like anti-terrorist laws, FCPA, conspiracy and others) and even if payment is arguably unlawful, seems unlikely to be prosecuted. Thus, the decision whether to pay or ignore a ransomware demand, seems less of a legal, and more of a practical, determination almost like a cost-benefit analysis.
The arguments for rendering a ransomware payment include:
- Payment is the least costly option;
- Payment is in the best interest of stakeholders (e.g. a hospital patient in desperate need of an immediate operation whose records are locked up);
- Payment can avoid being fined for losing important data;
- Payment means not losing highly confidential information; and
- Payment may mean not going public with the data breach.
The arguments against rendering a ransomware payment include:
- Payment does not guarantee that the right encryption keys with the proper decryption algorithms will be provided;
- Payment further funds additional criminal pursuits of the attacker, enabling a cycle of ransomware crime;
- Payment can do damage to a corporate brand;
- Payment may not stop the ransomware attacker from returning;
- If victims stopped making ransomware payments, the ransomware revenue stream would stop and ransomware attackers would have to move on to perpetrating another scheme; and
- Using Bitcoin to pay a ransomware attacker can put organizations at risk. Most victims must buy Bitcoin on entirely unregulated and free-wheeling exchanges that can also be hacked, leaving buyers’ bank account information stored on these exchanges vulnerable.
When confronted with a ransomware attack, the options all seem bleak. Pay the hackers and the victim may not only prompt future attacks, but there is also no guarantee that the hackers will restore a victim’s dataset. Ignore the hackers and the victim may incur significant financial damage or even find themselves out of business. The only guarantees during a ransomware attack are the fear, uncertainty and dread inevitably experienced by the victim.
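The “cost-benefit analysis” framing above can be made concrete. Here’s a toy expected-cost comparison in Python; every number in it is invented for illustration, and it deliberately ignores the legal and ethical considerations just discussed:

```python
# Toy expected-cost comparison for the pay/don't-pay decision described
# above. All numbers are invented for illustration; real decisions also
# involve legal exposure, ethics, and the harm of funding future attacks.
ransom = 500_000            # demanded payment
p_key_fails = 0.3           # chance the decryption key doesn't work
p_repeat_attack = 0.3       # chance paying invites a repeat attack
rebuild_cost = 2_000_000    # cost of restoring from backups or rebuilding

cost_if_pay = ransom + p_key_fails * rebuild_cost + p_repeat_attack * ransom
cost_if_refuse = rebuild_cost

print(f"expected cost if paying:   ${cost_if_pay:,.0f}")
print(f"expected cost if refusing: ${cost_if_refuse:,.0f}")
```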
The NSA has issued an advisory on the risks of location data.
Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.
The document provides a list of mitigation strategies, including turning things off:
If it is critical that location is not revealed for a particular mission, consider the following recommendations:
- Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
- Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
- For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.
Of course, turning off your wireless devices is itself a signal that something is going on. It’s hard to be clandestine in our always connected world.
Interesting research: “Identifying Unintended Harms of Cybersecurity Countermeasures”:
Abstract: Well-meaning cybersecurity risk owners will deploy countermeasures (technologies or procedures) to manage risks to their services or systems. In some cases, those countermeasures will produce unintended consequences, which must then be addressed. Unintended consequences can potentially induce harm, adversely affecting user behaviour, user inclusion, or the infrastructure itself (including other services or countermeasures). Here we propose a framework for preemptively identifying unintended harms of risk countermeasures in cybersecurity. The framework identifies a series of unintended harms which go beyond technology alone, to consider the cyberphysical and sociotechnical space: displacement, insecure norms, additional costs, misuse, misclassification, amplification, and disruption. We demonstrate our framework through application to the complex, multi-stakeholder challenges associated with the prevention of cyberbullying as an applied example. Our framework aims to illuminate harmful consequences, not to paralyze decision-making, but so that potential unintended harms can be more thoroughly considered in risk management strategies. The framework can support identification and preemptive planning to identify vulnerable populations and preemptively insulate them from harm. There are opportunities to use the framework in coordinating risk management strategy across stakeholders in complex cyberphysical environments.
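One way to operationalize the framework is as a structured review checklist. In the Python sketch below, the seven harm categories come from the abstract; the prompts attached to them are my own paraphrase, for illustration only:

```python
# The seven unintended-harm categories from the paper's abstract, encoded
# as a simple review checklist for a proposed countermeasure. The prompts
# are illustrative; the paper develops each category in far more depth.
HARM_CATEGORIES = {
    "displacement":      "Does the risk move to another group, place, or channel?",
    "insecure norms":    "Does it teach users habits that are unsafe elsewhere?",
    "additional costs":  "Who bears new costs in money, time, or effort?",
    "misuse":            "Can the countermeasure itself be abused?",
    "misclassification": "Who is wrongly flagged, blocked, or excluded?",
    "amplification":     "Does it make the original harm worse for anyone?",
    "disruption":        "What existing services or protections does it break?",
}

def review(countermeasure):
    """Print the unintended-harm prompts for a proposed countermeasure."""
    print(f"Unintended-harm review: {countermeasure}")
    for category, prompt in HARM_CATEGORIES.items():
        print(f"  [{category}] {prompt}")

review("keyword filter to prevent cyberbullying")
```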
Security is always a trade-off. I appreciate work that examines the details of that trade-off.