Entries Tagged "security engineering"
Ross Anderson’s fantastic textbook, Security Engineering, will have a third edition. The book won’t be published until December, but Ross has been making drafts of the chapters available online as he finishes them. Now that the book is completed, I expect the publisher to make him take the drafts off the Internet.
I personally find both the electronic and paper versions to be incredibly useful. Grab an electronic copy now while you still can.
The BSA — also known as the Software Alliance, formerly the Business Software Alliance (which explains the acronym) — is an industry lobbying group. They just published “Policy Principles for Building a Secure and Trustworthy Internet of Things.”
They call for:
- Distinguishing between consumer and industrial IoT.
- Offering incentives for integrating security.
- Harmonizing national and international policies.
- Establishing regularly updated baseline security requirements.
As with pretty much everything else, you can assume that if an industry lobbying group is in favor of it, then it doesn’t go far enough.
And if you need more security and privacy principles for the IoT, here’s a list of over twenty.
French police hacked EncroChat secure phones, which are widely used by criminals:
Encrochat’s phones are essentially modified Android devices, with some models using the “BQ Aquaris X2,” an Android handset released in 2018 by a Spanish electronics company, according to the leaked documents. Encrochat took the base unit, installed its own encrypted messaging programs which route messages through the firm’s own servers, and even physically removed the GPS, camera, and microphone functionality from the phone. Encrochat’s phones also had a feature that would quickly wipe the device if the user entered a PIN, and ran two operating systems side-by-side. If a user wanted the device to appear innocuous, they booted into normal Android. If they wanted to return to their sensitive chats, they switched over to the Encrochat system. The company sold the phones on a subscription-based model, costing thousands of dollars a year per device.
This access allowed French police and partner agencies to investigate and arrest many suspects:
Unbeknownst to Mark, or the tens of thousands of other alleged Encrochat users, their messages weren’t really secure. French authorities had penetrated the Encrochat network, leveraged that access to install a technical tool in what appears to be a mass hacking operation, and had been quietly reading the users’ communications for months. Investigators then shared those messages with agencies around Europe.
Only now is the astonishing scale of the operation coming into focus: It represents one of the largest law enforcement infiltrations of a communications network predominantly used by criminals ever, with Encrochat users spreading beyond Europe to the Middle East and elsewhere. French, Dutch, and other European agencies monitored and investigated “more than a hundred million encrypted messages” sent between Encrochat users in real time, leading to arrests in the UK, Norway, Sweden, France, and the Netherlands, a team of international law enforcement agencies announced Thursday.
EncroChat learned about the hack, but didn’t know who was behind it.
Going into full-on emergency mode, Encrochat sent a message to its users informing them of the ongoing attack. The company also informed its SIM provider, Dutch telecommunications firm KPN, which then blocked connections to the malicious servers, the associate claimed. Encrochat cut its own SIM service; it had an update scheduled to push to the phones, but it couldn’t guarantee whether that update itself wouldn’t be carrying malware too. That, and maybe KPN was working with the authorities, Encrochat’s statement suggested (KPN declined to comment). Shortly after Encrochat restored SIM service, KPN removed the firewall, allowing the hackers’ servers to communicate with the phones once again. Encrochat was trapped.
Encrochat decided to shut itself down entirely.
Lots of details about the hack in the article. Well worth reading in full.
The UK National Crime Agency called it Operation Venetic: “46 arrests, and £54m criminal cash, 77 firearms and over two tonnes of drugs seized so far.”
EDITED TO ADD (7/14): Some people are questioning the official story. I don’t know.
New research: “Best Practices for IoT Security: What Does That Even Mean?” by Christopher Bellman and Paul C. van Oorschot:
Abstract: Best practices for Internet of Things (IoT) security have recently attracted considerable attention worldwide from industry and governments, while academic research has highlighted the failure of many IoT product manufacturers to follow accepted practices. We explore not the failure to follow best practices, but rather a surprising lack of understanding, and void in the literature, on what (generically) “best practice” means, independent of meaningfully identifying specific individual practices. Confusion is evident from guidelines that conflate desired outcomes with security practices to achieve those outcomes. How do best practices, good practices, and standard practices differ? Or guidelines, recommendations, and requirements? Can something be a best practice if it is not actionable? We consider categories of best practices, and how they apply over the lifecycle of IoT devices. For concreteness in our discussion, we analyze and categorize a set of 1014 IoT security best practices, recommendations, and guidelines from industrial, government, and academic sources. As one example result, we find that about 70% of these practices or guidelines relate to early IoT device lifecycle stages, highlighting the critical position of manufacturers in addressing the security issues in question. We hope that our work provides a basis for the community to build on in order to better understand best practices, identify and reach consensus on specific practices, and then find ways to motivate relevant stakeholders to follow them.
Back in 2017, I catalogued nineteen security and privacy guideline documents for the Internet of Things. Our problem right now isn’t that we don’t know how to secure these devices, it’s that there is no economic or regulatory incentive to do so.
…we have identified a path forward that balances the legitimate right of all users to privacy and the safety of users on our platform. This will enable us to offer E2EE as an advanced add-on feature for all of our users around the globe — free and paid — while maintaining the ability to prevent and fight abuse on our platform.
To make this possible, Free/Basic users seeking access to E2EE will participate in a one-time process that will prompt the user for additional pieces of information, such as verifying a phone number via a text message. Many leading companies perform similar steps on account creation to reduce the mass creation of abusive accounts. We are confident that by implementing risk-based authentication, in combination with our current mix of tools — including our Report a User function — we can continue to prevent and fight abuse.
Thank you, Zoom, for coming around to the right answer.
And thank you to everyone for commenting on this issue. We are learning — in so many areas — the power of continued public pressure to change corporate behavior.
EDITED TO ADD (6/18): Let’s do Apple next.
New research on using specially crafted inputs to slow down machine-learning neural network systems:
Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief — from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.
So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down over 100 times. There are already examples in the real world where people pause or stumble when asked hard questions but we now have a dependable method for generating such examples automatically and at scale. We can also neutralize the performance improvements of accelerators for computer vision tasks, and make them operate on their worst case performance.
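The search idea behind sponge examples can be sketched without any ML framework: treat the model's per-input cost as a black box and hill-climb toward inputs that maximize it. Everything below is an invented toy — `inference_cost` uses distinct character bigrams as a crude stand-in for tokenizer work, not the authors' actual energy or latency measurements on real DNNs:

```python
import random
import string

# Toy stand-in for a real model's per-input cost. Here, "work" grows with
# the number of distinct character bigrams (a crude proxy for how many
# subword units a tokenizer would emit). A real sponge attack measures
# energy draw or wall-clock latency on the actual DNN.
def inference_cost(text):
    return len({text[i:i + 2] for i in range(len(text) - 1)})

def sponge_search(length=20, iters=500, seed=0):
    """Random hill-climbing for an input that maximizes inference_cost."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase
    best = "".join(rng.choice(alphabet) for _ in range(length))
    best_cost = inference_cost(best)
    for _ in range(iters):
        i = rng.randrange(length)                      # mutate one position
        cand = best[:i] + rng.choice(alphabet) + best[i + 1:]
        cost = inference_cost(cand)
        if cost > best_cost:                           # keep improving mutations
            best, best_cost = cand, cost
    return best, best_cost

baseline = inference_cost("hello hello hello me")  # repetitive, cheap input
sponge, sponge_cost = sponge_search()              # adversarially "expensive" input
```

The point of the sketch is only that a black-box attacker needs nothing but a cost oracle; the paper's attacks use the same loop shape with real energy/latency measurements and smarter search.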
Researcher Bhavuk Jain discovered a vulnerability in the “Sign in with Apple” feature, and received a $100,000 bug bounty from Apple. Basically, forged tokens could gain access to pretty much any account.
It is fixed.
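The bug class here — an identity token minted for whatever account the client names, rather than for the account the session actually authenticated — can be illustrated with a stdlib-only toy. All names, the session shape, and the HMAC token format below are invented for illustration; this is not Apple's implementation:

```python
import hashlib
import hmac

SECRET = b"server-signing-key"  # hypothetical server-side signing secret

def sign_token(email):
    """Mint a (toy) identity token: email plus an HMAC tag over it."""
    tag = hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()
    return f"{email}.{tag}"

# Vulnerable pattern: the endpoint signs whatever email the client asks
# for, never checking which account the session authenticated as.
def issue_token_vulnerable(requested_email, session):
    return sign_token(requested_email)

# Fixed pattern: the token is bound to the authenticated session identity.
def issue_token_fixed(requested_email, session):
    if requested_email != session["email"]:
        raise PermissionError("token request does not match session identity")
    return sign_token(requested_email)

session = {"email": "attacker@example.com"}  # attacker's own session
forged = issue_token_vulnerable("victim@example.com", session)  # accepted!
```

The forged token verifies perfectly — the signature is genuine — which is exactly why the check has to happen before signing, not after.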
EDITED TO ADD (6/2): Another story.
Note that this is “announced,” so we don’t know when it’s actually going to be implemented.
Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger’s billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata — not analysis of the content of messages — so that it doesn’t undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.
That default Messenger encryption will take years to implement.
Facebook hasn’t revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts — essentially everything other than the content of communications, which Facebook’s servers can’t access when those messages are encrypted. “We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time,” a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn’t want to inadvertently help bad actors circumvent its safeguards.
The company’s blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people’s social graphs — a sign that they don’t know each other — or consider previous instances where users reported or blocked someone as a clue that they’re up to something shady.
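A metadata-only detector of the kind described can be caricatured in a few lines. The signals mirror those named above (adult-to-minor fan-out, no mutual connections, prior reports), but the weights, threshold, and data shapes are invented; Facebook's actual models are not public:

```python
def abuse_risk_score(sender, recipient, history):
    """Toy metadata-only risk score; all weights are invented for illustration.

    sender/recipient: dicts with 'age' (int) and 'friends' (set of account ids).
    history: dict with 'minor_contacts' (count of minors the sender recently
    messaged) and 'prior_reports' (times other users reported the sender).
    No message content is examined -- only account metadata.
    """
    score = 0.0
    # Adult reaching out to a minor.
    if sender["age"] >= 18 and recipient["age"] < 18:
        score += 2.0
    # No social-graph overlap: the two accounts likely don't know each other.
    if not (sender["friends"] & recipient["friends"]):
        score += 1.0
    # Fan-out to many minors is the article's headline signal.
    if history["minor_contacts"] > 10:
        score += 3.0
    # Prior reports/blocks by other users add suspicion.
    score += 0.5 * history["prior_reports"]
    return score

def should_warn(sender, recipient, history, threshold=3.0):
    """Decide whether to show the in-app warning to the recipient."""
    return abuse_risk_score(sender, recipient, history) >= threshold
```

The real system presumably learns such weights from labeled data rather than hard-coding them, but the inputs — who, whom, when, how often — are the same class of signal the spokesperson describes.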
One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient’s friend. An alert then shows the impersonator’s and real friend’s profiles side-by-side, suggesting that the user block the fraudster.
Details from Facebook
EDITED TO ADD (7/1): This entry has been translated into Spanish.
This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:
Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).
Our attacks are standard compliant, and are therefore effective against any standard compliant Bluetooth device regardless the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require to notify end users about the outcome of an authentication procedure, or the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.