Interesting research: Earlence Fernandes, Jaeyeon Jung, and Atul Prakash, “Security Analysis of Emerging Smart Home Applications”:
Abstract: Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes; (2) stole existing door lock codes; (3) disabled vacation mode of the home; and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.
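The overprivilege finding is easy to see in miniature. Here is a small Python sketch (not actual SmartThings code; the capability grouping and names are hypothetical) of how a coarse-grained capability hands an app every command on a device, not just the ones it asked for:

```python
# Hedged sketch of coarse-grained capabilities. In this model, requesting
# "capability.lock" grants the whole command set for lock devices --
# including code programming the app never needed.
COARSE_CAPABILITIES = {
    # Hypothetical grouping: one capability covers all lock commands.
    "capability.lock": {"lock", "unlock", "setCode", "deleteCode"},
}

def grant(requested_capabilities):
    """Return the full command set implied by a capability request."""
    granted = set()
    for cap in requested_capabilities:
        granted |= COARSE_CAPABILITIES[cap]
    return granted

# An app that only needs to check and lock the door still receives
# code-management commands -- the overprivilege the paper measures.
perms = grant(["capability.lock"])
assert "setCode" in perms  # granted, though never requested
```

Under this model, the fix the paper points toward is finer-grained capabilities, so that an app asking only to lock a door cannot also program new lock codes.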
Research website. News article — copy and paste into a text editor to avoid the ad blocker blocker.
EDITED TO ADD: Another article.
Posted on May 2, 2016 at 9:01 AM
The Project Zero team at Google has posted details of a new attack that targets a computer’s DRAM. It’s called Rowhammer. Here’s a good description:
Here’s how Rowhammer gets its name: In the Dynamic Random Access Memory (DRAM) used in some laptops, a hacker can run a program designed to repeatedly access a certain row of transistors in the computer’s memory, “hammering” it until the charge from that row leaks into the next row of memory. That electromagnetic leakage can cause what’s known as “bit flipping,” in which transistors in the neighboring row of memory have their state reversed, turning ones into zeros or vice versa. And for the first time, the Google researchers have shown that they can use that bit flipping to actually gain unintended levels of control over a victim computer. Their Rowhammer hack can allow a “privilege escalation,” expanding the attacker’s influence beyond a certain fenced-in portion of memory to more sensitive areas.
When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory.
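Why is one flipped bit in a page table entry (PTE) so powerful? A PTE is essentially an integer whose upper bits select a physical frame and whose low bits hold flags like "writable." This toy Python simulation (not the actual exploit; the frame numbers and layout are made up for illustration) shows how flipping a single frame-number bit can repoint a page at a different physical frame, such as the one holding the page table itself:

```python
# Toy model of an x86-style PTE: frame number in the upper bits,
# writable flag in bit 0. 4 KiB pages, so the frame field starts at bit 12.
FRAME_SHIFT = 12

def make_pte(frame, writable):
    return (frame << FRAME_SHIFT) | (1 if writable else 0)

def frame_of(pte):
    return pte >> FRAME_SHIFT

def flip_bit(pte, bit):
    """Model a Rowhammer-induced disturbance flipping one bit of the PTE."""
    return pte ^ (1 << bit)

# Hypothetical layout: the attacker's page maps frame 0x1234, and the page
# table itself happens to live in frame 0x1235 -- one bit away.
pte = make_pte(0x1234, writable=True)
flipped = flip_bit(pte, FRAME_SHIFT)   # flip the lowest frame-number bit
assert frame_of(flipped) == 0x1235     # page now maps the page table
assert flipped & 1                     # still writable
```

Once a writable mapping points at the page table, the process can edit its own PTEs directly, which is exactly the read-write access to all of physical memory described above. The real attack's hard part, which this sketch omits, is hammering until a flip lands in a PTE and arranging memory so the flipped bit points somewhere useful.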
The cause is simply the increasingly dense packing of memory cells:
This works because DRAM cells have been getting smaller and closer together. As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell’s value from 1 to 0 or vice versa.
Very clever, and yet another example of the security interplay between hardware and software.
This kind of thing is hard to fix, although the Google team gives some mitigation techniques at the end of their analysis.
EDITED TO ADD (3/12): Good explanation of the vulnerability.
Posted on March 11, 2015 at 6:16 AM
It’s easier than you think to create your own police department in the United States.
Yosef Maiwandi formed the San Gabriel Valley Transit Authority — a tiny, privately run nonprofit organization that provides bus rides to disabled people and senior citizens. It operates out of an auto repair shop. Then, because the law seems to allow transit companies to form their own police departments, he formed the San Gabriel Valley Transit Authority Police Department. As a thank you, he made Stefan Eriksson a deputy police commissioner of the San Gabriel Transit Authority Police’s anti-terrorism division, and gave him business cards.
Police departments like this don’t have much legal authority, but they don’t really need it. My guess is that the name alone is impressive enough.
In the computer security world, privilege escalation means using some legitimately granted authority to secure extra authority that was not intended. This is a real-world counterpart. Even though transit police departments are meant to police their vehicles only, the title — and the ostensible authority that comes along with it — is useful elsewhere. Someone with criminal intent could easily use this authority to evade scrutiny or commit fraud.
Deal said that his agency has discovered that several railroad agencies around California have created police departments — even though the companies have no rail lines in California to patrol. The police certification agency is seeking to decertify those agencies because it sees no reason for them to exist in California.
The issue of private transit firms creating police agencies has in recent years been a concern in Illinois, where several individuals with criminal histories created railroads as a means of forming a police agency.
The real problem is that we’re too deferential to police power. We don’t know the limits of police authority, whether it be an airport policeman or someone with a business card from the “San Gabriel Valley Transit Authority Police Department.”
Posted on March 15, 2006 at 7:47 AM
I just found an interesting paper: “Windows Access Control Demystified,” by Sudhakar Govindavajhala and Andrew W. Appel. Basically, they show that companies like Adobe, Macromedia, etc., make mistakes in their access-control programming that open security holes in Windows XP.
In the Secure Internet Programming laboratory at Princeton University, we have been investigating network security management by using logic programming. We developed a rule-based framework — Multihost, Multistage Vulnerability Analysis (MulVAL) — to perform end-to-end, automatic analysis of multi-host, multi-stage attacks on a large network where hosts run different operating systems. The tool finds attack paths where the adversary must use one or more weaknesses (e.g., buffer overflows) in multiple pieces of software to attack the network. The MulVAL framework has been demonstrated to be modular, flexible, scalable, and efficient. We applied these techniques to perform security analysis of a single host with commonly used software.
We have constructed a logical model of Windows XP access control, in a declarative but executable (Datalog) format. We have built a scanner that reads access-control configuration information from the Windows registry, file system, and service control manager database, and feeds raw configuration data to the model. Therefore we can reason about such things as the existence of privilege-escalation attacks, and indeed we have found several user-to-administrator vulnerabilities caused by misconfigurations of the access-control lists of commercial software from several major vendors. We propose tools such as ours as a vehicle for software developers and system administrators to model and debug the complex interactions of access control on installations under Windows.
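The core idea — facts about ACLs plus inference rules, evaluated until nothing new can be derived — is simple to sketch. This is a minimal Datalog-flavored fixpoint in Python (not the authors’ actual MulVAL code or rules; the facts model a hypothetical misconfiguration in which Everyone can rewrite the binary of a service that runs as Administrator):

```python
# Facts are (subject, relation, object) triples, in the spirit of Datalog.
facts = {
    # Hypothetical ACL misconfiguration of the kind the paper's scanner finds:
    ("Everyone", "write", "service_binary"),
    ("service_binary", "runs_as", "Administrator"),
}

def escalations(facts):
    """One inference rule, iterated to a fixpoint:
    if P can write binary B, and B runs as W, then P can act as W."""
    derived = set(facts)
    changed = True
    while changed:                      # naive bottom-up evaluation
        changed = False
        for (p, perm, binary) in list(derived):
            if perm != "write":
                continue
            for (b2, rel, who) in list(derived):
                if rel == "runs_as" and b2 == binary:
                    conclusion = (p, "acts_as", who)
                    if conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
    return derived

# The query the paper asks of real Windows installs, in miniature:
assert ("Everyone", "acts_as", "Administrator") in escalations(facts)
```

A real analysis has many rules (service permissions, registry keys, file ACLs) and thousands of scanned facts, but the evaluation strategy is the same: derive consequences until a fixpoint, then check whether an unprivileged principal reaches Administrator.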
EDITED TO ADD (2/13): Ed Felten has some good commentary about the paper on his blog.
Posted on February 13, 2006 at 12:11 PM