EDITED TO ADD (9/1): There is criticism in the comments about me calling NSO Group an Israeli company. I was just repeating the news articles, but further research indicates that it is Israeli-founded and Israeli-based, but 100% owned by an American private equity firm.
Blog: August 2016 Archives
Another paper on using Wi-Fi for surveillance. This one is on identifying people by their body shape. "FreeSense: Indoor Human Identification with WiFi Signals":
Abstract: Human identification plays an important role in human-computer interaction. There have been numerous methods proposed for human identification (e.g., face recognition, gait recognition, fingerprint identification, etc.). While these methods could be very useful under different conditions, they also suffer from certain shortcomings (e.g., user privacy, sensing coverage range). In this paper, we propose a novel approach for human identification, which leverages WIFI signals to enable non-intrusive human identification in domestic environments. It is based on the observation that each person has specific influence patterns to the surrounding WIFI signal while moving indoors, regarding their body shape characteristics and motion patterns. The influence can be captured by the Channel State Information (CSI) time series of WIFI. Specifically, a combination of Principal Component Analysis (PCA), Discrete Wavelet Transform (DWT) and Dynamic Time Warping (DTW) techniques is used for CSI waveform-based human identification. We implemented the system in a 6m*5m smart home environment and recruited 9 users for data collection and evaluation. Experimental results indicate that the identification accuracy is about 88.9% to 94.5% when the candidate user set changes from 6 to 2, showing that the proposed human identification method is effective in domestic environments.
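The matching step in the paper rests on Dynamic Time Warping (DTW), which measures how similar two waveforms are even when one is stretched or shifted in time. A minimal sketch of the idea, assuming denoised 1-D CSI waveforms; the profile names and the `identify` helper are illustrative, not from the paper:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D waveforms.

    Classic O(len(a) * len(b)) dynamic program; the paper applies DTW
    to denoised CSI waveforms to compare a fresh sample against stored
    per-person profiles.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def identify(sample, profiles):
    """Return the label of the DTW-closest stored waveform (hypothetical helper)."""
    return min(profiles, key=lambda name: dtw_distance(sample, profiles[name]))
```

With a dictionary of per-person template waveforms, `identify` performs the nearest-neighbor classification the paper describes; the real system first applies PCA and DWT to clean up the raw CSI.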
EDITED TO ADD (9/13): Related paper.
This is interesting research: "Keystroke Recognition Using WiFi Signals." Basically, the user's hand positions as they type distort the Wi-Fi signal in predictable ways.
Abstract: Keystroke privacy is critical for ensuring the security of computer systems and the privacy of human users as what is being typed could be passwords or privacy-sensitive information. In this paper, we show for the first time that WiFi signals can also be exploited to recognize keystrokes. The intuition is that while typing a certain key, the hands and fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time-series of Channel State Information (CSI) values, which we call CSI-waveform for that key. In this paper, we propose a WiFi signal based keystroke recognition system called WiKey. WiKey consists of two Commercial Off-The-Shelf (COTS) WiFi devices, a sender (such as a router) and a receiver (such as a laptop). The sender continuously emits signals and the receiver continuously receives signals. When a human subject types on a keyboard, WiKey recognizes the typed keys based on how the CSI values change at the WiFi signal receiver end. We implemented the WiKey system using a TP-Link TL-WR1043ND WiFi router and a Lenovo X200 laptop. WiKey achieves more than 97.5% detection rate for detecting the keystroke and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.
Last week, Apple issued a critical security patch for the iPhone: iOS 9.3.5. The incredible story is that this patch is the result of investigative work by Citizen Lab, which uncovered a zero-day exploit being used by the UAE government against a human rights defender. The UAE spyware was provided by the Israeli cyberweapons arms manufacturer NSO Group.
This is a big deal. iOS vulnerabilities are expensive, and can sell for over $1M. That we can find one used in the wild and patch it, rendering it valueless, is a major win and puts a huge dent in the vulnerabilities market. The more we can do this, the less valuable these zero-days will be to both criminals and governments -- and to criminal governments.
Apple applied for a patent earlier this year on collecting biometric information of an unauthorized device user. The obvious application is taking a copy of the fingerprint and photo of someone using a stolen smartphone.
Note that I have no opinion on whether this is a patentable idea or the patent is valid.
EDITED TO ADD (9/13): There is potential prior art in the comments.
As shown in the video below, researchers at Pennsylvania State University recently developed a polyelectrolyte liquid solution made of bacteria and yeast that automatically mends clothes.
It doesn't have a name yet, but it's almost miraculous. Simply douse two halves of a ripped fabric in the stuff, hold them together under warm water for about 60 seconds, and the fabric closes the gaps and clings together once more. Having a bit of extra fabric on hand does seem to help, as the video mainly focuses on patching holes rather than re-knitting two halves of a torn piece.
The team got the idea by observing how proteins in squid teeth and human hair are able to self-replicate. Then, they recreated the process using more readily available materials. Best of all, it works with almost all natural fabrics.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
We've long known that 64 bits is too small for a block cipher these days. That's why new block ciphers like AES have 128-bit, or larger, block sizes. The insecurity of the smaller block is nicely illustrated by a new attack called "Sweet32." It exploits the ability to find block collisions in Internet protocols to decrypt some traffic, even though the attackers never learn the key.
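The math behind this is the birthday bound: among randomly distributed n-bit blocks, a collision becomes likely after roughly 2^(n/2) blocks. A quick back-of-the-envelope sketch (the exact traffic Sweet32 needs in practice differs, but the order of magnitude is the point):

```python
import math

def collision_blocks(block_bits, p=0.5):
    """Number of encrypted blocks after which a ciphertext-block
    collision occurs with probability p (birthday bound approximation)."""
    n = 2.0 ** block_bits
    return math.sqrt(2.0 * n * math.log(1.0 / (1.0 - p)))

for bits in (64, 128):
    blocks = collision_blocks(bits)
    gigabytes = blocks * (bits // 8) / 2 ** 30
    print(f"{bits}-bit blocks: ~2^{math.log2(blocks):.1f} blocks "
          f"(~{gigabytes:,.0f} GiB) for a 50% collision chance")
```

For a 64-bit block cipher the 50% mark arrives after only a few tens of gigabytes of traffic on one key -- entirely feasible to capture -- while for a 128-bit cipher like AES it is astronomically far away.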
The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.
On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn't hacked; what probably happened was that a "staging server" for NSA cyberweapons -- that is, a server the NSA was making use of to mask its surveillance activities -- was hacked in 2013.
The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the link used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: "!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?"
Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee -- or other high-profile data breaches -- the Russians will expose NSA exploits in turn.
But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and "exploit code" that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper -- systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.
All of them are examples of the NSA -- despite what it and other representatives of the US government say -- prioritizing its ability to conduct surveillance over our security. Here's one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls' security. Cisco hasn't sold these firewalls since 2009, but they're still in use today.
Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.
Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard "zero days" -- the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is "a clear national security or law enforcement" use).
Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn't stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.
The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.
Hoarding zero-day vulnerabilities is a bad idea. It means that we're all less secure. When Edward Snowden exposed many of the NSA's surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It's an inter-agency process, and it's complicated.
There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see that it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can't use it to attack other systems.
There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there's the bigger question of what qualifies in the NSA's eyes as a "vulnerability."
Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can't use, and doing so gets its numbers up; it's good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.
Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems -- whoever "they" are. Either everyone is more secure, or everyone is more vulnerable.
Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn't rely on zero days -- very much, anyway.
Earlier this year at a security conference, Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) organization -- basically the country's chief hacker -- gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: "A lot of people think that nation states are running their operations on zero days, but it's not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive."
The distinction he's referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.
A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for "nobody but us." Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It's an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.
The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone -- another government, cybercriminals, amateur hackers -- could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.
So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.
If there are any vulnerabilities that -- according to the standards established by the White House and the NSA -- should have been disclosed and fixed, it's these. That they have not been during the three-plus years that the NSA knew about and exploited them -- despite Joyce's insistence that they're not very important -- demonstrates that the Vulnerabilities Equities Process is badly broken.
We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is with the recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.
And as long as I'm dreaming, we really need to separate our nation's intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency's mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS's mission.
I doubt we're going to see any congressional investigations this year, but we're going to have to figure this out eventually. In my 2014 book Data and Goliath, I write that "no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find..." Our nation's cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.
This essay previously appeared on Vox.com.
EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.
James Bamford thinks this is the work of an insider. I disagree, but he's right that the TAO catalog was not a Snowden document.
Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.
From an article about and interview with the researchers:
To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent's location.
To experimentally manipulate participants' moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.
Not surprisingly, the parent's reason for leaving a child unattended affected participants' judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.
The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same -- that is, the age of the child, the duration and location of the unattended period, and so on -- participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants' factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent's reason for leaving.
In this article, detailing the Australian and then worldwide investigation of a particularly heinous child-abuse ring, there are a lot of details of the pedophiles' security practices and the police investigative techniques. The abusers had a detailed manual on how to scrub metadata and avoid detection, but not everyone was perfect. The police used information from a single camera to narrow down the suspects. They also tracked a particular phrase one person used to find him.
This story shows an increasing sophistication of the police using small technical clues combined with standard detective work to investigate crimes on the Internet. A highly painful read, but interesting nonetheless.
fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.
A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly -- while people are typing, watching a video, uploading files, etc. -- results in up to 90 percent of users disregarding them.
Researchers found these times are less effective because of "dual task interference," a neural limitation where even simple tasks can't be simultaneously performed without significant performance loss. Or, in human terms, multitasking.
"We found that the brain can't handle multitasking very well," said study coauthor and BYU information systems professor Anthony Vance. "Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there's a high penalty that comes by presenting these messages at random times."
For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed neural activity was substantially reduced when security messages interrupted a task, as compared to when a user responded to the security message itself.
The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.
The detailed accounts of the terrorist-shooter false-alarm at Kennedy Airport in New York last week illustrate how completely and totally unprepared the airport authorities are for any real such event.
I have two reactions to this. On the one hand, this is a movie-plot threat -- the sort of overly specific terrorist scenario that doesn't make sense to defend against. On the other hand, police around the world need training in these types of scenarios in general. Panic can easily cause more deaths than terrorists themselves, and we need to think about what responsibilities police and other security guards have in these situations.
Three organizations -- Verified Voting, EPIC, and Common Cause -- have published a report on the risks of Internet voting. The report is primarily concerned with privacy, and the threats to a secret ballot.
If you've read my book Liars and Outliers, you know I like the prisoner's dilemma as a way to think about trust and security. There is an enormous amount of research -- both theoretical and experimental -- about the dilemma, which is why I found this new research so interesting. Here's a decent summary:
The question is not just how people play these games -- there are hundreds of research papers on that -- but instead whether people fall into behavioral types that explain their behavior across different games. Using standard statistical methods, the researchers identified four such player types: optimists (20 percent), who always go for the highest payoff, hoping the other player will coordinate to achieve that goal; pessimists (30 percent), who act according to the opposite assumption; the envious (21 percent), who try to score more points than their partners; and the trustful (17 percent), who always cooperate. The remaining 12 percent appeared to make their choices completely at random.
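The four types map naturally onto classic decision rules for a one-shot 2x2 game. Here is a small sketch using an illustrative stag-hunt-style payoff matrix; both the payoffs and the exact decision rules are my own simplification, not the games used in the study:

```python
# Illustrative payoff matrix: PAYOFFS[(my_action, their_action)] gives
# (my_payoff, their_payoff). "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (10, 10),
    ("C", "D"): (0, 7),
    ("D", "C"): (7, 0),
    ("D", "D"): (5, 5),
}
ACTIONS = ("C", "D")

def optimist():
    # Maximax: pick the action whose best-case payoff is highest,
    # hoping the partner coordinates on it.
    return max(ACTIONS, key=lambda a: max(PAYOFFS[(a, b)][0] for b in ACTIONS))

def pessimist():
    # Maximin: pick the action whose worst-case payoff is highest.
    return max(ACTIONS, key=lambda a: min(PAYOFFS[(a, b)][0] for b in ACTIONS))

def envious():
    # Maximize the worst-case margin over the partner (mine minus theirs).
    return max(ACTIONS, key=lambda a: min(
        PAYOFFS[(a, b)][0] - PAYOFFS[(a, b)][1] for b in ACTIONS))

def trustful():
    # Always cooperate.
    return "C"
```

Under these payoffs the optimist and the trustful cooperate while the pessimist and the envious defect, which is the qualitative pattern the type labels suggest.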
The NSA was badly hacked in 2013, and we're just now learning about it.
A group of hackers called "The Shadow Brokers" claim to have hacked the NSA, and are posting data to prove it. The data is source code from "The Equation Group," a sophisticated hacking operation whose malware was exposed last year and attributed to the NSA. Some details:
The Shadow Brokers claimed to have hacked the Equation Group and stolen some of its hacking tools. They publicized the dump on Saturday, tweeting a link to the manifesto to a series of media companies.
The dumped files mostly contain installation scripts, configurations for command and control servers, and exploits targeted to specific routers and firewalls. The names of some of the tools correspond with names used in Snowden documents, such as "BANANAGLEE" or "EPICBANANA."
Nicholas Weaver has analyzed the data and believes it real:
But the proof itself appears to be very real. The proof file is 134 MB of data compressed, expanding out to a 301 MB archive. This archive appears to contain a large fraction of the NSA's implant framework for firewalls, including what appears to be several versions of different implants, server side utility scripts, and eight apparent exploits for a variety of targets.
The exploits themselves appear to target Fortinet, Cisco, Shaanxi Networkcloud Information Technology (sxnc.com.cn) Firewalls, and similar network security systems. I will leave it to others to analyze the reliability, versions supported, and other details. But nothing I've found in either the exploits or elsewhere is newer than 2013.
Because of the sheer volume and quality, it is overwhelmingly likely this data is authentic. And it does not appear to be information taken from compromised systems. Instead the exploits, binaries with help strings, server configuration scripts, 5 separate versions of one implant framework, and all sorts of other features indicate that this is analyst-side code -- the kind that probably never leaves the NSA.
I agree with him. This just isn't something that can be faked in this way. (Good proof would be for The Intercept to run the code names in the new leak against their database, and confirm that some of the previously unpublished ones are legitimate.)
This is definitely not Snowden stuff. This isn't the sort of data he took, and the release mechanism is not one that any of the reporters with access to the material would use. This is someone else, probably an outsider...probably a government.
But the big picture is a far scarier one. Somebody managed to steal 301 MB of data from a TS//SCI system at some point between 2013 and today. Possibly, even probably, it occurred in 2013. But the theft also could have occurred yesterday with a simple utility run to scrub all newer documents. Relying on the file timestamps -- which are easy to modify -- the most likely date of acquisition was June 11, 2013. That is two weeks after Snowden fled to Hong Kong and six days after the first Guardian publication. That would make sense, since in the immediate response to the leaks as the NSA furiously ran down possible sources, it may have accidentally or deliberately eliminated this adversary's access.
Okay, so let's think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it's a signal to the Obama Administration: "Before you even think of sanctioning us for the DNC hack, know where we've been and what we can do to you."
They claim to be auctioning off the rest of the data to the highest bidder. I think that's PR nonsense. More likely, that second file is random nonsense, and this is all we're going to get. It's a lot, though. Yesterday was a very bad day for the NSA.
EDITED TO ADD: Snowden's comments. He thinks it's an "NSA malware staging server" that was hacked.
EDITED TO ADD (8/18): Dave Aitel also thinks it's Russia.
Cisco has analyzed the vulnerabilities for their products found in the data. They found several that they patched years ago, and one new one they didn't know about yet. See also this about the vulnerabilities.
Previously unreleased material from the Snowden archive proves that this data dump is real, and that the Equation Group is the NSA.
EDITED TO ADD (8/26): I wrote an essay about this here.
EDITED TO ADD (9/13): Someone has played with some of the vulnerabilities.
New research: "Flip Feng Shui: Hammering a Needle in the Software Stack," by Kaveh Razavi, Ben Gras, Erik Bosman, Bart Preneel, Cristiano Giuffrida, and Herbert Bos.
Abstract: We introduce Flip Feng Shui (FFS), a new exploitation vector which allows an attacker to induce bit flips over arbitrary physical memory in a fully controlled way. FFS relies on hardware bugs to induce bit flips over memory and on the ability to surgically control the physical memory layout to corrupt attacker-targeted data anywhere in the software stack. We show FFS is possible today with very few constraints on the target data, by implementing an instance using the Rowhammer bug and memory deduplication (an OS feature widely deployed in production). Memory deduplication allows an attacker to reverse-map any physical page into a virtual page she owns as long as the page's contents are known. Rowhammer, in turn, allows an attacker to flip bits in controlled (initially unknown) locations in the target page.
We show FFS is extremely powerful: a malicious VM in a practical cloud setting can gain unauthorized access to a co-hosted victim VM running OpenSSH. Using FFS, we exemplify end-to-end attacks breaking OpenSSH public-key authentication, and forging GPG signatures from trusted keys, thereby compromising the Ubuntu/Debian update mechanism. We conclude by discussing mitigations and future directions for FFS attacks.
The malware -- known alternatively as "ProjectSauron" by researchers from Kaspersky Lab and "Remsec" by their counterparts from Symantec -- has been active since at least 2011 and has been discovered on 30 or so targets. Its ability to operate undetected for five years is a testament to its creators, who clearly studied other state-sponsored hacking groups in an attempt to replicate their advances and avoid their mistakes.
Part of what makes ProjectSauron so impressive is its ability to collect data from computers considered so sensitive by their operators that they have no Internet connection. To do this, the malware uses specially prepared USB storage drives that have a virtual file system that isn't viewable by the Windows operating system. To infected computers, the removable drives appear to be approved devices, but behind the scenes are several hundred megabytes reserved for storing data that is kept on the "air-gapped" machines. The arrangement works even against computers in which data-loss prevention software blocks the use of unknown USB drives.
Kaspersky researchers still aren't sure precisely how the USB-enabled exfiltration works. The presence of the invisible storage area doesn't in itself allow attackers to seize control of air-gapped computers. The researchers suspect the capability is used only in rare cases and requires use of a zero-day exploit that has yet to be discovered. In all, Project Sauron is made up of at least 50 modules that can be mixed and matched to suit the objectives of each individual infection.
"Once installed, the main Project Sauron modules start working as 'sleeper cells,' displaying no activity of their own and waiting for 'wake-up' commands in the incoming network traffic," Kaspersky researchers wrote in a separate blog post. "This method of operation ensures Project Sauron's extended persistence on the servers of targeted organizations."
We don't know who designed this, but it certainly seems likely to be a country with a serious cyberespionage budget.
EDITED TO ADD (8/15): Nicholas Weaver comment on the malware and what it means.
As we all know, the problems with backdoors are less the cryptography and more the systems surrounding the cryptography.
Experts are blaming bacteria, not squid nets.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Nice attack against electronic safes:
Plore used side-channel attacks to pull it off. These are ways of exploiting physical indicators from a cryptographic system to get around its protections. Here, all Plore had to do was monitor power consumption in the case of one safe, and the amount of time operations took in the other, and voila, he was able to figure out the keycodes for locks that are designated by the independent third-party testing company Underwriters Laboratories as Type 1 High Security. These aren't the most robust locks on the market by any means, but they are known to be pretty secure. Safes with these locks are the kind of thing you might have in your house.
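To see why a timing side channel is so devastating, consider a lock that checks a keycode digit by digit and bails out at the first mismatch. Here is a toy simulation; the lock's internals are invented for illustration, and a real attacker would infer the comparison count from response time or power draw rather than being handed it:

```python
def make_lock(secret):
    """Simulated lock with a leaky digit-by-digit check.

    Returns a function that, given a guess, yields (opened, digits_compared).
    The comparison count stands in for the timing/power signal a real
    attacker would measure.
    """
    def try_code(guess):
        compared = 0
        for s, g in zip(secret, guess):
            compared += 1
            if s != g:
                return False, compared   # early exit leaks position of mismatch
        return True, compared
    return try_code

def crack(try_code, length=6):
    """Recover the code one digit at a time: the candidate digit whose
    guess survives the most comparisons (or opens the lock) is correct."""
    known = ""
    for pos in range(length):
        best = max("0123456789",
                   key=lambda d: try_code(known + d + "0" * (length - pos - 1)))
        known += best
    return known
```

Recovering a 6-digit code this way takes at most 60 guesses instead of searching all 1,000,000 combinations -- which is the same collapse in effort that Plore's timing measurements achieved against the real locks.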
Here's an interesting hack against a computer's monitor:
A group of researchers has found a way to hack directly into the tiny computer that controls your monitor without getting into your actual computer, and both see the pixels displayed on the monitor -- effectively spying on you -- and also manipulate the pixels to display different images.
I've written a lot about the Internet of Things, and how everything is now a computer. But while it's true for cars and refrigerators and thermostats, it's also true for all the parts of your computer. Your keyboard, hard drives, and monitor are all individual computers, and what you think of as your computer is actually a collection of computers working together. So just as the NSA directly attacks the computer that is the hard drive, this attack targets the computer that is your monitor.
We're seeing car thefts in the wild accomplished through hacking:
Houston police have arrested two men for a string of high-tech thefts of trucks and SUVs in the Houston area. The Houston Chronicle reports that Michael Armando Arce and Jesse Irvin Zelaya were charged on August 4th, and are believed to be responsible for more than 100 auto thefts. Police said Arce and Zelaya were shuttling the stolen vehicles across the Mexican border.
The July video shows the thief connecting a laptop to the Jeep before driving away in it. A Fiat-Chrysler spokesman told ABC News that the thieves used software intended to be used by dealers and locksmiths to reprogram the vehicle's keyless entry and ignition system.
Scott Atran has done some really interesting research on why ordinary people become terrorists.
Academics who study warfare and terrorism typically don't conduct research just kilometers from the front lines of battle. But taking the laboratory to the fight is crucial for figuring out what impels people to make the ultimate sacrifice to, for example, impose Islamic law on others, says Atran, who is affiliated with the National Center for Scientific Research in Paris.
Atran's war zone research over the last few years, and interviews during the last decade with members of various groups engaged in militant jihad (or holy war in the name of Islamic law), give him a gritty perspective on this issue. He rejects popular assumptions that people frequently join up, fight and die for terrorist groups due to mental problems, poverty, brainwashing or savvy recruitment efforts by jihadist organizations.
Instead, he argues, young people adrift in a globalized world find their own way to ISIS, looking to don a social identity that gives their lives significance. Groups of dissatisfied young adult friends around the world -- often with little knowledge of Islam but yearning for lives of profound meaning and glory -- typically choose to become volunteers in the Islamic State army in Syria and Iraq, Atran contends. Many of these individuals connect via the internet and social media to form a global community of alienated youth seeking heroic sacrifice, he proposes.
Preliminary experimental evidence suggests that not only global terrorism, but also festering state and ethnic conflicts, revolutions and even human rights movements -- think of the U.S. civil rights movement in the 1960s -- depend on what Atran refers to as devoted actors. These individuals, he argues, will sacrifice themselves, their families and anyone or anything else when a volatile mix of conditions is in play. First, devoted actors adopt values they regard as sacred and nonnegotiable, to be defended at all costs. Then, when they join a like-minded group of nonkin that feels like a family -- a band of brothers -- a collective sense of invincibility and special destiny overwhelms feelings of individuality. As members of a tightly bound group that perceives its sacred values under attack, devoted actors will kill and die for each other.
EDITED TO ADD (8/13): Related paper, also by Atran.
Citizen Lab has a new report on an Iranian government hacking program that targets dissidents. From a Washington Post op-ed by Ron Deibert:
Al-Ameer is a net-savvy activist, and so when she received a legitimate-looking email containing a PowerPoint attachment addressed to her and purporting to detail "Assad Crimes," she could easily have opened it. Instead, she shared it with us at the Citizen Lab.
As we detail in a new report, the attachment led our researchers to uncover an elaborate cyberespionage campaign operating out of Iran. Among the malware was a malicious spyware, including a remote access tool called "Droidjack," that allows attackers to silently control a mobile device. When Droidjack is installed, a remote user can turn on the microphone and camera, remove files, read encrypted messages, and send spoofed instant messages and emails. Had she opened it, she could have put herself, her friends, her family and her associates back in Syria in mortal danger.
I've been saying for years that forcing people to change their passwords regularly is bad security advice, that it encourages poor passwords. Lorrie Cranor, now the FTC's chief technologist, agrees:
By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like "tarheels#1", for instance (excluding the quotation marks) frequently became "tArheels#1" after the first change, "taRheels#1" on the second change and so on. Or it might be changed to "tarheels#11" on the first change and "tarheels#111" on the second. Another common technique was to substitute a digit to make it "tarheels#2", "tarheels#3", and so on.
"The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation," Cranor explained. "They take their old passwords, they change it in some small way, and they come up with a new password."
The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.
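The transformation idea is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the kind of rules the article describes (case-toggling a letter, incrementing or duplicating a trailing digit), not the UNC researchers' actual algorithm; the function name and rule set are my own:

```python
import re

def transform_guesses(old):
    """Generate likely "new" passwords derived from an old, known one,
    using common user transformations (illustrative rule set only)."""
    guesses = []
    # Toggle the case of each letter in turn: tarheels#1 -> tArheels#1
    for i, ch in enumerate(old):
        if ch.isalpha():
            guesses.append(old[:i] + ch.swapcase() + old[i + 1:])
    # Increment or duplicate a trailing digit:
    # tarheels#1 -> tarheels#2, tarheels#11
    m = re.search(r"(\d+)$", old)
    if m:
        digits, start = m.group(1), m.start(1)
        guesses.append(old[:start] + str(int(digits) + 1))
        guesses.append(old + digits[-1])
    return guesses

print(transform_guesses("tarheels#1"))
```

Given one compromised password, an attacker tries this small candidate list against the "changed" password instead of brute-forcing the full space -- which is why mandatory expiration buys so little once any one password leaks.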
That data refers to this study.
My advice for choosing a secure password is here.
The Open Technology Institute of the New America Foundation has released a policy paper on the vulnerabilities equities process: "Bugs in the System: A Primer on the Software Vulnerability Ecosystem and its Policy Implications."
Their policy recommendations:
- Minimize participation in the vulnerability black market.
- Establish strong, clear procedures for disclosure when the government discovers or acquires vulnerabilities.
- Establish rules for government hacking.
- Support bug bounty programs.
- Reform the DMCA and CFAA so they encourage responsible vulnerability disclosure.
It's a good document, and worth reading.
From NIST's draft of its Digital Authentication Guideline (SP 800-63B):

[Out of band verification] using SMS is deprecated, and will no longer be allowed in future releases of this guidance.
Last week, President Obama issued a policy directive (PPD-41) on cyber-incident response coordination. The FBI is in charge, which is no surprise. Actually, there's not much surprising in the document. I suppose it's important to formalize this stuff, but I think it's what happens now.
Most wireless keyboards are unencrypted, which makes them vulnerable to all sorts of attacks:
On Tuesday Bastille's research team revealed a new set of wireless keyboard attacks they're calling Keysniffer. The technique, which they're planning to detail at the Defcon hacker conference in two weeks, allows any hacker with a $12 radio device to intercept the connection between any of eight wireless keyboards and a computer from 250 feet away. What's more, it gives the hacker the ability to both type keystrokes on the victim machine and silently record the target's typing.
This is a continuation of their previous work.
Russia has attacked the US in cyberspace in an attempt to influence our national election, many experts have concluded. We need to take this national security threat seriously and both respond and defend, despite the partisan nature of this particular attack.
There is virtually no debate about that, either from the technical experts who analyzed the attack last month or from the FBI, which is analyzing it now. The hackers have already released DNC e-mails and voicemails, and promise more data dumps.
While their motivation remains unclear, they could continue to attack our election from now to November -- and beyond.
Like everything else in society, elections have gone digital. And just as we've seen cyberattacks affecting all aspects of society, we're going to see them affecting elections as well.
What happened to the DNC is an example of organizational doxing -- the publishing of private information -- an increasingly popular tactic against both government and private organizations. There are other ways to influence elections: denial-of-service attacks against candidate and party networks and websites, attacks against campaign workers and donors, attacks against voter rolls or election agencies, hacks of the candidate websites and social media accounts, and -- the one that scares me the most -- manipulation of our highly insecure but increasingly popular electronic voting machines.
On the one hand, this attack is a standard intelligence gathering operation, something the NSA does against political targets all over the world and other countries regularly do to us. The only thing different between this attack and the more common Chinese and Russian attacks against our government networks is that the Russians apparently decided to publish selected pieces of what they stole in an attempt to influence our election, and to use WikiLeaks as a way to both hide their origin and give them a veneer of respectability.
All of the attacks listed above can be perpetrated by other countries and by individuals as well. They've been done in elections in other countries. They've been done in other contexts. The Internet broadly distributes power, and what was once the sole purview of nation states is now in the hands of the masses. We're living in a world where disgruntled people with the right hacking skills can influence our elections, wherever they are in the world.
The Snowden documents have shown the world how aggressive our own intelligence agency is in cyberspace. But despite all of the policy analysis that has gone into our own national cybersecurity, we seem perpetually taken by surprise when we are attacked. While foreign interference in national elections isn't new, and something the US has repeatedly done, electronic interference is a different animal.
The Obama administration is considering how to respond, but politics will get in the way. Were this an attack against a popular Internet company, or a piece of our physical infrastructure, we would all be together in response. But because these attacks affect one political party, the other party benefits. Even worse, the benefited candidate is actively inviting more foreign attacks against his opponent, though he now says he was just being sarcastic. Any response from the Obama administration or the FBI will be viewed through this partisan lens, especially because the president is a Democrat.
We need to rise above that. These threats are real and they affect us all, regardless of political affiliation. That this particular attack targeted the DNC is no indication of who the next attack might target. We need to make it clear to the world that we will not accept interference in our political process, whether by foreign countries or lone hackers.
However we respond to this act of aggression, we also need to increase the security of our election systems against all threats -- and quickly.
We tend to underestimate threats that haven't happened -- we discount them as "theoretical" -- and overestimate threats that have happened at least once. The terrorist attacks of 9/11 are a showcase example of that: administration officials ignored all the warning signs, and then drastically overreacted after the fact. These Russian attacks against our voting system have happened. And they will happen again, unless we take action.
If a foreign country attacked US critical infrastructure, we would respond as a nation against the threat. But if that attack falls along political lines, the response is more complicated. It shouldn't be. This is a national security threat against our democracy, and needs to be treated as such.
This essay previously appeared on CNN.com.
Photo of Bruce Schneier by Per Ervland.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.