Crypto-Gram

November 15, 2016

by Bruce Schneier
CTO, Resilient, an IBM Company
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2016/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
     Election Security
     News
     Lessons From the Dyn DDoS Attack
     Regulation of the Internet of Things
     Schneier News
     Virtual Kidnapping
     Intelligence Oversight and How It Can Fail
     Whistleblower Investigative Report on NSA Suite B Cryptography


Election Security

It’s over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can’t ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter’s choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc systems the states use for collecting and tabulating individual voting results are vulnerable as well. While the difference between theoretical if demonstrable vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. And it’s not just presidential elections that are at risk, but state and local elections, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and “solutions” to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here’s my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it’s a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There’s no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and post-election security audits to increase confidence in the system.
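
To make the audit step concrete, here is a minimal Python sketch of the core idea: randomly sample the paper ballots and compare the scanner’s record of each against a hand reading. The data, sample size, and field names are hypothetical; real risk-limiting audits derive their sample sizes statistically.

    import random

    def audit_discrepancy_rate(ballots, sample_size, seed=2016):
        """Sample ballots at random and compare the machine's
        interpretation of each against a hand reading of the paper."""
        rng = random.Random(seed)  # fixed, published seed keeps the audit reproducible
        sample = rng.sample(ballots, sample_size)
        mismatches = sum(1 for b in sample
                         if b["machine_read"] != b["hand_read"])
        return mismatches / sample_size

    # Hypothetical data: a scanner misread 10 of 10,000 paper ballots.
    ballots = [{"machine_read": "A", "hand_read": "A"}] * 9990
    ballots += [{"machine_read": "A", "hand_read": "B"}] * 10
    print("observed discrepancy rate:", audit_discrepancy_rate(ballots, 500))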

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow—both technical procedures to figure out what happened, and legal procedures to figure out what to do—that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states—Indiana, in particular—set up a “war room” of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser—and all the supporters—that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be *shown* to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.
http://www.nytimes.com/2016/11/09/opinion/…

Election-machine vulnerabilities:
https://www.washingtonpost.com/posteverything/wp/…

Elections are hard to rig:
https://www.washingtonpost.com/news/the-fix/wp/2016/…

Voting systems as critical infrastructure:
https://papers.ssrn.com/sol3/papers.cfm?…

Voting machine security:
https://www.verifiedvoting.org/
http://votingmachines.procon.org/view.answers.php?…
http://votingmachines.procon.org/view.answers.php?…

Election-defense preparations for 2016:
http://www.usatoday.com/story/tech/news/2016/11/05/…
http://www.nbcnews.com/storyline/2016-election-day/…


News

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security.
https://securingthehuman.sans.org/…

Researchers have discovered a clever attack that bypasses address space layout randomization (ASLR) on Intel CPUs.
http://arstechnica.com/security/2016/10/…
http://www.cs.ucr.edu/~nael/pubs/micro16.pdf

In an interview in Wired, President Obama talks about AI risk, cybersecurity, and more.
https://www.wired.com/2016/10/…

Privacy makes workers more productive. Interesting research.
https://www.psychologytoday.com/…

News about the DDoS attacks against Dyn.
https://motherboard.vice.com/read/…
https://krebsonsecurity.com/2016/10/…
https://motherboard.vice.com/read/…

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.
https://policyreview.info/articles/analysis/…

The UK is admitting to conducting “offensive cyber” operations against ISIS/Daesh. I think this might be the first time such operations have been openly acknowledged.
https://www.theguardian.com/politics/live/2016/…

It’s not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream—and the future isn’t too far off.
http://www.nytimes.com/2016/10/24/technology/…

Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn’t to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.
http://www.telegraph.co.uk/science/2016/10/23/…

Good long article on the 2015 attack against the US Office of Personnel Management.
https://www.wired.com/2016/10/…

How Powell’s and Podesta’s e-mail accounts were hacked. It was phishing.
https://motherboard.vice.com/read/…

A year and a half ago, I wrote about hardware bit-flipping attacks, which were then largely theoretical. Now, they can be used to root Android phones.
http://arstechnica.com/security/2016/10/…
https://vvdveen.com/publications/drammer.pdf
https://www.vusec.net/projects/drammer/

Eavesdropping on typing while connected over VoIP.
https://arxiv.org/pdf/1609.09359.pdf
https://news.uci.edu/research/…

An impressive Chinese device that automatically reads marked cards in order to cheat at poker and other card games.
https://www.elie.net/security/…

A useful guide on how to avoid kidnapping children on Halloween.
http://reductress.com/post/…

A card game based on the iterated prisoner’s dilemma.
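
For readers who don’t know the underlying game, here is a toy Python simulation of the iterated prisoner’s dilemma, pitting the classic tit-for-tat strategy against an opponent who always defects; the payoff values are the conventional textbook ones, not anything from the card game itself.

    # Payoffs (row player, column player): C = cooperate, D = defect.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(rounds=10):
        """Tit-for-tat (open with C, then copy the opponent's last
        move) against an opponent who always defects."""
        tft_move = "C"
        scores = [0, 0]
        for _ in range(rounds):
            opp_move = "D"
            a, b = PAYOFF[(tft_move, opp_move)]
            scores[0] += a
            scores[1] += b
            tft_move = opp_move  # copy what the opponent just did
        return scores

    print("tit-for-tat vs. always-defect over 10 rounds:", play())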
https://opinionatedgamers.com/2016/10/26/…

There’s another leak of NSA hacking tools and data from the Shadow Brokers. This one includes a list of hacked sites. The data is old, but you can see if you’ve been hacked.
http://arstechnica.co.uk/security/2016/10/…
Honestly, I am surprised by this release. I thought that the original Shadow Brokers dump was everything. Now that we know they held things back, there could easily be more releases.
http://www.networkworld.com/article/3137065/…
Note that the Hague-based Organization for the Prohibition of Chemical Weapons is on the list, hacked in 2000.
https://boingboing.net/2016/11/06/…

Free cybersecurity MOOC from F-Secure and the University of Helsinki.
http://mooc.fi/courses/2016/cybersecurity/

Researchers have trained a neural network to encrypt its communications. This story is more about AI and neural networks than it is about cryptography. The algorithm isn’t any good, but it is a perfect example of what I’ve heard called “Schneier’s Law”: Anyone can design a cipher that they themselves cannot break.
https://www.newscientist.com/article/…
http://arstechnica.com/information-technology/2016/…
https://www.engadget.com/2016/10/28/…
https://arxiv.org/pdf/1610.06918v1.pdf
Schneier’s Law:
https://www.schneier.com/blog/archives/2011/04/…

Google now links anonymous browser tracking with identifiable tracking. The article also explains how to opt out.
https://www.propublica.org/article/…

New Atlas has a great three-part feature on the history of hacking as portrayed in films, including video clips. The 1980s. The 1990s. The 2000s.
http://newatlas.com/history-hollywood-hacking-1980s/…
http://newatlas.com/hollywood-hacking-movies-1990s/…
http://newatlas.com/hollywood-hacking-2000s/45965

For years, the DMCA has been used to stifle legitimate research into the security of embedded systems. Finally, the research exemption to the DMCA is in effect (for two years, but we can hope it’ll be extended forever).
https://www.wired.com/2016/10/…
https://www.eff.org/deeplinks/2016/10/…

Firefox is removing the battery status API, citing privacy concerns.
https://www.fxsitecompat.com/en-CA/docs/2016/…
https://eprint.iacr.org/2015/616.pdf
W3C is updating the spec.
https://www.w3.org/TR/battery-status/#acknowledgements
Here’s a battery tracker found in the wild.
http://randomwalker.info/publications/…

Election-day humor from 2004, but still relevant.
http://www.ganssle.com/tem/tem316.html#article2

A self-propagating smart light bulb worm.
http://iotworm.eyalro.net/
https://boingboing.net/2016/11/09/…
https://tech.slashdot.org/story/16/11/09/0041201/…
This is exactly the sort of Internet-of-Things attack that has me worried.

Ad networks are surreptitiously using ultrasonic communications to jump from device to device. It should come as no surprise that this communications channel can be used to hack devices as well.
https://www.newscientist.com/article/…
https://www.schneier.com/blog/archives/2015/11/…

This is some interesting research. You can fool facial recognition systems by wearing glasses printed with elements of other people’s faces.
https://www.cs.cmu.edu/~sbhagava/papers/…
http://qz.com/823820/…
https://boingboing.net/2016/11/02/…

Interesting research: “Using Artificial Intelligence to Identify State Secrets,” https://arxiv.org/abs/1611.00356

There’s a Kickstarter for a sticker that you can stick on a glove and then register with a biometric access system like an iPhone. It’s an interesting security trade-off: swapping something you are (the biometric) with something you have (the glove).
https://www.kickstarter.com/projects/nanotips/…
https://gizmodo.com/…

Julian Oliver has designed and built a cellular eavesdropping device that’s disguised as an old HP printer. It’s more of a conceptual art piece than an actual piece of eavesdropping equipment, but it still makes the point.
https://julianoliver.com/output/stealth-cell-tower
https://www.wired.com/2016/11/…
https://boingboing.net/2016/11/03/…


Lessons From the Dyn DDoS Attack

A week ago Friday, someone took down numerous popular websites in a massive distributed denial-of-service (DDoS) attack against the domain name provider Dyn. DDoS attacks are neither new nor sophisticated. The attacker sends a massive amount of traffic, causing the victim’s system to slow to a crawl and eventually crash. There are more or less clever variants, but basically, it’s a datapipe-size battle between attacker and victim. If the defender has a larger capacity to receive and process data, he or she will win. If the attacker can throw more data than the victim can process, he or she will win.
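
As a back-of-the-envelope illustration of that bandwidth battle, here is a toy Python calculation. All of the numbers are hypothetical, chosen only to show the arithmetic, not measurements from the Dyn attack.

    # Toy model: a DDoS is decided by aggregate bandwidth.
    bots = 100_000               # hypothetical number of compromised devices
    per_bot_mbps = 10            # hypothetical upstream bandwidth per device
    victim_capacity_gbps = 100   # hypothetical inbound capacity of the victim

    attack_gbps = bots * per_bot_mbps / 1_000
    print(f"attack traffic: {attack_gbps:,.0f} Gbps "
          f"vs. victim capacity: {victim_capacity_gbps} Gbps")
    print("attacker wins" if attack_gbps > victim_capacity_gbps
          else "defender holds")

A hundred thousand devices sending a modest 10 Mbps each adds up to a terabit per second, which is why large botnets can overwhelm even well-provisioned victims.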

The attacker can build a giant data cannon, but that’s expensive. It is much smarter to recruit millions of innocent computers on the internet. This is the “distributed” part of the DDoS attack, and pretty much how it’s worked for decades. Cybercriminals infect innocent computers around the internet and recruit them into a botnet. They then target that botnet against a single victim.

You can imagine how it might work in the real world. If I can trick tens of thousands of others into ordering pizzas delivered to your house at the same time, I can clog up your street and prevent any legitimate traffic from getting through. If I can trick many millions, I might be able to crush your house under the weight. That’s a DDoS attack—it’s simple brute force.

As you’d expect, DDoSers have various motives. The attacks started out as a way to show off, then quickly transitioned to a method of intimidation—or a way of just getting back at someone you didn’t like. More recently, they’ve become vehicles of protest. In 2013, the hacker group Anonymous petitioned the White House to recognize DDoS attacks as a legitimate form of protest. Criminals have used these attacks as a means of extortion, although one group found that just the fear of attack was enough. Military agencies are also thinking about DDoS as a tool in their cyberwar arsenals. A 2007 DDoS attack against Estonia was blamed on Russia and widely called an act of cyberwar.

The DDoS attack against Dyn two weeks ago was nothing new, but it illustrated several important trends in computer security.

These attack techniques are broadly available. Fully capable DDoS attack tools are available for free download. Criminal groups offer DDoS services for hire. The particular attack technique used against Dyn was first used a month earlier. It’s called Mirai, and since the source code was released four weeks ago, over a dozen botnets have incorporated the code.

The Dyn attacks probably did not originate with a government. The perpetrators were most likely hackers angry at Dyn for helping Brian Krebs identify—and the FBI arrest—two Israeli hackers who were running a DDoS-for-hire ring. I have recently written about probing DDoS attacks against internet infrastructure companies that do appear to be perpetrated by a nation-state. But, honestly, we don’t know for sure.

This is important. Software spreads capabilities. The smartest attacker needs to figure out the attack and write the software. After that, anyone can use it. There’s not even much of a difference between government and criminal attacks. In December 2014, there was a legitimate debate in the security community as to whether the massive attack against Sony had been perpetrated by a nation-state with a $20 billion military budget or a couple of guys in a basement somewhere. The internet is the only place where we can’t tell the difference. Everyone uses the same tools, the same techniques and the same tactics.

These attacks are getting larger. The Dyn DDoS attack set a record at 1.2 Tbps. The previous record holder was the attack against cybersecurity journalist Brian Krebs a month prior at 620 Gbps. This is much larger than required to knock the typical website offline. A year ago, it was unheard of. Now it occurs regularly.

The botnets attacking Dyn and Brian Krebs consisted largely of unsecure Internet of Things (IoT) devices—webcams, digital video recorders, routers and so on. This isn’t new, either. We’ve already seen internet-enabled refrigerators and TVs used in DDoS botnets. But again, the scale is bigger now. In 2014, the news was hundreds of thousands of IoT devices—the Dyn attack used millions. Analysts expect the IoT to increase the number of things on the internet by a factor of 10 or more. Expect these attacks to similarly increase.

The problem is that these IoT devices are unsecure and likely to remain that way. The economics of internet security don’t trickle down to the IoT. Commenting on the Krebs attack last month, I wrote:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

To be fair, one company that made some of the unsecure things used in these attacks recalled its unsecure webcams. But this is more of a publicity stunt than anything else. I would be surprised if the company got many devices back. We already know that the reputational damage from having your unsecure software made public isn’t large and doesn’t last. At this point, the market still largely rewards sacrificing security in favor of price and time-to-market.

DDoS prevention works best deep in the network, where the pipes are the largest and the capability to identify and block the attacks is the most evident. But the backbone providers have no incentive to do this. They don’t feel the pain when the attacks occur and they have no way of billing for the service when they provide it. So they let the attacks through and force the victims to defend themselves. In many ways, this is similar to the spam problem. It, too, is best dealt with in the backbone, but similar economics dump the problem onto the endpoints.

We’re unlikely to get any regulation forcing backbone companies to clean up either DDoS attacks or spam, just as we are unlikely to get any regulations forcing IoT manufacturers to make their systems secure. This is me again:

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

That leaves the victims to pay. This is where we are in much of computer security. Because the hardware, software and networks we use are so unsecure, we have to pay an entire industry to provide after-the-fact security.

There are solutions you can buy. Many companies offer DDoS protection, although they’re generally calibrated to the older, smaller attacks. We can safely assume that they’ll up their offerings, although the cost might be prohibitive for many users. Understand your risks. Buy mitigation if you need it, but understand its limitations. Know the attacks are possible and will succeed if large enough. And the attacks are getting larger all the time. Prepare for that.

This essay previously appeared on the SecurityIntelligence website.
https://securityintelligence.com/…

https://securityintelligence.com/news/…
http://arstechnica.com/information-technology/2016/…
https://www.theguardian.com/technology/2016/oct/26/…
http://searchsecurity.techtarget.com/news/450401962/…
http://hub.dyn.com/static/hub.dyn.com/dyn-blog/…

DDoS petition:
http://www.huffingtonpost.com/2013/01/12/…

DDoS extortion:
https://securityintelligence.com/…
http://www.computerworld.com/article/3061813/…

DDoS against Estonia:
http://www.iar-gwu.org/node/65

DDoS for hire:
http://www.forbes.com/sites/thomasbrewster/2016/10/…

Mirai:
https://www.arbornetworks.com/asert/…
https://krebsonsecurity.com/2016/10/…
https://threatpost.com/…

Krebs:
http://krebsonsecurity.com/2016/09/…
http://www.theverge.com/2016/9/11/12878692/…
https://krebsonsecurity.com/2016/09/…
http://www.businessinsider.com/…

Nation-state DDoS Attacks:
https://www.schneier.com/blog/archives/2016/09/…

North Korea and Sony:
https://www.theatlantic.com/international/archive/…

Internet of Things (IoT) security:
https://securityintelligence.com/…
https://thehackernews.com/2014/01/…

Ever larger DDoS Attacks:
http://www.ibtimes.co.uk/…

My previous essay on this:
https://www.schneier.com/essays/archives/2016/10/…

recalled:
http://www.zdnet.com/article/…

identify and block the attacks:
http://www.ibm.com/security/threat-protection/


Regulation of the Internet of Things

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much failures of market and policy as of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if; it’s a question of when.

First, the facts. Those websites went down because their domain name provider—a company named Dyn—was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers—possibly millions—of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.
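
The dependency is easy to demonstrate. A browser has to resolve a site’s name before it can connect, so if the authoritative DNS servers are knocked offline, the site is effectively down even though its own servers are healthy. A minimal sketch using only Python’s standard library:

    import socket

    def check_site(hostname, port=443):
        """Step one of every web request: resolve the name.
        If the site's DNS provider is down, this fails and the
        site is unreachable no matter how healthy its servers are."""
        try:
            addr = socket.getaddrinfo(hostname, port)[0][4][0]
        except socket.gaierror:
            return f"{hostname}: DNS resolution failed -- effectively offline"
        return f"{hostname}: resolved to {addr}; connection can proceed"

    print(check_site("twitter.com"))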

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, lightbulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam—or thermostat, or refrigerator—with nice features at a good price. Even after they were recruited into this botnet, they still work fine—you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart lightbulbs could be used to start a chain reaction, resulting in them *all* being controlled by the attackers—that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the Sept. 11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex—and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter—it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.
https://www.washingtonpost.com/posteverything/wp/…

DDoS:
https://www.washingtonpost.com/news/the-switch/wp/…

IoT and DDoS:
https://krebsonsecurity.com/2016/10/…

The IoT market failure and regulation:
https://www.schneier.com/essays/archives/2016/10/…
https://www.wired.com/2014/01/…
http://www.computerworld.com/article/3136650/…

IoT ransomware:
https://motherboard.vice.com/read/…

Hacking medical devices:
http://motherboard.vice.com/read/…
http://abcnews.go.com/US/…

Hacking voting machines:
http://www.politico.com/magazine/story/2016/08/…

Hacking power plants:
https://www.wired.com/2016/01/…

Hacking light bulbs:
http://iotworm.eyalro.net


Schneier News

I am speaking in Cambridge, MA on November 15 at the Harvard Big-Data Club.
http://harvardbigdata.com/event/…

I am speaking in Palm Springs, CA on November 30 at the TEDMED Conference.
http://www.tedmed.com/speakers/show?id=627300

I am participating in the Resilient end-of-year webinar on December 8.
http://info.resilientsystems.com/…

I am speaking on December 14 in Accra at the University of Ghana.


Virtual Kidnapping

This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. It’s unclear whether these virtual kidnappers use data about their victims or just call people at random and hope to get lucky. Either way, it’s a new criminal use of smartphones and ubiquitous information. It reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.
https://www.washingtonpost.com/local/…
More stories are here.
http://www.nbcwashington.com/investigations/…


Intelligence Oversight and How It Can Fail

Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between the NSA’s and the FISA court’s meanings of the word “archived.” (For the record, I would have defaulted to the NSA’s interpretation, which feels more accurate technically.) But while the story is worth reading, what’s especially interesting are the broader issues about how a nontechnical judiciary can provide oversight of a very technical data-collection-and-analysis organization—especially when that oversight must largely be conducted in secret.

In many places I have distinguished between two kinds of oversight: are we doing things right, versus are we doing the right things? This is very much about the first: is the NSA complying with the rules the courts impose on it? I believe that the NSA tries very hard to follow the rules it’s given, while at the same time being very aggressive in how it interprets any kind of ambiguity and using its nonadversarial relationship with its overseers to its advantage.

The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.

https://www.lawfareblog.com/…


Whistleblower Investigative Report on NSA Suite B Cryptography

The NSA has been abandoning secret and proprietary cryptographic algorithms in favor of commercial public algorithms, generally known as “Suite B.” In 2010, an NSA employee filed some sort of whistleblower complaint, alleging that this move is both insecure and wasteful. The US DoD Inspector General investigated and wrote a report in 2011.

The report—slightly redacted and declassified—found that there was no wrongdoing. But the report is an interesting window into the NSA’s system of algorithm selection and testing (pages 5 and 6), as well as how they investigate whistleblower complaints.
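
Suite B’s algorithms are the same public primitives everyone else uses: AES, elliptic-curve Diffie-Hellman and ECDSA, and the SHA-2 hash family. As a small illustration of how ordinary they are, here is an AES-256-GCM round trip using the third-party Python “cryptography” package (pip install cryptography); a generic sketch of commercial crypto, not anything NSA-specific.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # AES-256, a Suite B cipher
    nonce = os.urandom(12)                     # 96-bit nonce; never reuse with a key
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"public algorithm, commercial code", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"public algorithm, commercial code"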

http://www.dodig.mil/FOIA/err/…

Suite B Cryptography:
http://csrc.nist.gov/groups/SMA/ispab/documents/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 13 books—including his latest, “Data and Goliath”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient, an IBM Company. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient, an IBM Company.

Copyright (c) 2016 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.