March 15, 2017

by Bruce Schneier
CTO, IBM Resilient

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:

WikiLeaks Releases CIA Hacking Tools

Last Tuesday, WikiLeaks released a cache of 8,761 classified CIA documents from 2013 to 2016, including details of its offensive Internet operations. What follows are initial reactions and links, compiled from two blog posts last week.

There's a lot in here. Many of the hacking tools are redacted, with the tar files and zip archives replaced with messages like:


WikiLeaks has said that they're contacting companies who have zero-day vulnerabilities in the archive before releasing them.

The documents say that the CIA -- and other intelligence services -- can bypass Signal, WhatsApp and Telegram. It seems to be by hacking the end-user devices and grabbing the traffic before and after encryption, not by breaking the encryption.

This is on the WikiLeaks page:

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

So it sounds like this cache of documents wasn't taken from the CIA and given to WikiLeaks for publication, but has been passed around the community for a while -- and incidentally some part of the cache was passed to WikiLeaks. So there are more documents out there, and others may release them in unredacted form. WikiLeaks said that they have published less than 1% of what they have.

One document talks about Comodo version 5.X and version 6.X. Version 6 was released in Feb 2013. Version 7 was released in Apr 2014. This gives us a time window for that page, and for the cache in general. (WikiLeaks says that the documents cover 2013 to 2016.) If these tools are a few years out of date, it's similar to the NSA tools released by the "Shadow Brokers." Most of us thought the Shadow Brokers were the Russians, specifically releasing older NSA tools that had diminished value as secrets.

If I had to guess, I'd say the documents came from an outsider and not an insider. My reasoning: One, there is absolutely nothing illegal in the contents of any of this stuff. It's exactly what you'd expect the CIA to be doing in cyberspace. That makes the whistleblower motive less likely. And two, the documents are a few years old, making this more like the Shadow Brokers than Edward Snowden. An internal leaker would leak quickly. A foreign intelligence agency -- like the Russians -- would use the documents while they were fresh and valuable, and only expose them when the embarrassment value was greater. But, to be sure, I have no idea. I'm just guessing. And as of last week, the CIA believes that it is an insider.

As for the documents themselves, I really liked the best practice coding guidelines for malware, and the NOD cryptographic requirements. I am mentioned in the latter document:

Cryptographic jargon is utilized throughout this document. This jargon has precise and subtle meaning and should not be interpreted without careful understanding of the subject matter. Suggested reading includes "Practical Cryptography" by Schneier and Ferguson, RFCs 4251 and 4253, RFCs 5246 and 5430, and "Handbook of Applied Cryptography" by Menezes, van Oorschot, and Vanstone.

The most damning thing I've seen so far is yet more evidence that -- despite assurances to the contrary -- the US intelligence community hoards vulnerabilities in common Internet products and uses them for offensive purposes.

Malware coding guidelines:

NOD crypto requirements:

Botnets

Botnets have existed for at least a decade. As early as 2000, hackers were breaking into computers over the Internet and controlling them en masse from centralized systems. Among other things, the hackers used the combined computing power of these botnets to launch distributed denial-of-service attacks, which flood websites with traffic to take them down.

But now the problem is getting worse, thanks to a flood of cheap webcams, digital video recorders, and other gadgets in the "Internet of things." Because these devices typically have little or no security, hackers can take them over with little effort. And that makes it easier than ever to build huge botnets that take down much more than one site at a time.

In October, a botnet made up of 100,000 compromised gadgets knocked an Internet infrastructure provider partially offline. Taking down that provider, Dyn, resulted in a cascade of effects that ultimately caused a long list of high-profile websites, including Twitter and Netflix, to temporarily disappear from the Internet. More attacks are sure to follow: the botnet that attacked Dyn was created with publicly available malware called Mirai that largely automates the process of co-opting computers.

The best defense would be for everything online to run only secure software, so botnets couldn't be created in the first place. This isn't going to happen anytime soon. Internet of things devices are not designed with security in mind and often have no way of being patched. The things that have become part of Mirai botnets, for example, will be vulnerable until their owners throw them away. Botnets will get larger and more powerful simply because the number of vulnerable devices will go up by orders of magnitude over the next few years.

What do hackers do with them? Many things.

Botnets are used to commit click fraud. Click fraud is a scheme to fool advertisers into thinking that people are clicking on, or viewing, their ads. There are lots of ways to commit click fraud, but the easiest is probably for the attacker to embed a Google ad in a Web page he owns. Google ads pay a site owner according to the number of people who click on them. The attacker instructs all the computers on his botnet to repeatedly visit the Web page and click on the ad. Dot, dot, dot, PROFIT! If the botnet makers figure out more effective ways to siphon revenue from big companies online, we could see the whole advertising model of the Internet crumble.

Similarly, botnets can be used to evade spam filters, which work partly by knowing which computers are sending millions of e-mails. They can speed up password guessing to break into online accounts, mine bitcoins, and do anything else that requires a large network of computers. This is why botnets are big businesses. Criminal organizations rent time on them.

But the botnet activities that most often make headlines are denial-of-service attacks. Dyn seems to have been the victim of some angry hackers, but more financially motivated groups use these attacks as a form of extortion. Political groups use them to silence websites they don't like. Such attacks will certainly be a tactic in any future cyberwar.

Once you know a botnet exists, you can attack its command-and-control system. When botnets were rare, this tactic was effective. As they get more common, this piecemeal defense will become less so. You can also secure yourself against the effects of botnets. For example, several companies sell defenses against denial-of-service attacks. Their effectiveness varies, depending on the severity of the attack and the type of service.

But overall, the trends favor the attacker. Expect more attacks like the one against Dyn in the coming year.

This essay previously appeared in the MIT Technology Review.

Dyn attack:

Insecurities and the Internet of Things:

What a hacked PC is used for:


How clickfraud could kill the online ad industry:

Renting time on the Mirai botnet:

DDoS extortion:


Duqu 2.0 is a really impressive piece of malware, related to Stuxnet and probably written by a national intelligence agency. One of its security features is that it stays resident in its host's memory without ever writing persistent files to the system's drives. Now, this same technique is being used by criminals.

Verizon's Data Breach Digest 2017 describes an attack against an unnamed university by attackers who hacked a variety of IoT devices and had them spam network targets and slow them down.

The German government has classified an Internet-connected doll as illegal spyware. "Under German law it is illegal to manufacture, sell or possess surveillance devices disguised as another object."

These days, it's rare that we learn something new from the Snowden documents. But Ben Buchanan found something interesting. The NSA penetrates enemy networks in order to enhance our defensive capabilities.

The first collision in the SHA-1 hash function has been found. This is not a surprise; we've all expected it for over a decade, watching computing power increase. This is why NIST selected SHA-3 as SHA-1's eventual replacement back in 2012.
This 2012 analysis was pretty accurate:
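For anyone still shipping SHA-1, moving to SHA-3 is a one-line change in most languages. A minimal sketch using Python's standard hashlib module:

```python
import hashlib

message = b"attack at dawn"

# SHA-1: 160-bit digest, now demonstrably vulnerable to collisions.
legacy = hashlib.sha1(message).hexdigest()

# SHA-3-256: 256-bit digest, standardized by NIST as FIPS 202.
modern = hashlib.sha3_256(message).hexdigest()

print(legacy)   # 40 hex characters
print(modern)   # 64 hex characters
```

Note that a collision attack doesn't break every use of SHA-1 equally -- HMAC-SHA1, for example, isn't directly affected -- but there's no reason to design anything new around it.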

The Intercept has a long article on the relationship between Palantir Technologies and the NSA, based on the Snowden documents.

This is an excellent survey article on modern propaganda techniques, how they work, and how we might defend ourselves against them. Cory Doctorow summarizes the techniques on BoingBoing: "In Russia, it's about flooding the channel with a mix of lies and truth, crowding out other stories; in China, it's about suffocating arguments with happy-talk distractions, and for trolls like Milo Yiannopoulos, it's weaponizing hate, outraging people so they spread your message to the small, diffused minority of broken people who welcome your message and would otherwise be uneconomical to reach." As to defense: "Debunking doesn't work: provide an alternative narrative."

At a talk last week, Mike Rogers, the head of US Cyber Command and the NSA, talked about the US buying cyberweapons from arms manufacturers. Third World countries are already buying from cyberweapons arms manufacturers. My guess is that he's right, and that the US will be doing the same in the future.

We all should be concerned about the privacy settings in Windows 10. And we should be glad that the EU has the regulatory authority to do something about it.

ProofMode is an app for your smartphone that adds data to the photos you take to prove that they are real and unaltered. This doesn't solve all the problems with fake photos, but it's a good step in the right direction.
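The core idea is simple: seal a cryptographic hash of the photo, plus capture metadata, at the moment it's taken. Here's a toy sketch of that idea (this is NOT ProofMode's actual format -- it uses OpenPGP signatures and much richer sensor metadata; the HMAC device key here is a stand-in):

```python
import hashlib, hmac, json, time

def make_proof(photo_bytes: bytes, device_key: bytes) -> str:
    """Seal a photo: hash it, record when it was sealed, and
    authenticate the record with a per-device secret key."""
    manifest = {
        "sha256": hashlib.sha256(photo_bytes).hexdigest(),
        "sealed_at": int(time.time()),
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    tag = hmac.new(device_key, body, hashlib.sha256).hexdigest()
    return json.dumps({"manifest": manifest, "hmac": tag})

def verify_proof(photo_bytes: bytes, device_key: bytes, proof: str) -> bool:
    """Check both the authenticity of the manifest and that the
    photo still matches the sealed hash."""
    record = json.loads(proof)
    body = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(device_key, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["hmac"])
            and record["manifest"]["sha256"]
                == hashlib.sha256(photo_bytes).hexdigest())
```

Any later edit to the photo changes its hash and breaks verification -- which is exactly why this proves integrity but not authenticity of the scene itself.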

Researchers have demonstrated how a malicious piece of software in an air-gapped computer can communicate with a nearby drone using a blinking LED on the computer.
I have mixed feelings about research like this. On the one hand, it's pretty cool. On the other hand, there's not really anything new or novel, and it's kind of a movie-plot threat.
Here's a 2002 paper on this idea:

The delightful story of hacking Marconi's wireless in 1903:

The New York Times reports that Uber developed apps that identified and blocked government regulators using the app to find evidence of illegal behavior. It's a complicated app that relies on Uber's ability to surveil its customers.
When Edward Snowden exposed the fact that the NSA does this sort of thing, I commented that these technologies would eventually become cheap enough for corporations to use. Now they have. One discussion we need to have is whether or not this behavior is legal. But another, more important, discussion is whether or not it is ethical. Do we want to live in a society where corporations wield this sort of power against government? Against individuals? Because if we don't push back against this kind of behavior, it'll become the norm.

Longtime Internet security-policy pioneer Howard Schmidt died earlier this month. He will be missed.

Matthew Green and his students speculate on what a truly well-designed ransomware system could look like:
One of the reasons society hasn't destroyed itself is that people with intelligence and skills tend to not be criminals for a living. If it ever became a viable career path, we're doomed.

The New York Times is reporting that the US has been conducting offensive cyberattacks against North Korea, in an effort to delay its nuclear weapons program.

Google's Project Zero is serious about releasing the details of security vulnerabilities 90 days after it alerts the vendors, even if they're unpatched. It just exposed a nasty vulnerability in Microsoft's browsers -- the second unpatched Microsoft vulnerability it has exposed recently. I'm a big fan of responsible disclosure. The threat to publish vulnerabilities is what puts pressure on vendors to patch their systems. But I wonder what competitive pressure is on the Google team to find embarrassing vulnerabilities in competitors' products.

The Department of Justice is dropping all charges in a child-porn case rather than release the details of a hack against Tor.

Brian Krebs posts a video advertisement for Philadelphia, a ransomware package that you can purchase.

I am part of this very interesting project: the Digital Security Exchange. Our goal is to provide better security for high-risk communities.

Some good election security news for a change: France is dropping its plans for remote Internet voting, because it's concerned about hacking.

CloudPets are Internet-connected stuffed animals that allow children and parents to send each other voice messages. Last week, we learned that the manufacturer, Spiral Toys, had such poor security that it exposed 800,000 customer credentials and two million audio recordings.

Defense Against Doxing

A decade ago, I wrote about the death of ephemeral conversation. As computers were becoming ubiquitous, some unintended changes happened, too. Before computers, what we said disappeared once we'd said it. Neither face-to-face conversations nor telephone conversations were routinely recorded. A permanent communication was something different and special; we called it correspondence.

The Internet changed this. We now chat by text message and e-mail, on Facebook and on Instagram. These conversations -- with friends, lovers, colleagues, fellow employees -- all leave electronic trails. And while we know this intellectually, we haven't truly internalized it. We still think of conversation as ephemeral, forgetting that we're being recorded and what we say has the permanence of correspondence.

That our data is used by large companies for psychological manipulation -- we call this advertising -- is well known. So is its use by governments for law enforcement and, depending on the country, social control. What made the news over the past year were demonstrations of how vulnerable all of this data is to hackers and the effects of having it hacked, copied, and then published online. We call this doxing.

Doxing isn't new, but it has become more common. It's been perpetrated against corporations, law firms, individuals, the NSA and -- just this week -- the CIA. It's largely harassment and not whistleblowing, and it's not going to change anytime soon. The data in your computer and in the cloud are, and will continue to be, vulnerable to hacking and publishing online. Depending on your prominence and the details of this data, you may need some new strategies to secure your private life.

There are two basic ways hackers can get at your e-mail and private documents. One way is to guess your password. That's how hackers got their hands on personal photos of celebrities from iCloud in 2014.

How to protect yourself from this attack is pretty obvious. First, don't choose a guessable password. This is more than not using "password1" or "qwerty"; most easily memorizable passwords are guessable. My advice is to generate passwords you have to remember by using either the XKCD scheme or the Schneier scheme, and to use large random passwords stored in a password manager for everything else.
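The XKCD scheme -- several random words from a large list -- is easy to automate. A minimal sketch (the tiny wordlist here is a stand-in; in practice you'd use a large list such as the EFF's ~7,776-word diceware list):

```python
import secrets

# Stand-in wordlist; use a real diceware list for actual passwords.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "copper", "meadow", "signal", "walrus", "ember", "quartz"]

def xkcd_passphrase(n_words: int = 5) -> str:
    # secrets.choice draws from the OS CSPRNG -- unlike random.choice,
    # which is predictable and unsuitable for passwords.
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(xkcd_passphrase())
```

With a 7,776-word list, five words give roughly 64 bits of entropy; the security comes entirely from the random selection, not from the words themselves.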

Second, turn on two-factor authentication where you can, like Google's 2-Step Verification. This adds another step besides just entering a password, such as having to type in a one-time code that's sent to your mobile phone. And third, don't reuse the same password on any sites you actually care about.
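Under the hood, most authenticator apps implement TOTP (RFC 6238): an HMAC-SHA1 of the current 30-second time step, dynamically truncated to a short decimal code. A minimal standard-library sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, interval=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    step = struct.pack(">Q", int(now // interval))
    digest = hmac.new(key, step, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59, digits=8))
# -> 94287082
```

Because the code depends on a shared secret plus the clock, an attacker who steals only your password still can't log in -- though, as noted below, a good phishing page can relay a live code in real time.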

You're not done, though. Hackers have accessed accounts by exploiting the "secret question" feature and resetting the password. That was how Sarah Palin's e-mail account was hacked in 2008. The problem with secret questions is that they're not very secret and not very random. My advice is to refuse to use those features. Type randomness into your keyboard, or choose a really random answer and store it in your password manager.
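"Type randomness into your keyboard" is easy to do properly in code -- treat each secret question as just another random credential stored alongside the password:

```python
import secrets

# A researchable fact ("mother's maiden name") is a terrible secret;
# replace it with randomness no one can look up or guess.
answer = secrets.token_urlsafe(16)   # ~128 bits from the OS CSPRNG
print("Answer for 'mother's maiden name':", answer)
```

Store the result in your password manager; you'll never need to remember it, only paste it if the site ever asks.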

Finally, you also have to stay alert to phishing attacks, where a hacker sends you an enticing e-mail with a link that sends you to a web page that looks *almost* like the expected page, but which actually isn't. This sort of thing can bypass two-factor authentication, and is almost certainly what tricked John Podesta and Colin Powell.

The other way hackers can get at your personal stuff is by breaking in to the computers the information is stored on. This is how the Russians got into the Democratic National Committee's network and how a lone hacker got into the Panamanian law firm Mossack Fonseca. Sometimes individuals are targeted, as when China hacked Google in 2010 to access the e-mail accounts of human rights activists. Sometimes the whole network is the target, and individuals are inadvertent victims, as when thousands of Sony employees had their e-mails published by North Korea in 2014.

Protecting yourself is difficult, because it often doesn't matter what you do. If your e-mail is stored with a service provider in the cloud, what matters is the security of that network and that provider. Most users have no control over that part of the system. The only way to truly protect yourself is to not keep your data in the cloud where someone could get to it. This is hard. We like the fact that all of our e-mail is stored on a server somewhere and that we can instantly search it. But that convenience comes with risk. Consider deleting old e-mail, or at least downloading it and storing it offline on a portable hard drive. In fact, storing data offline is one of the best things you can do to protect it from being hacked and exposed. If it's on your computer, what matters is the security of your operating system and network, not the security of your service provider.

Consider this for files on your own computer. The more things you can move offline, the safer you'll be.

E-mail, no matter how you store it, is vulnerable. If you're worried about your conversations becoming public, think about an encrypted chat program instead, such as Signal, WhatsApp or Off-the-Record Messaging. Consider using communications systems that don't save everything by default.

None of this is perfect, of course. Portable hard drives are vulnerable when you connect them to your computer. There are ways to jump air gaps and access data on computers not connected to the Internet. Communications and data files you delete might still exist in backup systems somewhere -- either yours or those of the various cloud providers you're using. And always remember that there's always another copy of any of your conversations stored with the person you're conversing with. Even with these caveats, though, these measures will make a big difference.

When secrecy is truly paramount, go back to communications systems that are still ephemeral. Pick up the telephone and talk. Meet face to face. We don't yet live in a world where everything is recorded and everything is saved, although that era is coming. Enjoy the last vestiges of ephemeral conversation while you still can.

This essay originally appeared in the Washington Post.

Me on ephemeral conversation:

Why this is largely not whistleblowing:

New strategies:


Google's 2-Step Verification:

How hackers tricked Podesta and Powell:

China and Google:

North Korea and Sony:

Off-the-Record Messaging:

Schneier News

Last November, I gave a talk at the TEDMED Conference on health and medical data privacy. The talk is now online.

This is my talk at the RSA Conference last month.
It's on regulation and the Internet of Things, along the lines of this essay.
I am slowly meandering around this as a book topic. It hasn't quite solidified yet.

I'll be speaking at IBM InterConnect in Las Vegas on March 21.

I'll be speaking at RightsCon in Brussels on March 30-31.

Buzzword Watch: Prosilience

Summer Fowler at CMU has invented a new word: prosilience:

I propose that we build operationally PROSILIENT organizations. If operational resilience, as we like to say, is risk management "all grown up," then prosilience is resilience with consciousness of environment, self-awareness, and the capacity to evolve. It is not about being able to operate through disruption, it is about anticipating disruption and adapting before it even occurs--a proactive version of resilience. Nascent prosilient capabilities include exercises (tabletop or technical) that simulate how organizations would respond to a scenario. The goal, however, is to automate, expand, and perform continuous exercises based on real-world indicators rather than on scenarios.

I have long been a big fan of resilience as a security concept, and the property we should be aiming for. I'm not sure prosilience buys me anything new, but this is my first encounter with this new buzzword. It would certainly make for a best-selling business-book title.

Me on resilience:

The CIA's "Development Tradecraft DOs and DON'Ts"

Useful best practices for malware writers, courtesy of the CIA. Seems like a lot of good advice.

General:

* DO obfuscate or encrypt all strings and configuration data that directly relate to tool functionality. Consideration should be made to also only de-obfuscating strings in-memory at the moment the data is needed. When a previously de-obfuscated value is no longer needed, it should be wiped from memory.

Rationale: String data and/or configuration data is very useful to analysts and reverse-engineers.

* DO NOT decrypt or de-obfuscate all string data or configuration data immediately upon execution.

Rationale: Raises the difficulty for automated dynamic analysis of the binary to find sensitive data.

* DO explicitly remove sensitive data (encryption keys, raw collection data, shellcode, uploaded modules, etc) from memory as soon as the data is no longer needed in plain-text form. DO NOT RELY ON THE OPERATING SYSTEM TO DO THIS UPON TERMINATION OF EXECUTION.

Rationale: Raises the difficulty for incident response and forensics review.

* DO utilize a deployment-time unique key for obfuscation/de-obfuscation of sensitive strings and configuration data.

Rationale: Raises the difficulty of analysis of multiple deployments of the same tool.

* DO strip all debug symbol information, manifests(MSVC artifact), build paths, developer usernames from the final build of a binary.

Rationale: Raises the difficulty for analysis and reverse-engineering, and removes artifacts used for attribution/origination.

* DO strip all debugging output (e.g. calls to printf(), OutputDebugString(), etc) from the final build of a tool.

Rationale: Raises the difficulty for analysis and reverse-engineering.

* DO NOT explicitly import/call functions that is not consistent with a tool's overt functionality (i.e. WriteProcessMemory, VirtualAlloc, CreateRemoteThread, etc - for binary that is supposed to be a notepad replacement).

Rationale: Lowers potential scrutiny of binary and slightly raises the difficulty for static analysis and reverse-engineering.

* DO NOT export sensitive function names; if having exports are required for the binary, utilize an ordinal or a benign function name.

Rationale: Raises the difficulty for analysis and reverse-engineering.

* DO NOT generate crashdump files, coredump files, "Blue" screens, Dr Watson or other dialog pop-ups and/or other artifacts in the event of a program crash. DO attempt to force a program crash during unit testing in order to properly verify this.

Rationale: Avoids suspicion by the end user and system admins, and raises the difficulty for incident response and reverse-engineering.

* DO NOT perform operations that will cause the target computer to be unresponsive to the user (e.g. CPU spikes, screen flashes, screen "freezing", etc).

Rationale: Avoids unwanted attention from the user or system administrator to tool's existence and behavior.

* DO make all reasonable efforts to minimize binary file size for all binaries that will be uploaded to a remote target (without the use of packers or compression). Ideal binary file sizes should be under 150KB for a fully featured tool.

Rationale: Shortens overall "time on air" not only to get the tool on target, but to time to execute functionality and clean-up.

* DO provide a means to completely "uninstall"/"remove" implants, function hooks, injected threads, dropped files, registry keys, services, forked processes, etc whenever possible. Explicitly document (even if the documentation is "There is no uninstall for this <feature>") the procedures, permissions required and side effects of removal.

Rationale: Avoids unwanted data left on target. Also, proper documentation allows operators to make better operational risk assessment and fully understand the implications of using a tool or specific feature of a tool.

* DO NOT leave dates/times such as compile timestamps, linker timestamps, build times, access times, etc. that correlate to general US core working hours (i.e. 8am-6pm Eastern time)

Rationale: Avoids direct correlation to origination in the United States.

* DO NOT leave data in a binary file that demonstrates CIA, USG, or its witting partner companies involvement in the creation or use of the binary/tool.

Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

* DO NOT have data that contains CIA and USG cover terms, compartments, operation code names or other CIA and USG specific terminology in the binary.

Rationale: Attribution of binary/tool/etc by an adversary can cause irreversible impacts to past, present and future USG operations and equities.

* DO NOT have "dirty words" (see dirty word list - TBD) in the binary.

Rationale: Dirty words, such as hacker terms, may cause unwarranted scrutiny of the binary file in question.

Networking:

* DO use end-to-end encryption for all network communications. NEVER use networking protocols which break the end-to-end principle with respect to encryption of payloads.

Rationale: Stifles network traffic analysis and avoids exposing operational/collection data.

* DO NOT solely rely on SSL/TLS to secure data in transit.

Rationale: Numerous man-in-middle attack vectors and publicly disclosed flaws in the protocol.

* DO NOT allow network traffic, such as C2 packets, to be re-playable.

Rationale: Protects the integrity of operational equities.

* DO use ITEF RFC compliant network protocols as a blending layer. The actual data, which must be encrypted in transit across the network, should be tunneled through a well known and standardized protocol (e.g. HTTPS)

Rationale: Custom protocols can stand-out to network analysts and IDS filters.

* DO NOT break compliance of an RFC protocol that is being used as a blending layer. (i.e. Wireshark should not flag the traffic as being broken or mangled)

Rationale: Broken network protocols can easily stand-out in IDS filters and network analysis.

* DO use variable size and timing (aka jitter) of beacons/network communications. DO NOT predicatively send packets with a fixed size and timing.

Rationale: Raises the difficulty of network analysis and correlation of network activity.

* DO proper cleanup of network connections. DO NOT leave around stale network connections.

Rationale: Raises the difficulty of network analysis and incident response.

Disk I/O:

* DO explicitly document the "disk forensic footprint" that could be potentially created by various features of a binary/tool on a remote target.

Rationale: Enables better operational risk assessments with knowledge of potential file system forensic artifacts.

* DO NOT read, write and/or cache data to disk unnecessarily. Be cognizant of 3rd party code that may implicitly write/cache data to disk.

Rationale: Lowers potential for forensic artifacts and potential signatures.

* DO NOT write plain-text collection data to disk.

Rationale: Raises difficulty of incident response and forensic analysis.

* DO encrypt all data written to disk.

Rationale: Disguises intent of file (collection, sensitive code, etc) and raises difficulty of forensic analysis and incident response.

* DO utilize a secure erase when removing a file from disk that wipes at a minimum the file's filename, datetime stamps (create, modify and access) and its content. (Note: The definition of "secure erase" varies from filesystem to filesystem, but at least a single pass of zeros of the data should be performed. The emphasis here is on removing all filesystem artifacts that could be useful during forensic analysis)

Rationale: Raises difficulty of incident response and forensic analysis.

* DO NOT perform Disk I/O operations that will cause the system to become unresponsive to the user or alerting to a System Administrator.

Rationale: Avoids unwanted attention from the user or system administrator to tool's existence and behavior.

* DO NOT use a "magic header/footer" for encrypted files written to disk. All encrypted files should be completely opaque data files.

Rationale: Avoids signature of custom file format's magic values.

* DO NOT use hard-coded filenames or filepaths when writing files to disk. This must be configurable at deployment time by the operator.

Rationale: Allows operator to choose the proper filename that fits with in the operational target.

* DO have a configurable maximum size limit and/or output file count for writing encrypted output files.

Rationale: Avoids situations where a collection task can get out of control and fills the target's disk; which will draw unwanted attention to the tool and/or the operation.

Dates/Time:

* DO use GMT/UTC/Zulu as the time zone when comparing date/time.

Rationale: Provides consistent behavior and helps ensure "triggers/beacons/etc" fire when expected.

* DO NOT use US-centric timestamp formats such as MM-DD-YYYY. YYYYMMDD is generally preferred.

Rationale: Maintains consistency across tools, and avoids associations with the United States.

PSP/AV:

* DO NOT assume a "free" PSP product is the same as a "retail" copy. Test on all SKUs where possible.

Rationale: While the PSP/AV product may come from the same vendor and appear to have the same features despite having different SKUs, they are not. Test on all SKUs where possible.

* DO test PSPs with live (or recently live) internet connection where possible. NOTE: This can be a risk vs gain balance that requires careful consideration and should not be haphazardly done with in-development software. It is well known that PSP/AV products with a live internet connection can and do upload samples software based varying criteria.

Rationale: PSP/AV products exhibit significant differences in behavior and detection when connected to the internet vise not.

Encryption: NOD publishes a Cryptography standard: NOD Cryptographic Requirements v1.1 TOP SECRET.pdf. Besides the guidance provided here, the requirements in that document should also be met.
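It's worth noting that the "wipe sensitive data from memory" item in the General section is ordinary defensive-crypto hygiene, not just spy tradecraft -- good crypto libraries do the same. A sketch of the idea in Python, where a mutable bytearray (unlike an immutable bytes or str) can actually be overwritten in place:

```python
import hashlib
import hmac

# Keep the key in a mutable buffer so it can be overwritten in place.
key = bytearray(b"session-key-material")
tag = hmac.new(key, b"message", hashlib.sha256).hexdigest()

# Zero the key as soon as it's no longer needed, rather than waiting
# for garbage collection to (maybe, eventually) reclaim it.
for i in range(len(key)):
    key[i] = 0

assert all(byte == 0 for byte in key)
```

This is only a sketch: Python offers no guarantee that internal copies of the key weren't made along the way (hmac itself copies it). Languages like C give you explicit_bzero() for a real guarantee.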

News article:

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books -- including "Liars and Outliers: Enabling the Trust Society Needs to Survive" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.

Copyright (c) 2017 by Bruce Schneier.

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.