For years, Humble Bundle has been selling great books under a "pay what you can afford" model. This month, they're featuring as many as nineteen cybersecurity books for as little as $1, including four of mine. These are digital copies, all DRM-free. Part of the money goes to support the EFF or Let's Encrypt. (The default is 15%, and you can change that.) As an EFF board member, I know that we've received a substantial amount from this program in previous years.
Google presented its system of using deep-learning techniques to identify malicious email attachments:
At the RSA security conference in San Francisco on Tuesday, Google's security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It's challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.
The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros -- the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.
This is the sort of thing that's pretty well optimized for machine-learning techniques.
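To make the idea concrete, here is a hypothetical sketch of a rule-based pre-filter for the kinds of red flags the article describes: macros, obfuscation, auto-execution. Google's actual scanner is a deep-learning model trained on billions of samples; the field names, weights, and threshold below are invented for illustration only.

```python
# Hypothetical sketch: a rule-based pre-filter for suspicious Office
# documents. The features and scores here are invented; a real scanner
# (like the one described above) learns these patterns from data.

def red_flag_score(doc: dict) -> int:
    """Score a parsed document by counting simple red flags."""
    score = 0
    if doc.get("has_macros"):             # macros chain commands together
        score += 2
    if doc.get("macro_obfuscated"):       # e.g. string-built shell commands
        score += 3
    if doc.get("auto_exec"):              # code runs as soon as the file opens
        score += 3
    if doc.get("external_links", 0) > 5:  # many remote payload references
        score += 1
    return score

def is_suspicious(doc: dict, threshold: int = 4) -> bool:
    return red_flag_score(doc) >= threshold

# Example: a macro-bearing document that executes on open
sample = {"has_macros": True, "auto_exec": True, "external_links": 0}
print(is_suspicious(sample))  # True (score 5 >= 4)
```

The weakness of hand-written rules like these is exactly what the article points out: 63 percent of malicious documents change from one day to the next, which is why a learned model that generalizes across variations does better than any fixed rule set.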
This law journal article discusses the role of class-action litigation to secure the Internet of Things.
Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It's a lot to read, but it's an interesting take on how to secure this otherwise disastrously insecure world.
And it was inspired by my book, Click Here to Kill Everybody.
A National Security Agency system that analyzed logs of Americans' domestic phone calls and text messages cost $100 million from 2015 to 2019, but yielded only a single significant investigation, according to a newly declassified study.
Moreover, only twice during that four-year period did the program generate unique information that the F.B.I. did not already possess, said the study, which was produced by the Privacy and Civil Liberties Oversight Board and briefed to Congress on Tuesday.
The privacy board, working with the intelligence community, got several additional salient facts declassified as part of the rollout of its report. Among them, it officially disclosed that the system has gained access to Americans' cellphone records, not just logs of landline phone calls.
It also disclosed that in the four years the Freedom Act system was operational, the National Security Agency produced 15 intelligence reports derived from it. Only two of those reports contained unique information; the other 13 contained information the F.B.I. had already collected through other means, like ordinary subpoenas to telephone companies.
The report cited two investigations in which the National Security Agency produced reports derived from the program: its analysis of the Pulse nightclub mass shooting in Orlando, Fla., in June 2016 and of the November 2016 attack at Ohio State University by a man who drove his car into people and slashed at them with a machete. But it did not say whether the investigations into either of those attacks were connected to the two intelligence reports that provided unique information not already in the possession of the F.B.I.
This is good news:
Whenever you visit a website -- even if it's HTTPS enabled -- the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can't be intercepted or hijacked in order to send a user to a malicious site.
But the move is not without controversy. Last year, an internet industry group branded Mozilla an "internet villain" for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.
The move to enable DoH by default will no doubt face resistance, but it's not a technology that browser makers have shied away from. Firefox became the first browser to implement DoH, with others, like Chrome, Edge, and Opera, quickly following suit.
I think DoH is a great idea, and long overdue.
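For the curious, the encrypted lookup described above is just an ordinary DNS query message carried over HTTPS. A minimal sketch of how an RFC 8484 GET request is assembled follows; the resolver URL shown is Cloudflare's public DoH endpoint, and everything else is a bare-bones illustration, not production code.

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message (RFC 1035 wire format)."""
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cache friendliness),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def doh_url(resolver: str, name: str) -> str:
    """Encode the query for an RFC 8484 GET request (base64url, no padding)."""
    q = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=")
    return f"{resolver}?dns={q.decode('ascii')}"

print(doh_url("https://cloudflare-dns.com/dns-query", "example.com"))
```

Fetching that URL over HTTPS returns the same wire-format DNS answer a traditional resolver would send, but an eavesdropper on the network sees only an encrypted connection to the resolver, not which name was looked up.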
The Times of London is reporting that Russian agents are in Ireland probing transatlantic communications cables.
Ireland is the landing point for undersea cables which carry internet traffic between America, Britain and Europe. The cables enable millions of people to communicate and allow financial transactions to take place seamlessly.
Garda and military sources believe the agents were sent by the GRU, the military intelligence branch of the Russian armed forces which was blamed for the nerve agent attack in Britain on Sergei Skripal, a former Russian intelligence officer.
Boing Boing post.
It's probably a juvenile:
Researchers aboard the New Zealand-based National Institute of Water and Atmospheric Research Ltd (NIWA) research vessel Tangaroa were on an expedition to survey hoki, New Zealand's most valuable commercial fish, in the Chatham Rise, an area of ocean floor to the east of New Zealand that makes up part of the "lost continent" of Zealandia.
At 7.30am on the morning of January 21, scientists were hauling up their trawler net from a depth of 442 meters (1,450 feet) when they were surprised to spot tentacles in amongst their catch. Large tentacles.
According to voyage leader and NIWA fisheries scientist Darren Stevens, who was on watch, it took six members of staff to lift the giant squid out of the net. Despite the squid being 4 meters long and weighing about 110 kilograms (240 pounds), Stevens said he thought the squid was "on the smallish side," compared to other behemoths caught.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
For decades, I have been talking about the importance of individual privacy. For almost as long, I have been using the metaphor of digital feudalism to describe how large companies have become central control points for our data. And for maybe half a decade, I have been talking about the world-sized robot that is the Internet of Things, and how digital security is now a matter of public safety. And most recently, I have been writing and speaking about how technologists need to get involved with public policy.
All of this is a long-winded way of saying that I have joined a company called Inrupt that is working to bring Solid, Tim Berners-Lee's distributed data ownership model, into the mainstream. (I think of Inrupt basically as the Red Hat of Solid.) I joined the Inrupt team last summer as its Chief of Security Architecture, and have been in stealth mode until now.
The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things -- your computer, your phone, your IoT whatever -- is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It's yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
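The access model described above can be sketched in a few lines. This is a hypothetical illustration of the pod idea, not Solid's actual mechanism: Solid's real access control is built on web standards and linked data, and every class and name below is invented.

```python
# Hypothetical sketch of the pod model: your data lives in one place you
# control, and you grant (and revoke) granular read access per resource,
# per consumer. Invented for illustration; not Solid's real API.

class Pod:
    def __init__(self, owner: str):
        self.owner = owner
        self._data = {}    # resource name -> value
        self._grants = {}  # resource name -> set of authorized agents

    def write(self, resource: str, value) -> None:
        self._data[resource] = value

    def grant(self, resource: str, agent: str) -> None:
        self._grants.setdefault(resource, set()).add(agent)

    def revoke(self, resource: str, agent: str) -> None:
        self._grants.get(resource, set()).discard(agent)

    def read(self, resource: str, agent: str):
        # Only the owner, or an agent granted access to this specific
        # resource, can read it.
        if agent != self.owner and agent not in self._grants.get(resource, set()):
            raise PermissionError(f"{agent} may not read {resource}")
        return self._data[resource]

pod = Pod(owner="alice")
pod.write("fitness/steps", 9241)
pod.grant("fitness/steps", "insurer.example")   # granular, per-resource
print(pod.read("fitness/steps", "insurer.example"))  # 9241
pod.revoke("fitness/steps", "insurer.example")  # and access is reversible
```

The point of the sketch is the inversion of control: the insurance company never holds a copy of your fitness data by default; it holds a revocable permission to read it from your pod.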
The ideal would be for this to be completely distributed. Everyone's pod would be on a computer they own, running on their network. But that's not how it's likely to be in real life. Just as you can theoretically run your own email server but in reality you outsource it to Google or whoever, you are likely to outsource your pod to those same sets of companies. But maybe pods will come standard issue in home routers. Even if you do hand your pod over to some company, it'll be like letting them host your domain name or manage your cell phone number. If you don't like what they're doing, you can always move your pod -- just like you can take your cell phone number and move to a different carrier. This will give users a lot more power.
I believe this will fundamentally alter the balance of power in a world where everything is a computer, and everything is producing data about you. Either IoT companies are going to enter into individual data sharing agreements, or they'll all use the same language and protocols. Solid has a very good chance of being that protocol. And security is critical to making all of this work. Just trying to grasp what sort of granular permissions are required, and how the authentication flows might work, is mind-altering. We're stretching pretty much every Internet security protocol to its limits, and beyond, just setting this up.
Building a secure technical infrastructure is largely about policy, but there's also a wave of technology that can shift things in one direction or the other. Solid is one of those technologies. It moves the Internet away from the overly centralized power of big corporations and governments and towards more rational distributions of power: greater liberty, better privacy, and more freedom for everyone.
I've worked with Inrupt's CEO, John Bruce, at both of my previous companies: Counterpane and Resilient. It's a little weird working for a start-up that is not a security company. (While security is essential to making Solid work, the technology is fundamentally about the functionality.) It's also a little surreal working on a project conceived and spearheaded by Tim Berners-Lee. But at this point, I feel that I should only work on things that matter to society. So here I am.
Whatever happens next, it's going to be a really fun ride.
Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts that went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don't remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn't become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.
Markey was against forcing encrypted phone providers to implement the NSA's Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.
Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath -- about surveillance -- and Click Here to Kill Everybody -- about IoT security -- are really about the policy implications of technology.
Over that time, I have observed many other differences between technologists and policy makers -- differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.
Technologists don't try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (Authentication risks surrounding someone's intimate partner are a good example.)
Policy doesn't work that way; it's specifically focused on use. It focuses on people and what they do. Policy makers can't create policy around a piece of technology without understanding how it is used -- how all of it is used.
Policy is often driven by exceptional events, like the FBI's desire to break the encryption on the San Bernardino shooter's iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It's hard to imagine policy makers creating laws around VR systems, because they don't yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.
As technologists, we iterate. It's how we write software. It's how we field products. We know we can't get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn't get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it's slower.
Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore's law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It's like tortoises and hummingbirds.
Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different -- and sometimes conflicting -- national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.
Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We're getting better -- NIST's recent work on trust is a good example -- but we have a long way to go. For example, Google's Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can't imagine calling a system trustworthy when you lose all your money if you forget your encryption key.
Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.
Finally, techies know that code is law -- that the restrictions and limitations of a technology are more fundamental than any human-created legal anything. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.
Yes, these are all generalizations and there are exceptions. It's also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won't make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren't two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.
In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who'd ruled on them came to exactly opposite conclusions. The law students took this in stride; it's the way the legal system works when it's wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn't a single correct answer.
But that's not how law works, and that's not how policy works. As the technologies we're creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we're building complex socio-technical systems that are literally creating a new world. And while it's easy to dismiss policy makers as doing it wrong, it's important to understand that they're not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.
This essay previously appeared in IEEE Security & Privacy.