Friday Squid Blogging: Squid Bikes
Squid Bikes is a California brand. Article from Velo News.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Interesting paper: “Security Collapse of the HTTPS Market.” From the conclusion:
Recent breaches at CAs have exposed several systemic vulnerabilities and market failures inherent in the current HTTPS authentication model: the security of the entire ecosystem suffers if any of the hundreds of CAs is compromised (weakest link); browsers are unable to revoke trust in major CAs (“too big to fail”); CAs manage to conceal security incidents (information asymmetry); and ultimately customers and end users bear the liability and damages of security incidents (negative externalities).
Understanding the market and value chain for HTTPS is essential to address these systemic vulnerabilities. The market is highly concentrated, with very large price differences among suppliers and limited price competition. Paradoxically, the current vulnerabilities benefit rather than hurt the dominant CAs, because among others, they are too big to fail.
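One way to see the weakest-link problem concretely: any of the hundreds of trusted CAs can issue a certificate for any domain, and your browser will accept it. The following is only an illustrative sketch, not anything from the paper; the hostname is a placeholder, and the point is simply that the issuing CA is invisible to most users.

```python
# Illustration: see which certificate authority vouches for a given site.
# The hostname below is a placeholder; any HTTPS site will do.
import socket
import ssl

def issuer_of(hostname, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The "issuer" field names the CA that signed this certificate. A browser
    # would have accepted a certificate from any other trusted CA just as readily.
    return {name: value for rdn in cert["issuer"] for name, value in rdn}

print(issuer_of("www.example.com"))
```

That the issuer could have been any of several hundred organizations, none of them chosen by the site's visitors, is exactly the structural weakness the authors describe.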
This is an interesting paper—the full version is behind a paywall—about how we can motivate people to cooperate with future generations.
Abstract: Overexploitation of renewable resources today has a high cost on the welfare of future generations. Unlike in other public goods games, however, future generations cannot reciprocate actions made today. What mechanisms can maintain cooperation with the future? To answer this question, we devise a new experimental paradigm, the ‘Intergenerational Goods Game’. A line-up of successive groups (generations) can each either extract a resource to exhaustion or leave something for the next group. Exhausting the resource maximizes the payoff for the present generation, but leaves all future generations empty-handed. Here we show that the resource is almost always destroyed if extraction decisions are made individually. This failure to cooperate with the future is driven primarily by a minority of individuals who extract far more than what is sustainable. In contrast, when extractions are democratically decided by vote, the resource is consistently sustained. Voting is effective for two reasons. First, it allows a majority of cooperators to restrain defectors. Second, it reassures conditional cooperators that their efforts are not futile. Voting, however, only promotes sustainability if it is binding for all involved. Our results have implications for policy interventions designed to sustain intergenerational public goods.
Here’s a Q&A with, and essay by, the author. Article on the research.
EDITED TO ADD (12/10): A low-res version of the full article can be viewed here.
A new story based on the Snowden documents and published in the German newspaper Süddeutsche Zeitung shows how the GCHQ worked with Cable & Wireless—acquired by Vodafone in 2012—to eavesdrop on Internet and telecommunications traffic. New documents on the page, and here.
This is a creepy story. The FBI wanted access to a hotel guest’s room without a warrant. So agents broke his Internet connection, and then posed as Internet technicians to gain access to the room.
From the motion to suppress:
The next time you call for assistance because the internet service in your home is not working, the “technician” who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and—when he shows up at your door, impersonating a technician—let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have “consented” to an intrusive search of your home.
Basically, the agents snooped around the hotel room, and gathered evidence that they submitted to a magistrate to get a warrant. Of course, they never told the judge that they had engineered the whole outage and planted the fake technicians.
More coverage of the case here.
This feels like an important case to me. We constantly allow repair technicians into our homes to fix this or that technological thingy. If we can’t be sure they are not government agents in disguise, then we’ve lost quite a lot of our freedom and liberty.
Regin is another military-grade surveillance malware (tech details from Symantec and Kaspersky). It seems to have been in operation between 2008 and 2011. The Intercept has linked it to NSA/GCHQ operations, although I am still skeptical of the NSA/GCHQ hacking Belgian cryptographer Jean-Jacques Quisquater.
EDITED TO ADD (12/10): More information.
Nice article on some of the security assumptions we rely on in cryptographic algorithms.
Jim Sanborn has given the world another clue to the fourth ciphertext in his Kryptos sculpture at the CIA headquarters.
Tales of cephalopod behavior, including octopuses, squid, cuttlefish and nautiluses.
Cephalopod Cognition, published by Cambridge University Press, is currently available in hardcover, and the paperback edition will be available next week.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
AP is reporting that in 2009, several senior NSA officials objected to the NSA call-records collection program.
The now-retired NSA official, a longtime code-breaker who rose to top management, had just learned in 2009 about the top secret program that was created shortly after the Sept. 11, 2001, attacks. He says he argued to then-NSA Director Keith Alexander that storing the calling records of nearly every American fundamentally changed the character of the agency, which is supposed to eavesdrop on foreigners, not Americans.
Hacker News thread.
Citadel is the first piece of malware I know of that specifically steals master passwords from password managers. Note that my own Password Safe is a target.
Announcing Let’s Encrypt, a new free certificate authority. This is a joint project of EFF, Mozilla, Cisco, Akamai, and the University of Michigan.
This is an absolutely fantastic idea.
The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.
Let’s Encrypt is a new free certificate authority, built on a foundation of cooperation and openness, that lets everyone be up and running with basic server certificates for their domains through a simple one-click process.
[…]
The key principles behind Let’s Encrypt are:
- Free: Anyone who owns a domain can get a certificate validated for that domain at zero cost.
- Automatic: The entire enrollment process for certificates occurs painlessly during the server’s native installation or configuration process, while renewal occurs automatically in the background.
- Secure: Let’s Encrypt will serve as a platform for implementing modern security techniques and best practices.
- Transparent: All records of certificate issuance and revocation will be available to anyone who wishes to inspect them.
- Open: The automated issuance and renewal protocol will be an open standard and as much of the software as possible will be open source.
- Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the entire community, beyond the control of any one organization.
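As an illustration of the “Automatic” principle above, issuance with an ACME client such as certbot can be scripted in a few lines. This is only a sketch: the domain, e-mail address, and webroot path are placeholders, and certbot must already be installed on the host serving the domain.

```python
# Sketch: one-command certificate enrollment via the certbot ACME client.
# All domain names, paths, and addresses below are placeholders.
import subprocess

subprocess.run(
    [
        "certbot", "certonly",
        "--webroot", "-w", "/var/www/example",    # prove control of the domain over HTTP
        "-d", "example.com",                       # domain to validate and certify
        "--agree-tos", "-m", "admin@example.com",  # registration details
        "--non-interactive",
    ],
    check=True,
)
```

Renewal is the same idea run periodically (certbot renew), which is what turns certificate maintenance from a manual chore into background plumbing.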
Slashdot thread. Hacker News thread.
WhatsApp is now offering end-to-end message encryption:
Whatsapp will integrate the open-source software Textsecure, created by privacy-focused non-profit Open Whisper Systems, which scrambles messages with a cryptographic key that only the user can access and never leaves his or her device.
I don’t know the details, but the article talks about perfect forward secrecy. Moxie Marlinspike is involved, which gives me some confidence that it’s a robust implementation.
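For readers unfamiliar with the term, forward secrecy means that message keys come from short-lived key pairs, so a later compromise of a long-term key does not expose old conversations. Here is a minimal conceptual sketch using ephemeral X25519 Diffie-Hellman with the Python cryptography library; it illustrates the idea only, not the TextSecure protocol itself.

```python
# Conceptual sketch of forward secrecy: fresh (ephemeral) key pairs per session.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates a one-time key pair for this session.
alice_eph = X25519PrivateKey.generate()
bob_eph = X25519PrivateKey.generate()

# They exchange public keys and derive the same shared secret.
alice_shared = alice_eph.exchange(bob_eph.public_key())
bob_shared = bob_eph.exchange(alice_eph.public_key())
assert alice_shared == bob_shared

# A session key is derived from the shared secret; the ephemeral private keys
# are then discarded, which is what makes the secrecy "forward."
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session").derive(alice_shared)
print(session_key.hex())
```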
EDITED TO ADD (11/20): Slashdot thread.
The NSA recently declassified a report on the Eurocrypt ’92 conference. Honestly, I share some of the writer’s opinions on the more theoretical stuff. I know it’s important, but it’s not something I care all that much about.
New article on the NSA’s efforts to control academic cryptographic research in the 1970s. It includes new interviews with public-key cryptography inventor Martin Hellman and then-NSA Director Bobby Inman.
The interesting story of how engineers at Ford Motor Co. invented the superconducting quantum interference device, or SQUID.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Last month, for the first time since US export restrictions on cryptography were relaxed over a decade ago, the US government fined a company for exporting crypto software without a license.
News article.
No one knows what this means.
Pew Research has released a new survey on Americans’ perceptions of privacy. The results are pretty much in line with all the other surveys on privacy I’ve read. As Cory Doctorow likes to say, we’ve reached “peak indifference to surveillance.”
It’s not happening often, but it seems that some ISPs are blocking STARTTLS messages and causing email encryption to fail. EFF has the story.
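If you want to see whether STARTTLS is surviving your connection, one rough check (assuming outbound port 25 isn't blocked by your provider, and with a placeholder hostname) is to ask a mail server what capabilities it advertises:

```python
# Check whether a mail server advertises the STARTTLS capability.
# The hostname below is a placeholder.
import smtplib

def advertises_starttls(host, port=25):
    with smtplib.SMTP(host, port, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")

print(advertises_starttls("mail.example.com"))
```

If the same server advertises STARTTLS from another network but not from yours, something in the path is stripping it.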
Orin Kerr has a new article that argues for narrowly constructing national security law:
This Essay argues that Congress should adopt a rule of narrow construction of the national security surveillance statutes. Under this interpretive rule, which the Essay calls a “rule of lenity,” ambiguity in the powers granted to the executive branch in the sections of the United States Code on national security surveillance should trigger a narrow judicial interpretation in favor of the individual and against the State. A rule of lenity would push Congress to be the primary decision maker to balance privacy and security when technology changes, limiting the rulemaking power of the secret Foreign Intelligence Surveillance Court. A rule of lenity would help restore the power over national security surveillance law to where it belongs: The People.
This is certainly not a panacea. As Jack Goldsmith rightly points out, more Congressional oversight over NSA surveillance during the last decade would have gained us more NSA surveillance. But it’s certainly better than having secret courts make the rules after only hearing one side of the argument.
Good paper, and layman’s explanation.
Internet voting scares me. It gives hackers the potential to seriously disrupt our democratic processes.
EDITED TO ADD (11/14): Another article.
Kaspersky Labs is reporting (detailed report here, technical details here) on a sophisticated hacker group that is targeting specific individuals around the world. “Darkhotel” is the name the group and its techniques have been given.
This APT precisely drives its campaigns by spear-phishing targets with highly advanced Flash zero-day exploits that effectively evade the latest Windows and Adobe defenses, and yet they also imprecisely spread among large numbers of vague targets with peer-to-peer spreading tactics. Moreover, this crew’s most unusual characteristic is that for several years the Darkhotel APT has maintained a capability to use hotel networks to follow and hit selected targets as they travel around the world. These travelers are often top executives from a variety of industries doing business and outsourcing in the APAC region. Targets have included CEOs, senior vice presidents, sales and marketing directors and top R&D staff. This hotel network intrusion set provides the attackers with precise global scale access to high value targets. From our observations, the highest volume of offensive activity on hotel networks started in August 2010 and continued through 2013, and we are investigating some 2014 hotel network events.
Good article. This seems pretty obviously a nation-state attack. It’s anyone’s guess which country is behind it, though.
Targets in the spear-phishing attacks include high-profile executives—among them a media executive from Asia—as well as government agencies and NGOs and U.S. executives. The primary targets, however, appear to be in North Korea, Japan, and India. “All nuclear nations in Asia,” Raiu notes. “Their targeting is nuclear themed, but they also target the defense industry base in the U.S. and important executives from around the world in all sectors having to do with economic development and investments.” Recently there has been a spike in the attacks against the U.S. defense industry.
We usually infer the attackers from the target list. This one isn’t that helpful. Pakistan? China? South Korea? I’m just guessing.
Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.
This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.
Two, attacks are getting more sophisticated. The rise of APT (advanced persistent threat)—attacks that target specific organizations for reasons other than simple financial theft—brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.
And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.
Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.
At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.
Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.
This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.
The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.
The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing—and computer and network IR.
Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy—if you can “get inside his OODA loop”—then you have an enormous advantage.
We need tools to facilitate all four of these steps: observing, orienting, deciding, and acting.
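As a rough sketch of how those four steps fit together in an IR workflow, the loop looks something like this (every helper below is a hypothetical stub, not a real product or API; it exists only to show the shape of the loop):

```python
# Hypothetical stubs standing in for real observation, analysis,
# decision-support, and response tooling.
def observe():
    return ["alert: unusual outbound traffic"]             # logs, alerts, telemetry

def orient(events):
    return {"events": events, "affected": ["db-server"]}   # add business and threat context

def decide(picture):
    return "isolate " + picture["affected"][0]             # a human-approved plan

def act(plan):
    print("executing:", plan)                              # contain, remediate, feed back

# In practice the loop runs until the incident is closed; one pass shown here.
act(decide(orient(observe())))
```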
Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.
This essay originally appeared in IEEE Security & Privacy.
I’m not sure why this is news, except that it makes for a startling headline. (Is the New York Times now into clickbait?) It’s not as if people are throwing squid onto the field, as Detroit hockey fans do with octopus.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
My company, Co3 Systems, is hiring both technical and nontechnical positions. If you live in the Boston area, click through and take a look.
Chicago is doing random explosives screenings at random L stops. Compliance is voluntary:
Police made no arrests but one rider refused to submit to the screening and left the station without incident, Maloney said.
[…]
Passengers can decline the screening, but will not be allowed to board a train at that station. Riders can leave that station and board a train at a different station.
I have to wonder what would happen if someone who looks Arab refused to be screened. And what possible value this procedure has. Anyone who has a bomb in their bag would see the screening point well before approaching it, and be able to walk to the next stop without potentially arousing suspicion.
Robert Lee and Thomas Rid have a new paper: “OMG Cyber! Thirteen Reasons Why Hype Makes for Bad Policy.”
EDITED TO ADD (11/13): Another essay on the same topic.
Interesting paper by Melissa Hathaway: “Connected Choices: How the Internet Is Challenging Sovereign Decisions.”
Abstract: Modern societies are in the middle of a strategic, multidimensional competition for money, power, and control over all aspects of the Internet and the Internet economy. This article discusses the increasing pace of discord and the competing interests that are unfolding in the current debate concerning the control and governance of the Internet and its infrastructure. Some countries are more prepared for and committed to winning tactical battles than are others on the road to asserting themselves as an Internet power. Some are acutely aware of what is at stake; the question is whether they will be the master or the victim of these multilayered power struggles as subtle and not-so-subtle connected choices are being made. Understanding this debate requires an appreciation of the entangled economic, technical, regulatory, political, and social interests implicated by the Internet. Those states that are prepared for and understand the many facets of the Internet will likely end up on top.
Verizon is tracking the Internet use of its phones by surreptitiously modifying URLs. This is a good description of how it works.
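The mechanism, as reported, is a unique identifier injected into subscribers’ unencrypted web traffic (the header name X-UIDH was widely reported). One rough way to check what, if anything, a carrier is adding is to fetch a header-echo service over plain HTTP from the connection in question and compare against what your device actually sent; the service used here is just one convenient example.

```python
# Fetch a header-echo service over plain HTTP and print the headers the
# server received; anything you did not send was added in transit.
import json
from urllib.request import urlopen

with urlopen("http://httpbin.org/headers", timeout=10) as resp:
    headers = json.load(resp)["headers"]

for name, value in headers.items():
    print(name, ":", value)
```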
Probably the best IT security book of the year is Adam Shostack’s Threat Modeling (Amazon page).
The book is an honorable mention finalist for “The Best Books” of the past 12 months. This is the first time a security book has been on the list since my Applied Cryptography (first edition) won in 1994 and my Secrets and Lies won in 2001.
Anyway, Shostack’s book is really good, and I strongly recommend it. He blogs about the topic here.