Entries Tagged "security standards"


WPA3

Everyone is writing about the new WPA3 Wi-Fi security standard, and how it improves security over the current WPA2 standard.

This summary is as good as any other:

The first big new feature in WPA3 is protection against offline, password-guessing attacks. This is where an attacker captures data from your Wi-Fi stream, brings it back to a private computer, and guesses passwords over and over again until they find a match. With WPA3, attackers are only supposed to be able to make a single guess against that offline data before it becomes useless; they’ll instead have to interact with the live Wi-Fi device every time they want to make a guess. (And that’s harder since they need to be physically present, and devices can be set up to protect against repeat guesses.)
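To see why offline guessing is so powerful against WPA2, note that the pairwise master key is derived from nothing but the passphrase and the SSID, so an attacker holding a captured handshake can test candidates at full speed on their own hardware. A minimal sketch of that guessing loop (the SSID and password list are hypothetical, and the PMK itself stands in for captured handshake data):

```python
import hashlib
import hmac

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK derives the 256-bit pairwise master key with
    # PBKDF2-HMAC-SHA1, 4096 iterations, using the SSID as salt.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

def offline_guess(candidates, ssid, target_pmk):
    # In a real attack the target would be a MIC from a captured
    # 4-way handshake; comparing PMKs directly keeps the sketch short.
    for pw in candidates:
        if hmac.compare_digest(wpa2_pmk(pw, ssid), target_pmk):
            return pw
    return None

ssid = "HomeNetwork"                   # hypothetical network name
target = wpa2_pmk("sunshine42", ssid)  # stands in for captured data
print(offline_guess(["password", "letmein", "sunshine42"], ssid, target))
```

WPA3's handshake removes this property: each guess requires a fresh exchange with the live access point, which is exactly the rate-limiting the paragraph describes.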

WPA3’s other major addition, as highlighted by the Alliance, is forward secrecy. This is a privacy feature that prevents older data from being compromised by a later attack. So if an attacker captures an encrypted Wi-Fi transmission, then cracks the password, they still won’t be able to read the older data — they’d only be able to see new information currently flowing over the network.
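Forward secrecy comes from negotiating fresh, ephemeral keys for each session and discarding them afterwards, so a later compromise of the long-term password reveals nothing about recorded traffic. A toy Diffie-Hellman sketch of the idea (the tiny group parameters are for readability only and are nowhere near secure):

```python
import secrets

# Toy Diffie-Hellman group: real deployments use vetted large groups
# or elliptic curves; these small parameters are illustrative only.
P = 0xFFFFFFFB  # largest prime below 2**32 -- far too small for real use
G = 5

def ephemeral_session_key():
    # Each side generates a fresh secret per session and discards it
    # afterwards -- that discarding is what provides forward secrecy.
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    A, B = pow(G, a, P), pow(G, b, P)
    shared_a = pow(B, a, P)  # computed by side A
    shared_b = pow(A, b, P)  # computed by side B
    assert shared_a == shared_b
    return shared_a

# Two sessions yield unrelated keys; cracking the password later
# reveals neither of them.
print(ephemeral_session_key() != ephemeral_session_key())
```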

Note that we’re just getting the new standard this week. Actual devices that implement the standard are still months away.

Posted on July 12, 2018 at 6:11 AM

Securing Elections

Technology can do a lot more to make our elections more secure and reliable, and to ensure that participation in the democratic process is available to all. There are three parts to this process.

First, the voter registration process can be improved. The whole process can be streamlined. People should be able to register online, just as they can register for other government services. The voter rolls need to be protected from tampering, as that’s one of the major ways hackers can disrupt the election.

Second, the voting process can be significantly improved. Voting machines need to be made more secure. There are a lot of technical details best left to the voting-security experts who can deal with them, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters and counted by computer, but recountable by hand.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards.

This means no Internet voting. While that seems attractive, and certainly a way technology can improve voting, we don’t know how to do it securely. We simply can’t build an Internet voting system that is secure against hacking because of the requirement for a secret ballot. This makes voting different from banking and anything else we do on the Internet, and it makes security much harder. Even allegations of vote hacking would be enough to undermine confidence in the system, and we simply cannot afford that. We need a system of pre-election and post-election security audits of these voting machines to increase confidence in the system.

The third part of the voting process we need to secure is the tabulation system. After the polls close, we aggregate votes — from individual machines, to polling places, to precincts, and finally to totals. This system is insecure as well, and we can do a lot more to make it reliable. Similarly, our system of recounts can be made more secure and efficient.
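That aggregation chain (machines to polling places to precincts to totals) is simple to model, and doing so shows where an independent recount from the underlying records can catch tampering at any level. A sketch with hypothetical tallies:

```python
from collections import Counter

# Hypothetical results: precinct -> polling place -> machine tallies.
precincts = {
    "P1": {"PlaceA": [{"X": 120, "Y": 95}, {"X": 101, "Y": 110}],
           "PlaceB": [{"X": 88, "Y": 90}]},
    "P2": {"PlaceC": [{"X": 140, "Y": 150}]},
}

def aggregate(precincts):
    # Roll machine tallies up through each level; keeping the raw
    # records around is what lets each stage be audited independently.
    totals = Counter()
    for places in precincts.values():
        for machines in places.values():
            for tally in machines:
                totals.update(tally)
    return dict(totals)

def audit(precincts, reported):
    # A recount from the underlying records must match reported totals.
    return aggregate(precincts) == reported

reported = {"X": 449, "Y": 445}
print(audit(precincts, reported))  # True only if no level was altered
```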

We have the technology to do all of this. The problem is political will. We have to decide that the goal of our election system is for the most people to be able to vote with the least amount of effort. If we continue to enact voter suppression measures like ID requirements, barriers to voter registration, limitations on early voting, reduced polling place hours, and faulty machines, then we are harming democracy more than we are by allowing our voting machines to be hacked.

We have already declared our election system to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states. We can do much more. We owe it to democracy to do it.

This essay previously appeared on TheAtlantic.com.

Posted on May 10, 2017 at 2:14 PM

Security and Privacy Guidelines for the Internet of Things

Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

  1. “Internet of Things (IoT) Security and Privacy Recommendations,” Broadband Internet Technical Advisory Group, Nov 2016.
  2. “IoT Security Guidance,” Open Web Application Security Project (OWASP), May 2016.
  3. “Strategic Principles for Securing the Internet of Things (IoT),” US Department of Homeland Security, Nov 2016.
  4. “Security,” OneM2M Technical Specification, Aug 2016.
  5. “Security Solutions,” OneM2M Technical Specification, Aug 2016.
  6. “IoT Security Guidelines Overview Document,” GSMA, Feb 2016.
  7. “IoT Security Guidelines For Service Ecosystems,” GSMA, Feb 2016.
  8. “IoT Security Guidelines for Endpoint Ecosystems,” GSMA, Feb 2016.
  9. “IoT Security Guidelines for Network Operators,” GSMA, Feb 2016.
  10. “Establishing Principles for Internet of Things Security,” IoT Security Foundation, undated.
  11. “IoT Design Manifesto,” www.iotmanifesto.com, May 2015.
  12. “NYC Guidelines for the Internet of Things,” City of New York, undated.
  13. “IoT Security Compliance Framework,” IoT Security Foundation, 2016.
  14. “Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development,” IoTIAP, Nov 2016.
  15. “IoT Trust Framework,” Online Trust Alliance, Jan 2017.
  16. “Five Star Automotive Cyber Safety Framework,” I Am The Cavalry, Feb 2015.
  17. “Hippocratic Oath for Connected Medical Devices,” I Am The Cavalry, Jan 2016.
  18. “Industrial Internet of Things Volume G4: Security Framework,” Industrial Internet Consortium, 2016.
  19. “Future-proofing the Connected World: 13 Steps to Developing Secure IoT Products,” Cloud Security Alliance, 2016.

Other related items:

  1. “We All Live in the Computer Now,” The Netgain Partnership, Oct 2016.
  2. “Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things,” Electronic Privacy Information Center, Jun 2013.
  3. “Internet of Things Software Update Workshop (IoTSU),” Internet Architecture Board, Jun 2016.
  4. “Multistakeholder Process; Internet of Things (IoT) Security Upgradability and Patching,” National Telecommunications & Information Administration, Jan 2017.

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or trying to establish principles that will influence that action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.

EDITED TO ADD: Documents added to the list, above.

Posted on February 9, 2017 at 7:14 AM

How Different Stakeholders Frame Security

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g., the FBI vs. Apple over iPhone security.

Posted on October 24, 2016 at 6:03 AM

The Cost of Cyberattacks Is Less than You Might Think

Interesting research from Sasha Romanosky at RAND:

Abstract: In 2013, the US President signed an executive order designed to help secure the nation’s critical infrastructure from cyberattacks. As part of that order, he directed the National Institute for Standards and Technology (NIST) to develop a framework that would become an authoritative source for information security best practices. Because adoption of the framework is voluntary, it faces the challenge of incentivizing firms to follow along. Will frameworks such as that proposed by NIST really induce firms to adopt better security controls? And if not, why? This research seeks to examine the composition and costs of cyber events, and attempts to address whether or not there exist incentives for firms to improve their security practices and reduce the risk of attack. Specifically, we examine a sample of over 12 000 cyber events that include data breaches, security incidents, privacy violations, and phishing crimes. First, we analyze the characteristics of these breaches (such as causes and types of information compromised). We then examine the breach and litigation rate, by industry, and identify the industries that incur the greatest costs from cyber events. We then compare these costs to bad debts and fraud within other industries. The findings suggest that public concerns regarding the increasing rates of breaches and legal actions may be excessive compared to the relatively modest financial impact to firms that suffer these events. Specifically, we find that the cost of a typical cyber incident in our sample is less than $200 000 (about the same as the firm’s annual IT security budget), and that this represents only 0.4% of their estimated annual revenues.

The result is that it often makes business sense to underspend on cybersecurity and just pay the costs of breaches:

Romanosky analyzed 12,000 incident reports and found that typically they only account for 0.4 per cent of a company’s annual revenues. That compares to billing fraud, which averages at 5 per cent, or retail shrinkage (ie, shoplifting and insider theft), which accounts for 1.3 per cent of revenues.
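Applied to a hypothetical $500 million firm, those loss rates make the comparison concrete:

```python
# Loss rates from the quoted findings, applied to a hypothetical
# $500M-revenue firm purely for illustration.
revenue = 500_000_000
rates = {
    "cyber incidents":  0.004,  # 0.4% of annual revenue
    "billing fraud":    0.05,   # 5%
    "retail shrinkage": 0.013,  # 1.3%
}

for cause, rate in rates.items():
    print(f"{cause}: ${revenue * rate:,.0f}")
# Cyber losses ($2M here) are a fraction of the other two categories,
# which is why underspending on security can look rational to the firm.
```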

As for reputational damage, Romanosky found that it was almost impossible to quantify. He spoke to many executives and none of them could give a reliable metric for how to measure the PR cost of a public failure of IT security systems.

He also noted that the effects of a data incident typically don’t have many ramifications on the stock price of a company in the long term. Under the circumstances, it doesn’t make a lot of sense to invest too much in cyber security.

What’s being left out of these costs are the externalities. Yes, the costs to a company of a cyberattack are low to them, but there are often substantial additional costs borne by other people. The way to look at this is not to conclude that cybersecurity isn’t really a problem, but instead that there is a significant market failure that governments need to address.

Posted on September 29, 2016 at 6:51 AM

How NIST Develops Cryptographic Standards

This document gives a good overview of how NIST develops cryptographic standards and guidelines. It’s still in draft, and comments are appreciated.

Given that NIST has been tainted by the NSA’s actions to subvert cryptographic standards and protocols, more transparency in this process is appreciated. I think NIST is doing a fine job and that it’s not shilling for the NSA, but it needs to do more to convince the world of that.

Posted on March 4, 2014 at 6:41 AM

Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have preferred that my own Skein had won, but Keccak was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.
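That tunability is the sponge construction’s rate/capacity split: Keccak’s 1600-bit state is divided into a rate r (bits absorbed per permutation call, which sets speed) and a capacity c (hidden bits, which set the security level, roughly c/2 against collisions). A sketch of the trade-off, using the parameters eventually standardized for SHA-3:

```python
import hashlib

STATE_BITS = 1600  # Keccak-f[1600] state size

def sponge_params(output_bits):
    # In the final SHA-3 standard the capacity is fixed at twice the
    # output length; everything left over becomes the rate.
    capacity = 2 * output_bits
    rate = STATE_BITS - capacity
    return {"rate": rate, "capacity": capacity,
            "collision_security": capacity // 2}

for n in (224, 256, 384, 512):
    print(f"SHA3-{n}: {sponge_params(n)}")
# Raising the rate (i.e., cutting the capacity) absorbs more input per
# permutation call -- faster -- at the cost of the security margin,
# which is exactly the trade-off NIST proposed.

# Python's hashlib ships the standardized functions:
print(hashlib.sha3_256(b"abc").hexdigest())
```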

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM
