US Army Researching Bot Swarms

The US Army Research Laboratory (ARL) is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

Slashdot thread.

And while we’re on the subject, this is an excellent report on AI and national security.

Posted on July 24, 2017 at 6:39 AM

The Future of Forgeries

This article argues that AI technologies will make image, audio, and video forgeries much easier in the future.

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

I am less worried about fooling the “untrained ear” and more worried about fooling forensic analysis. But there’s an arms race here. Recording technologies will get more sophisticated, too, making their outputs harder to forge. Still, I agree that the advantage will go to the forgers and not the forgery detectors.

Posted on July 10, 2017 at 6:04 AM

Commentary on US Election Security

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election’s result. Election-stealing attacks have gotten most of the attention up to now—and we are still vulnerable to them in some places—but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported—an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election’s legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the “wrong” candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been “rigged”. This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and, by extension, our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.

Posted on July 5, 2017 at 6:58 AM

Securing Elections

Technology can do a lot more to make our elections more secure and reliable, and to ensure that participation in the democratic process is available to all. There are three parts to this process.

First, the voter registration process can be improved. The whole process can be streamlined. People should be able to register online, just as they can register for other government services. The voter rolls need to be protected from tampering, as that’s one of the major ways hackers can disrupt the election.

Second, the voting process can be significantly improved. Voting machines need to be made more secure. There are a lot of technical details best left to the voting-security experts who can deal with them, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters and counted by computer, but recountable by hand.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards.

This means no Internet voting. While that seems attractive, and certainly a way technology can improve voting, we don’t know how to do it securely. We simply can’t build an Internet voting system that is secure against hacking because of the requirement for a secret ballot. This makes voting different from banking and anything else we do on the Internet, and it makes security much harder. Even allegations of vote hacking would be enough to undermine confidence in the system, and we simply cannot afford that. We need a system of pre-election and post-election security audits of these voting machines to increase confidence in the system.

The third part of the voting process we need to secure is the tabulation system. After the polls close, we aggregate votes—from individual machines, to polling places, to precincts, and finally to totals. This system is insecure as well, and we can do a lot more to make it reliable. Similarly, our system of recounts can be made more secure and efficient.
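To see how simple honest tabulation is, and how checkable it could be, here is a purely illustrative sketch of the roll-up with hypothetical numbers; it is not any state’s actual system:

```python
# Illustrative only, with made-up numbers: a toy roll-up of vote tallies
# from machines to a polling-place total, plus the cross-check that makes
# the hierarchy auditable. Each higher level (polling place, precinct,
# statewide) repeats the same pattern.

from collections import Counter

def aggregate(tallies):
    """Sum per-unit tallies (candidate -> votes) into one combined tally."""
    total = Counter()
    for tally in tallies:
        total.update(tally)
    return total

machine_tallies = [
    {"Candidate A": 120, "Candidate B": 98},   # machine 1
    {"Candidate A": 87,  "Candidate B": 143},  # machine 2
]

# The integrity check at every level is the same: the reported total must
# equal the sum of the published tallies one level down, so any independent
# party can recompute and verify the roll-up.
reported_total = Counter({"Candidate A": 207, "Candidate B": 241})
if aggregate(machine_tallies) != reported_total:
    raise ValueError("Roll-up mismatch: total does not equal sum of machine tallies")
```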

We have the technology to do all of this. The problem is political will. We have to decide that the goal of our election system is for the most people to be able to vote with the least amount of effort. If we continue to enact voter suppression measures like ID requirements, barriers to voter registration, limitations on early voting, reduced polling place hours, and faulty machines, then we are harming democracy more than we are by allowing our voting machines to be hacked.

We have already declared our election system to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states. We can do much more. We owe it to democracy to do it.

This essay previously appeared on TheAtlantic.com.

Posted on May 10, 2017 at 2:14 PM

Automatically Identifying Government Secrets

Interesting research: “Using Artificial Intelligence to Identify State Secrets,” by Renato Rocha Souza, Flavio Codeco Coelho, Rohan Shah, and Matthew Connelly.

Abstract: Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification.
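The pipeline behind results like these is a standard supervised text classifier. Here is a purely illustrative sketch of the idea, assuming a hypothetical dataset and column names; it is not the authors’ code:

```python
# Purely illustrative sketch, not the authors' code: train a classifier to
# predict whether a State Department cable is classified, using the cable
# text as features. The dataset file and column names are hypothetical.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical corpus: one row per cable, with body text and a 0/1 label.
cables = pd.read_csv("cables.csv")  # columns: "body", "is_classified"

X_train, X_test, y_train, y_test = train_test_split(
    cables["body"], cables["is_classified"], test_size=0.2, random_state=0
)

# Bag-of-words features; the paper also drew on metadata such as sender,
# recipient, and subject, which would be appended as extra features.
vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

# The paper reports that features like these identify roughly 90% of
# classified cables with under 11% false positives.
predictions = classifier.predict(vectorizer.transform(X_test))
print(classification_report(y_test, predictions))
```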

Posted on November 11, 2016 at 1:18 PM

Fooling Facial Recognition Systems

This is some interesting research. You can fool facial recognition systems by wearing glasses printed with elements of other people’s faces.

Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter, “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition”:

ABSTRACT: Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.
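At its core, the attack is gradient-based optimization constrained to an eyeglass-frame-shaped region of the image. A minimal sketch of that idea follows, assuming a differentiable face-recognition model; it is not the authors’ code, which also enforces printability and smoothness constraints:

```python
# Sketch of the core idea, not the authors' code: optimize an image
# perturbation confined to an eyeglass-frame-shaped mask, so it could be
# printed as frames and worn. `model`, `face`, and `frame_mask` are
# assumed inputs; the paper's printability and smoothness constraints
# are omitted here.

import torch
import torch.nn.functional as F

def eyeglass_attack(model, face, frame_mask, target_class, steps=300, lr=0.01):
    """face: (1, 3, H, W) image in [0, 1]; frame_mask: (1, 1, H, W) binary
    mask that is 1 only where the eyeglass frames sit on the face."""
    perturbation = torch.zeros_like(face, requires_grad=True)
    optimizer = torch.optim.Adam([perturbation], lr=lr)

    for _ in range(steps):
        # Apply the perturbation only inside the frame region.
        adversarial = torch.clamp(face + perturbation * frame_mask, 0, 1)
        logits = model(adversarial)
        # Impersonation: push the classifier toward the target identity.
        # (For dodging, one would instead maximize the loss on the
        # attacker's own class.)
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(face + perturbation.detach() * frame_mask, 0, 1)
```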

News articles.

Posted on November 11, 2016 at 7:31 AM

Teaching a Neural Network to Encrypt

Researchers have trained a neural network to encrypt its communications.

In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography.”
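The training setup is adversarial: one network (Alice) encrypts, a second (Bob) decrypts using the shared key, and a third (Eve) tries to decrypt without it. A minimal sketch of that loop, not the authors’ code, with sizes chosen arbitrarily:

```python
# Minimal sketch of the paper's adversarial setup, not the authors' code.
# Alice maps (plaintext, key) to a ciphertext, Bob recovers the plaintext
# from (ciphertext, key), and Eve tries to recover it from the ciphertext
# alone. Bits are encoded as +/-1; network sizes are chosen arbitrarily.

import torch
import torch.nn as nn

N = 16  # bits of plaintext and of key

def net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice, bob, eve = net(2 * N, N), net(2 * N, N), net(N, N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_eve = torch.optim.Adam(eve.parameters())

def random_bits(batch, n):
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(10000):
    plaintext, key = random_bits(256, N), random_bits(256, N)

    # Train Eve to reconstruct the plaintext from the ciphertext alone.
    ciphertext = alice(torch.cat([plaintext, key], dim=1)).detach()
    eve_loss = (eve(ciphertext) - plaintext).abs().mean()
    opt_eve.zero_grad()
    eve_loss.backward()
    opt_eve.step()

    # Train Alice and Bob: Bob should decrypt accurately while Eve does no
    # better than random guessing (mean absolute error 1.0 on +/-1 bits).
    ciphertext = alice(torch.cat([plaintext, key], dim=1))
    bob_loss = (bob(torch.cat([ciphertext, key], dim=1)) - plaintext).abs().mean()
    eve_err = (eve(ciphertext) - plaintext).abs().mean()
    ab_loss = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()
```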

This story is more about AI and neural networks than it is about cryptography. The algorithm isn’t any good, but it is a perfect example of what I’ve heard called “Schneier’s Law”: Anyone can design a cipher that they themselves cannot break.

Research paper. Note that the researchers work at Google.

Posted on November 3, 2016 at 6:05 AM

Malicious AI

It’s not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream—and the future isn’t too far off.

Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn’t to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

Posted on October 26, 2016 at 6:38 AM

I’m Writing a Book on Security

I’m writing a book on security in the highly connected Internet-of-Things world. Tentative title:

Click Here to Kill Everybody
Peril and Promise in a Hyper-Connected World

There are two underlying metaphors in the book. The first is what I have called the World-Sized Web, which is that combination of mobile, cloud, persistence, personalization, agents, cyber-physical systems, and the Internet of Things. The second is what I’m calling the “war of all against all,” which is the recognition that security policy is a series of “wars” between various interests, and that any policy decision in any one of the wars affects all the others. I am not wedded to either metaphor at this point.

This is the current table of contents, with three of the chapters broken out into sub-chapters:

  • Introduction
  • The World-Sized Web
  • The Coming Threats
    • Privacy Threats
    • Availability and Integrity Threats
    • Threats from Software-Controlled Systems
    • Threats from Interconnected Systems
    • Threats from Automatic Algorithms
    • Threats from Autonomous Systems
    • Other Threats of New Technologies
    • Catastrophic Risk
    • Cyberwar
  • The Current Wars
    • The Copyright Wars
    • The US/EU Data Privacy Wars
    • The War for Control of the Internet
    • The War of Secrecy
  • The Coming Wars
    • The War for Your Data
    • The War Against Your Computers
    • The War for Your Embedded Computers
    • The Militarization of the Internet
    • The Powerful vs. the Powerless
    • The Rights of the Individual vs. the Rights of Society
  • The State of Security
  • Near-Term Solutions
  • Security for an Empowered World
  • Conclusion

That will change, of course. If the past is any guide, everything will change.

Questions: Am I missing any threats? Am I missing any wars?

Current schedule is for me to finish writing this book by the end of September, and have it published at the end of April 2017. I hope to have pre-publication copies available for sale at the RSA Conference next year. As with my previous book, Norton is the publisher.

So if you notice me blogging less this summer, this is why.

Posted on April 29, 2016 at 1:02 PM

Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.
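The standard defense against this kind of spoofing is message authentication: every message carries a cryptographic tag, and receivers reject anything forged or altered. A minimal sketch of the check, using a shared key for simplicity (real vehicle-to-vehicle designs use certificate-based digital signatures instead):

```python
# Illustrative sketch of integrity protection for vehicle messages: each
# message carries a MAC, so a receiver rejects forged or altered data.
# Deployed vehicle-to-vehicle designs use certificate-based digital
# signatures rather than a shared key; this just shows the check itself.

import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"

def sign_message(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": tag}

def verify_message(message: dict) -> bool:
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_message({"vehicle": "car-42", "speed_mph": 61, "braking": False})
assert verify_message(msg)

# An attacker who alters the data without the key fails verification.
msg["payload"]["braking"] = True
assert not verify_message(msg)
```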

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. It’s actually more complicated.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems being always on. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped to know who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or allow a burglar to open the door.

In separate testimony before different House and Senate committees last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage—once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day—there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Posted on January 29, 2016 at 7:29 AM
