Entries Tagged "drones"

U.S. Drones Have a Computer Virus

You’d think we would be more careful than this:

A computer virus has infected the cockpits of America’s Predator and Reaper drones, logging pilots’ every keystroke as they remotely fly missions over Afghanistan and other warzones.

[…]

“We keep wiping it off, and it keeps coming back,” says a source familiar with the network infection, one of three that told Danger Room about the virus. “We think it’s benign. But we just don’t know.”

EDITED TO ADD (10/13): No one told the IT department for two weeks.

Posted on October 10, 2011 at 6:38 AM

Predator Software Pirated?

This isn’t good:

Intelligent Integration Systems (IISi), a small Boston-based software development firm, alleges that their Geospatial Toolkit and Extended SQL Toolkit were pirated by Massachusetts-based Netezza for use by a government client. Subsequent evidence and court proceedings revealed that the “government client” seeking assistance with Predator drones was none other than the Central Intelligence Agency.

IISi is seeking an injunction that would halt the use of their two toolkits by Netezza for three years. Most importantly, IISi alleges in court papers that Netezza used a “hack” version of their software with incomplete targeting functionality in response to rushed CIA deadlines. As a result, Predator drones could be missing their targets by as much as 40 feet.

The obvious joke is that this is what you get when you go with the low bidder, but it doesn’t have to be that way. And there’s nothing special about this being a government procurement; any bespoke IT procurement needs good contractual oversight.

EDITED TO ADD (11/10): Another article.

Posted on October 20, 2010 at 7:21 AM

More Surveillance in the UK

This seems like a bad idea:

Police in the UK are planning to use unmanned spy drones, controversially deployed in Afghanistan, for the “routine” monitoring of antisocial motorists, protesters, agricultural thieves and fly-tippers, in a significant expansion of covert state surveillance.

Once again, laws and technologies deployed against terrorism are used against much more mundane crimes.

Posted on January 26, 2010 at 7:16 AM

Intercepting Predator Video

Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.

The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren’t “hacked”—the insurgents can’t control them—but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.

The naive reaction is to ridicule the military. Encryption is so easy that HDTVs do it—just a software routine and you’re done—and the Pentagon has known about this flaw since Bosnia in the 1990s. But encrypting the data is the easiest part; key management is the hard part. Each UAV needs to share a key with the ground station. These keys have to be produced, guarded, transported, used and then destroyed. And the equipment, both the Predators and the ground terminals, needs to be classified and controlled, and all the users need security clearance.
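
To make that asymmetry concrete, here is a minimal sketch in Python (using the third-party cryptography package; the frame data and the key handling are hypothetical stand-ins, not any fielded system). The encryption itself is a few lines; everything the comments wave away is the actual problem.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The easy part: authenticated encryption of one video frame.
    key = AESGCM.generate_key(bit_length=256)   # produced, but by whom, and where?
    aesgcm = AESGCM(key)

    frame = b"\x00" * 4096    # stand-in for one frame of downlink video
    nonce = os.urandom(12)    # must never repeat under the same key
    ciphertext = aesgcm.encrypt(nonce, frame, None)

    # The hard part is everything this sketch skips: the ground terminal needs
    # the same key, so it must be transported to the field, guarded while in
    # use, rotated when a terminal is lost, and destroyed afterward, for every
    # UAV/terminal pair, under combat conditions.
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == frame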

The command and control channel is, and always has been, encrypted—because that’s both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions—with everything constantly changing. Key management in this environment would be a nightmare.

Additionally, how valuable is this video downlink to the enemy? The primary fear seems to be that the militants watch the video, notice their compound being surveilled and flee before the missiles hit. Or notice a bunch of Marines walking through a recognizable area and attack them. This might make a great movie scene, but it’s not very realistic. Without context, and just by peeking at random video streams, the risk caused by eavesdropping is low.

Contrast this with the additional risks if you encrypt: A soldier in the field doesn’t have access to the real-time video because of a key management failure; a UAV can’t be quickly deployed to a new area because the keys aren’t in place; we can’t share the video information with our allies because we can’t give them the keys; most soldiers can’t use this technology because they don’t have the right clearances. Given this risk analysis, not encrypting the video is almost certainly the right decision.

There is another option, though. During the Cold War, the NSA’s primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it “right.”

The problem is, the world has changed. Today’s insurgent adversaries don’t have KGB-level intelligence gathering or cryptanalytic capabilities. At the same time, computer and network data gathering has become much cheaper and easier, so they have technical capabilities the Soviets could only dream of. Defending against these sorts of adversaries doesn’t require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.

This sort of solution would require the NSA to develop a whole new level of lightweight commercial-grade security systems for military applications—not just office-data “Sensitive but Unclassified” or “For Official Use Only” classifications. It would require the NSA to allow keys to be handed to uncleared UAV operators, and perhaps read over insecure phone lines and stored in people’s back pockets. It would require the sort of ad hoc key management systems you find in internet protocols, or in DRM systems. It wouldn’t be anywhere near perfect, but it would be more commensurate with the actual threats.
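
One concrete shape such lightweight key management could take, sketched here in Python with entirely hypothetical parameters, is deriving the day’s key from a short code that really can be read over an insecure phone line and carried in a back pocket:

    import hashlib

    # Deliberately low-ceremony, commercial-grade key derivation: the key is
    # computed from a short spoken code plus the date. Hypothetical scheme,
    # invented for illustration; not any fielded military system.
    def derive_daily_key(spoken_code: str, date: str) -> bytes:
        # The date acts as a salt, so yesterday's code yields a different key.
        return hashlib.pbkdf2_hmac(
            "sha256",
            spoken_code.encode(),
            date.encode(),
            iterations=200_000,   # slows brute-force guessing of the short code
            dklen=32,             # 256-bit key, e.g., for AES
        )

    key = derive_daily_key("tango-7-bravo-19", "2009-12-24")

A scheme like this trades theoretical strength for deployability, which is exactly the trade being argued for here.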

And it would help defend against a completely different threat facing the Pentagon: The PR threat. Regardless of whether the people responsible made the right security decision when they rushed the Predator into production, or when they convinced themselves that local adversaries wouldn’t know how to exploit it, or when they forgot to update their Bosnia-era threat analysis to account for advances in technology, the story is now being played out in the press. The Pentagon is getting beaten up because it’s not protecting against the threat—because it’s easy to make a sound bite where the threat sounds really dire. And now it has to defend against the perceived threat to the troops, regardless of whether the defense actually protects the troops or not. Reminds me of the TSA, actually.

So the military is now committed to encrypting the video … eventually. The next-generation Predators, called Reapers—Who names this stuff? Second-grade boys?—will have the same weakness. Maybe we’ll have encrypted video by 2010, or 2014, but I don’t think that’s even remotely possible unless the NSA relaxes its key management and classification requirements and embraces a lightweight, less secure encryption solution for these sorts of situations. The real failure here is the failure of the Cold War security model to deal with today’s threats.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/24): Good article from The New Yorker on the uses—and politics—of these UAVs.

EDITED TO ADD (12/30): Error corrected—”uncleared UAV operators” should have read “uncleared UAV viewers.” The point is that the operators in the U.S. are cleared and their communications are encrypted, but the viewers in Asia are uncleared and the data is unencrypted.

Posted on December 24, 2009 at 5:24 AM

History and Ethics of Military Robots

This article gives an overview of U.S. military robots and discusses some of the issues surrounding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”
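
That checklist is essentially straight-line code. A toy Python sketch (the Target fields and the 200-meter thresholds are taken from the quote, but the data structure and function are invented for illustration) shows how little of the hard problem it captures: every field assumes the perception and judgment questions discussed next have already been answered perfectly.

    from dataclasses import dataclass

    # Hypothetical target report; each boolean hides a hard perception problem
    # that this sketch assumes a sensor has already solved perfectly.
    @dataclass
    class Target:
        vehicle_type: str
        in_free_fire_zone: bool
        friendlies_within_200m: bool
        civilians_within_200m: bool

    def weapons_release_authorized(t: Target) -> bool:
        if t.vehicle_type != "T-80":      # "Identification confirmed."
            return False
        if not t.in_free_fire_zone:       # "Location confirmed."
            return False
        if t.friendlies_within_200m:      # "No friendlies detected."
            return False
        if t.civilians_within_200m:       # the article suggests requiring human input here
            return False
        return True                       # "Weapons release authorized."

    assert weapons_release_authorized(Target("T-80", True, False, False))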

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting the required permission from his commanders, he dropped a 500-pound bomb on the lights. They turned out to be Canadian troops on a night training mission; four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter; he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”

The article was adapted from P. W. Singer’s book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

Killing Robot Being Tested by Lockheed Martin

Wow:

The frightening but fascinatingly cool hovering robot, the MKV (Multiple Kill Vehicle), is designed to shoot down enemy ballistic missiles.

A video released by the Missile Defense Agency (MDA) shows the MKV being tested at the National Hover Test Facility at Edwards Air Force Base, in California.

Inside a large steel cage, Lockheed’s MKV lifts off the ground, moves left and right, rapidly firing as flames shoot out of its bottom and sides. This description doesn’t do it justice, really; you have to see the video yourself.

During the test, the MKV is shown lifting off under its own propulsion and holding stationary, using its onboard retro-rockets. The potential of this drone is nothing short of science fiction.

When watching the video, you can’t help but be reminded of post-apocalyptic killing machines, seen in such films as The Terminator and The Matrix.

Okay, people. Now is the time to start discussing the rules of war for autonomous robots. Now, when it’s still theoretical.

Posted on December 15, 2008 at 6:07 AM

Ethics of Autonomous Military Robots

Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11. A fascinating (and long: 117 pages) paper on the ethical implications of robots in war.

Summary, Conclusions, and Future Work

This report has provided the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system architecture capable of the ethical use of lethal force. These first steps toward that goal are very preliminary and subject to major revision, but at the very least they can be viewed as the beginnings of an ethical robotic warfighter. The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity.

It is too early to tell whether this venture will be successful. There are daunting problems remaining:

  • The transformation of International Protocols and battlefield ethics into machine usable representations and real-time reasoning capabilities for bounded morality using modal logics.
  • Mechanisms to ensure that the design of intelligent behaviors only provide responses within rigorously defined ethical boundaries.
  • The creation of techniques to permit the adaptation of an ethical constraint set and underlying behavioral control parameters that will ensure moral performance, should those norms be violated in any way, involving reflective and affective processing.
  • A means to make responsibility assignment clear and explicit for all concerned parties regarding the deployment of a machine with a lethal potential on its mission.

Over the next two years, this architecture will be slowly fleshed out in the context of the specific test scenarios outlined in this article. Hopefully the goals of this effort will fuel other scientists’ interest to assist in ensuring that the machines that we as roboticists create fit within international and societal expectations and requirements.

My personal hope would be that they will never be needed in the present or the future. But mankind’s tendency toward war seems overwhelming and inevitable. At the very least, if we can reduce civilian casualties according to what the Geneva Conventions have promoted and the Just War tradition subscribes to, the result will have been a humanitarian effort, even while staring directly at the face of war.
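
As a rough illustration of the first “daunting problem” above, a machine-usable ethical constraint set, here is a toy Python sketch. The predicates are invented stand-ins and bear no relation to Arkin’s actual modal-logic formalism:

    # Toy ethical governor: a proposed lethal action is released only if no
    # constraint in the set forbids it. All predicates are invented for
    # illustration, not drawn from the paper.
    FORBIDDEN = [
        lambda a: a["target_class"] in ("civilian", "surrendering"),
        lambda a: not a["positively_identified"],
        lambda a: a["expected_collateral"] > a["military_necessity"],
    ]

    def governor_permits(action: dict) -> bool:
        return not any(constraint(action) for constraint in FORBIDDEN)

    proposed = {
        "target_class": "armor",
        "positively_identified": True,
        "expected_collateral": 0,
        "military_necessity": 5,
    }
    assert governor_permits(proposed)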

Posted on January 28, 2008 at 7:12 AM

The Technology of Homeland Security

Reuters has an article on future security technologies. I’ve already talked about automatic license-plate-capture cameras and aerial surveillance (drones and satellites), but there’s some new stuff:

Resembling the seed of a silver maple tree, the single-winged device would pack a tiny two-stage rocket thruster along with telemetry, communications, navigation, imaging sensors and a power source.

The nano air vehicle, or NAV, is designed to carry interchangeable payload modules—the size of an aspirin tablet. It could be used for chemical and biological detection or finding a “needle in a haystack,” according to Ned Allen, chief scientist at Lockheed’s fabled Skunk Works research arm.

Released in organized swarms to fly low over a disaster area, the NAV sensors could detect human body heat and signs of breathing, Allen said.

And this:

Airport screening is another area that could be transformed within 10 years, using scanning wizardry to pinpoint a suspected security threat through biometrics—based on one or more physical or behavioral traits.

“We can read fingerprints from about five meters…all 10 prints,” said Bruce Walker, vice president of homeland security for Northrop Grumman Corp (NOC.N). “We can also do an iris scan at the same distance.”

For a while I’ve been saying that this whole national ID debate will be irrelevant soon. In the future you won’t have to show ID; they’ll already know who you are.

Posted on September 26, 2007 at 6:13 AM

Robotic Guns

Scary, but philosophically no different than land mines:

Developed by state-owned Rafael, See-Shoot consists of a series of remotely controlled weapon stations which receive fire-control information from ground sensors and manned and unmanned aircraft. Once a target is verified and authorized for destruction, operators sitting safely behind command center computers push a button to fire the weapon.

Posted on July 2, 2007 at 8:42 AM
