Entries Tagged "robotics"


AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies.

In 2016, DARPA ran a similarly styled event for artificial intelligence (AI). One hundred teams entered their systems into the Cyber Grand Challenge. After completing qualifying rounds, seven finalists competed at the DEFCON hacker convention in Las Vegas. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested. The AIs were given 10 hours to find vulnerabilities to exploit against the other AIs in the competition and to patch themselves against exploitation. A system called Mayhem, created by a team of Carnegie Mellon computer security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the U.S. Department of Defense.

There was a traditional all-human capture-the-flag event at DEFCON that same year. Mayhem was invited to participate. It came in last overall, but it didn’t come in last in every category all of the time.

I figured it was only a matter of time. It would be the same story we’ve seen in so many other areas of AI: the games of chess and Go, X-ray and disease diagnostics, writing fake news. AIs would improve every year because all of the core technologies are continually improving. Humans would largely stay the same because we remain humans even as our tools improve. Eventually, the AIs would routinely beat the humans. I guessed that it would take about a decade.

But now, five years later, I have no idea if that prediction is still on track. Inexplicably, DARPA never repeated the event. Research on the individual components of the software vulnerability lifecycle does continue. There’s an enormous amount of work being done on automatic vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which machine learning systems excel, if they can only be taught how to recognize a vulnerability. There is also work on automatic vulnerability exploitation and lots on automatic update and patching. Still, there is something uniquely powerful about a competition that puts all of the components together and tests them against others.

To see that in action, you have to go to China. Since 2017, China has held at least seven of these competitions—called Robot Hacking Games—many with multiple qualifying rounds. The first included one team each from the United States, Russia, and Ukraine. The rest have been Chinese only: teams from Chinese universities, teams from companies like Baidu and Tencent, teams from the military. Rules seem to vary. Sometimes human–AI hybrid teams compete.

Details of these events are few. Coverage is in Chinese only, which naturally limits what the West knows about them. I didn’t even know they existed until Dakota Cary, a research analyst at the Center for Security and Emerging Technology and a Chinese speaker, wrote a report about them a few months ago. And they’re increasingly hosted by the People’s Liberation Army, which presumably controls how much detail becomes public.

Some things we can infer. In 2016, none of the Cyber Grand Challenge teams used modern machine learning techniques. Certainly most of the Robot Hacking Games entrants are using them today. And the competitions encourage collaboration as well as competition between the teams. Presumably that accelerates advances in the field.

None of this is to say that real robot hackers are poised to attack us today, but I wish I could predict with some certainty when that day will come. In 2018, I wrote about how AI could change the attack/defense balance in cybersecurity. I said that it is impossible to know which side would benefit more but predicted that the technologies would benefit the defense more, at least in the short term. I wrote: “Defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.”

Unfortunately, it’s the People’s Liberation Army and not DARPA that will be the first to learn if I am right or wrong and how soon it matters.

This essay originally appeared in the January/February 2022 issue of IEEE Security & Privacy.

Posted on February 2, 2023 at 6:59 AM

Friday Squid Blogging: Underwater Robot Uses Squid-Like Propulsion

This is neat:

By generating powerful streams of water, UCSD’s squid-like robot can swim untethered. The “squidbot” carries its own power source, and has the room to hold more, including a sensor or camera for underwater exploration.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on November 13, 2020 at 4:09 PM

Friday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we’re told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There’s also plenty of work to do with using the fins for dynamic control, which the researchers say will “reveal the superiority of the natural flying squid movement.”

I can’t find the paper online.


Posted on August 16, 2019 at 4:05 PM

Military Robots as a Nature Analog

This very interesting essay looks at the future of military robotics and finds many analogs in nature:

Imagine a low-cost drone with the range of a Canada goose, a bird that can cover 1,500 miles in a single day at an average speed of 60 miles per hour. Planet Earth profiled a single flock of snow geese, birds that make similar marathon journeys, albeit slower. The flock of six-pound snow geese was so large it formed a sky-darkening cloud 12 miles long. How would an aircraft carrier battlegroup respond to an attack from millions of aerial kamikaze explosive drones that, like geese, can fly hundreds of miles? A single aircraft carrier costs billions of dollars, and the United States relies heavily on its ten aircraft carrier strike groups to project power around the globe. But as military robots match more capabilities found in nature, some of the major systems and strategies upon which U.S. national security currently relies—perhaps even the fearsome aircraft carrier strike group—might experience the same sort of technological disruption that the smartphone revolution brought about in the consumer world.

Posted on August 25, 2017 at 6:34 AM

Robot Safecracking

Robots can crack safes faster than humans—and differently:

So Seidle started looking for shortcuts. First he found that, like many safes, his SentrySafe had some tolerance for error. If the combination includes a 12, for instance, 11 or 13 would work, too. That simple convenience measure meant his bot could try every third number instead of every single number, immediately paring down the total test time to just over four days. Seidle also realized that the bot didn’t actually need to return the dial to its original position before trying every combination. By making attempts in a certain careful order, it could keep two of the three rotors in place, while trying new numbers on just the last, vastly cutting the time to try new combinations to a maximum of four seconds per try. That reduced the maximum bruteforcing time to about one day and 16 hours, or under a day on average.

But Seidle found one more clever trick, this time taking advantage of a design quirk in the safe intended to prevent traditional safecracking. Because the safe has a rod that slips into slots in the three rotors when they’re aligned to the combination’s numbers, a human safecracker can apply light pressure to the safe’s handle, turn its dial, and listen or feel for the moment when that rod slips into those slots. To block that technique, the third rotor of Seidle’s SentrySafe is indented with twelve notches that catch the rod if someone turns the dial while pulling the handle.

Seidle took apart the safe he and his wife had owned for years, and measured those twelve notches. To his surprise, he discovered the one that contained the slot for the correct combination was about a hundredth of an inch narrower than the other eleven. That’s not a difference any human can feel or listen for, but his robot can easily detect it with a few automated measurements that take seconds. That discovery defeated an entire rotor’s worth of combinations, dividing the possible solutions by a factor of 33, and reducing the total cracking time to the robot’s current hour-and-13 minute max.
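The arithmetic in the excerpt above can be checked with a quick back-of-the-envelope sketch. The numbers are assumptions drawn from the quoted description: a 100-position dial, three rotors, a ±1 tolerance (so trying every third number suffices), and up to four seconds per attempt once the bot stops resetting the dial.

```python
# Timing estimates for the safecracking attack described above.
# Assumed parameters (from the quoted article): 100-position dial,
# three rotors, +/-1 tolerance, 4 seconds per attempt.

DIAL_POSITIONS = 100
ROTORS = 3
SECONDS_PER_TRY = 4

# A +/-1 tolerance means trying every third number covers every
# combination, shrinking each rotor's search space to about 33 values.
effective_positions = DIAL_POSITIONS // 3  # 33

# Full brute force over all three rotors.
full_tries = effective_positions ** ROTORS
full_hours = full_tries * SECONDS_PER_TRY / 3600

# Measuring the narrow notch reveals the third rotor's number,
# leaving only the first two rotors unknown -- a factor-of-33 cut.
reduced_tries = effective_positions ** (ROTORS - 1)
reduced_minutes = reduced_tries * SECONDS_PER_TRY / 60

print(f"full search: {full_tries} tries, ~{full_hours:.0f} hours")
print(f"with notch measurement: {reduced_tries} tries, ~{reduced_minutes:.0f} minutes")
```

The results line up with the article: roughly 40 hours (one day and 16 hours) for the full search, and about 73 minutes (an hour and 13 minutes) once the notch measurement eliminates the third rotor.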

We’re going to have to start thinking about robot adversaries as we design our security systems.

Posted on July 31, 2017 at 12:19 PM
