AI Hacking Village at DEF CON This Year

At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack.

The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training data for AI applications. Participants will be given laptops with which to attack the models. Any bugs discovered will be disclosed using industry-standard responsible disclosure practices.

Posted on May 8, 2023 at 11:29 AM • 11 Comments

Comments

Harry Potter May 8, 2023 4:20 PM

Really, no comments?
This is how it’s supposed to be. New capability, open to challenge. Kudos to all for putting themselves up there.

Clive Robinson May 8, 2023 9:42 PM

@ Harry Potter,

Re : Money makes the world go around

“Kudos to all for putting themselves up there.”

Thus giving “big tech”, which just this year alone spent over $70 million bribing the US legislative process, yet a further “free pass” to theft.

Remember all of those companies have stolen the work of others to use as input to build those models, and paid not a cent of compensation or consideration.

We really do not need these faux-AI systems being pushed on us as surveillance tools, and used as the latest “Know your Customer” nonsense. Like the now completely discredited “Voice Stress Analysis” of a decade or two ago, or “Posture Analysis” etc etc. All used as arms-length prejudice systems.

Richard Little May 9, 2023 1:50 AM

Question and comment for Bruce and others.

Question. Are bots AI or are they generally controlled by humans?

Comment. In an online discussion, I was repeatedly subjected to some very bizarre attacks that appeared to be coming from bots. As a test, I began an off-topic discussion in which I solicited their opinions on criticisms of various autocratic world leaders (I’ll let you guess who). One of the apparent bots immediately told me off, then they all left me alone, and none would respond to a word I said thereafter.

Wannabe Techguy May 9, 2023 8:42 AM

@Clive

“Thus giving “big tech”, which just this year alone spent over $70 million bribing the US legislative process, yet a further “free pass” to theft.”

I think you’re probably correct, but how do you know? Where does that number come from?

Hanna May 9, 2023 9:56 AM

This is a cool and challenging event. AI hacking is a growing field with security risks and opportunities. I wonder how the participants will attack and defend the AI models. This event will be a great learning experience. AI hacking requires skills and expertise that not everyone has. I suggest CodeIT, a software development company that specializes in AI projects. They can help you with any AI problem or goal. They can develop custom AI solutions for various domains and industries. For example, they can create machine learning models for stock prediction. They can also secure your AI systems and prevent attacks. Visit their website and see their work and reviews.

Clive Robinson May 9, 2023 10:20 AM

@ Wannabe Techguy, ALL,

Re : Big Tech lobbying Washington around 70 Million USD.

“I think you’re probably correct, but how do you know? Where does that number come from?”

The just under/over figure comes from several sources, both industry and MSM. But as far as MSM reporting goes, where fact reporting might be considered “better checked”, it comes via the Washington Post,

‘https://www.washingtonpost.com/technology/2022/01/21/tech-lobbying-in-washington/

It’s also been mentioned in several places specifically to do with AI… where, in effect, those who are having their work stolen from them are quite bluntly saying that Alphabet, Meta and Microsoft are paying legislators money to stop them looking into the theft of all those “works” they are pushing through their systems to make their models. Even Bloomberg has made comment, as reported by the MSM,

‘https://eu.detroitnews.com/story/business/2023/02/02/tech-giants-broke-their-spending-records-on-lobbying-last-year/69867494007/

There was an article in The Guardian by Naomi Klein, which @Mr. Peed Off linked to on the Squid page, that goes into the mechanics of what Alphabet, Meta and Microsoft are up to,

https://www.schneier.com/blog/archives/2023/05/friday-squid-blogging-mediterranean-beef-squid-hoax.html/#comment-421645

As others have noted, we are coming up to a US election year, so a lot of grubby hands can be expected to be crossed with silver, a lot of silver.

We can expect it to get worse, as Fox and the like do not come cheap, especially as Mr Murdoch and his shareholders have ~3/4 billion dollars to find because of Tucker Carlson’s flapping gums over voting machines…

Even DuckDuckGo, which is a front end for Microsoft’s search engine, will barf out quite a few useful links if you search with,

Tech companies spent almost $70 million lobbying Washington

Because it pulls up several million more from certain big ISPs and telcos. Like,

‘https://www.washingtonpost.com/technology/2022/10/14/fcc-deadlock-gigi-sohn/

To say that the politicians are “rolling in it” would, I think, be an understatement…

One reason why these tech companies are getting away with it is the sad fact that the US economy has effectively “flat-lined”, and the only “flutter” is in two areas: the tech market, and the less reputable parts of the finance industry trying to create the start of a new dot-com bubble they can vastly profit from.

Crypto-coins kind of worked for them; Web3.0, its follow-on, not so much. But AI really has got people fired up, and that level of MSM and similar reporting is probably going to get blown up into many billions spiralling round, with nice fat fees getting sliced off.

As a few of us here have noted, there is no real magic, or intelligence for that matter, behind the alleged AI of LLMs etc. The real interest from Alphabet, Meta and Microsoft is the fact they can be better used “to get inside your head” to improve the surveillance systems they profit from by the billions every day… So dropping a few tens of millions each into lobbying to keep politicos looking the other way is not even pennies on the dollar…

Oh and they are doing the same in Europe and other places, and those political hands are always open for the right sort of incentives…

ResearcherZero May 12, 2023 7:14 AM

@Richard Little

Controlled by humans, and generally pre-programmed with responses. Accordingly, they may not respond to comments about authoritarian dictators; sidestepping such subjects is a tactic to limit further discussion on the topic.
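
As a rough illustration of what “pre-programmed with responses” can mean, here is a toy Python sketch. The keywords and canned replies are invented for the example, not taken from any real bot.

```python
# Toy sketch of a keyword-driven bot: canned replies for safe topics,
# silence for topics it is told to sidestep. All entries are invented.
CANNED_REPLIES = {
    "weather": "Lovely day, isn't it?",
    "sports": "Did anyone catch the game last night?",
}

SIDESTEP_TOPICS = {"dictator", "autocrat", "election"}  # go quiet on these


def respond(comment):
    words = set(comment.lower().split())
    if words & SIDESTEP_TOPICS:
        return None  # no reply at all, shutting the exchange down
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in words:
            return reply
    return None  # nothing matched, stay silent


print(respond("what do you think of that dictator"))  # -> None
print(respond("how is the weather over there"))       # -> canned reply
```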

ResearcherZero May 12, 2023 7:37 AM

They might be configured to post benign content until the bot farm administrator decides to weaponize them in an attack.

Astroturfing, for example, masks the real sponsors of a message to make it appear as though it originates from and is supported by grassroots participants. If people think bots are human, they are more likely to believe that the message has popular support.
‘https://www.comparitech.com/blog/information-security/inside-facebook-bot-farm/

This article has a good outline of how common they are, and how they operate.

https://www.dw.com/en/fact-check-how-do-i-spot-fake-social-media-accounts-bots-and-trolls/a-60313035

ResearcherZero May 12, 2023 8:06 AM

Early IRC bots provided automated services to users and sat in a channel to keep the server from closing it down due to inactivity.

The most prevalent use of bots on the internet is for web spidering, also called crawling, allowing internet search companies to analyze millions of files on servers throughout the world.

‘https://abusix.com/resources/botnets/a-brief-history-of-bots-and-how-theyve-shaped-the-internet-today/
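
A minimal sketch of the spidering idea, using only the Python standard library; the seed URL and page limit are arbitrary, and real search-engine crawlers are far more sophisticated (robots.txt handling, politeness delays, deduplication, and so on).

```python
# Minimal breadth-first web crawler sketch (standard library only).
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Visit at most max_pages pages, starting from seed_url."""
    queue, seen = [seed_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-HTML pages
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the current page and enqueue them.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen


if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page)
```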

ResearcherZero May 12, 2023 8:25 AM

@Richard Little

Bots can be programmed to vary the rate at which they post to make them look more like a human user, in order to avoid detection.
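
For illustration only, a toy sketch of that rate-variation trick in Python; the delays and the “post” stand-in are made up, and real bot frameworks model activity patterns far more carefully (time of day, reply latency, and so on).

```python
import random
import time


def post_with_jitter(messages, base_delay=60.0, jitter=45.0):
    """Post each message after a randomised delay so the cadence looks
    less machine-like; real detection looks at far more than timing."""
    for text in messages:
        delay = base_delay + random.uniform(-jitter, jitter)
        time.sleep(max(1.0, delay))
        print(f"posting: {text}")  # stand-in for an actual posting API call


post_with_jitter(["first comment", "second comment"], base_delay=2, jitter=1)
```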

Unlike a human they cannot reason, but AI could make them behave more like a human. Malicious bots may become an increasing problem, as some are already configured to post divisive content, links from blogs that contain disinformation, and comments from official sources that are misquoted or used out of context.

There are a lot of bots. Facebook and Twitter regularly block millions of fake accounts.
