Entries Tagged "social engineering"


Fooling an AI Article Writer

World of Warcraft players wrote about a fictional game element, “Glorbo,” on a subreddit for the game, trying to entice an AI bot to write an article about it. It worked:

And it…worked. Zleague auto-published a post titled “World of Warcraft Players Excited For Glorbo’s Introduction.”

[…]

That is…all essentially nonsense. The article was left online for a while but has finally been taken down (here’s a mirror, it’s hilarious). All the authors listed as having bylines on the site are fake. It appears this entire thing is run with close to zero oversight.

Expect lots more of this sort of thing in the future. Also, expect the AI bots to get better at detecting this sort of thing. It’s going to be an arms race.

Posted on July 27, 2023 at 7:04 AM

Massive Data Breach at Uber

It’s big:

The breach appeared to have compromised many of Uber’s internal systems, and a person claiming responsibility for the hack sent images of email, cloud storage and code repositories to cybersecurity researchers and The New York Times.

“They pretty much have full access to Uber,” said Sam Curry, a security engineer at Yuga Labs who corresponded with the person who claimed to be responsible for the breach. “This is a total compromise, from what it looks like.”

It looks like a pretty basic phishing attack; someone gave the hacker their login credentials. And because Uber has lousy internal security, lots of people have access to everything. So once a hacker gains a foothold, they have access to everything.

This is the same thing that Mudge accuses Twitter of: too many employees have broad access within the company’s network.

More details. Slashdot thread.

EDITED TO ADD (9/20): More details.

Posted on September 16, 2022 at 9:07 AM

Problems with Multifactor Authentication

Roger Grimes on why multifactor authentication isn’t a panacea:

The first time I heard of this issue was from a Midwest CEO. His organization had been hit by ransomware to the tune of $10M. Operationally, they were still recovering nearly a year later. And, embarrassingly, it was his most trusted VP who let the attackers in. It turns out that the VP had approved over 10 different push-based messages for logins that he was not involved in. When the VP was asked why he approved logins for logins he was not actually doing, his response was, “They (IT) told me that I needed to click on Approve when the message appeared!”

And there you have it in a nutshell. The VP did not understand the importance (“the WHY”) of why it was so important to ONLY approve logins that they were participating in. Perhaps they were told this. But there is a good chance that IT, when implementing the new push-based MFA, instructed them as to what they needed to do to successfully log in, but failed to mention what they needed to do when they were not logging in if the same message arrived. Most likely, IT assumed that anyone would naturally understand that it also meant not approving unexpected, unexplained logins. Did the end user get trained as to what to do when an unexpected login arrived? Were they told to click on “Deny” and to contact IT Help Desk to report the active intrusion?

Or was the person told the correct instructions for both approving and denying and it just did not take? We all have busy lives. We all have too much to do. Perhaps the importance of the last part of the instructions just did not sink in. We can think we hear and not really hear. We can hear and still not care.

Posted on October 21, 2021 at 6:25 AM

Using AI to Scale Spear Phishing

The problem with spear phishing is that it takes time and creativity to create individualized enticing phishing emails. Researchers are using GPT-3 to attempt to solve that problem:

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones, it’s that they can be generated at much greater scale.

Defcon presentation and slides. Another news article.

Posted on August 13, 2021 at 6:16 AM

Malware Hidden in Call of Duty Cheating Software

News article:

Most troublingly, Activision says that the “cheat” tool has been advertised multiple times on a popular cheating forum under the title “new COD hack.” (Gamers looking to flout the rules will typically go to such forums to find new ways to do so.) While the report doesn’t mention which forum they were posted on (that certainly would’ve been helpful), it does say that these offerings have popped up a number of times. They have also been seen advertised in YouTube videos, where instructions were provided on how gamers can run the “cheats” on their devices, and the report says that “comments [on the videos] seemingly indicate people had downloaded and attempted to use the tool.”

Part of the reason this attack could work so well is that game cheats typically require a user to disable key security features that would otherwise keep a malicious program out of their system. The hacker is basically getting the victim to do their own work for them.

“It is common practice when configuring a cheat program to run it with the highest system privileges,” the report notes. “Guides for cheats will typically ask users to disable or uninstall antivirus software and host firewalls, disable kernel code signing, etc.”

Detailed report.

Posted on April 2, 2021 at 6:00 AM

Details of a Computer Banking Scam

This is a longish video that describes a profitable computer banking scam that’s run out of call centers in places like India. There’s a lot of fluff about glitterbombs and the like, but the details are interesting. The scammers convince the victims to give them remote access to their computers, and then convince them that they’ve mistyped a dollar amount and received a large refund that they didn’t deserve. Then they convince the victims to send cash to a drop site, where a money mule retrieves it and forwards it to the scammers.

I found it interesting for several reasons. One, it illustrates the complex business nature of the scam: there are a lot of people doing specialized jobs in order for it to work. Two, it clearly shows the psychological manipulation involved, and how it preys on the unsophisticated and vulnerable. And three, it’s an evolving tactic that gets around banks increasingly flagging and blocking suspicious electronic transfers.

Posted on March 22, 2021 at 6:15 AM

Hiding Malware in Social Media Buttons

Clever tactic:

This new malware was discovered by researchers at Dutch cyber-security company Sansec that focuses on defending e-commerce websites from digital skimming (also known as Magecart) attacks.

The payment skimmer malware pulls its sleight of hand trick with the help of a double payload structure where the source code of the skimmer script that steals customers’ credit cards will be concealed in a social sharing icon loaded as an HTML ‘svg’ element with a ‘path’ element as a container.

The syntax for hiding the skimmer’s source code as a social media button perfectly mimics an ‘svg’ element named using social media platform names (e.g., facebook_full, twitter_full, instagram_full, youtube_full, pinterest_full, and google_full).

A separate decoder deployed separately somewhere on the e-commerce site’s server is used to extract and execute the code of the hidden credit card stealer.

This tactic increases the chances of avoiding detection even if one of the two malware components is found since the malware loader is not necessarily stored within the same location as the skimmer payload and their true purpose might evade superficial analysis.
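Based only on that description, here is a minimal, hypothetical sketch (browser-side TypeScript) of the concealment structure: a share-button SVG whose ‘path’ element carries an encoded blob, and a decoder placed elsewhere that extracts and runs it. The element name, the data-p attribute, and the harmless console.log payload are illustrative choices, not code from the actual skimmer.

```typescript
// Hypothetical sketch of the double-payload structure described in the report.
// What the compromised page might contain: an innocuous-looking social icon
// whose <path> carries a base64 blob alongside its drawing data.
const hiddenButton = `
  <svg id="facebook_full" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
    <path d="M12 2C6.48 2 2 6.48 2 12"
          data-p="Y29uc29sZS5sb2coJ3NraW1tZXIgd291bGQgcnVuIGhlcmUnKTs="/>
  </svg>`;

// The decoder, deployed separately elsewhere on the site, locates the icon,
// pulls the blob out of the path element, decodes it, and executes it.
function runHiddenPayload(markup: string): void {
  const doc = new DOMParser().parseFromString(markup, "image/svg+xml");
  const blob = doc.querySelector("path")?.getAttribute("data-p");
  if (!blob) return;
  const source = atob(blob);   // base64 -> JavaScript source
  new Function(source)();      // here: console.log('skimmer would run here');
}

runHiddenPayload(hiddenButton);
```

Because the icon by itself is inert markup and the decoder by itself looks like generic site code, a scanner that finds only one of the two pieces has little to flag.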

Posted on December 7, 2020 at 6:32 AM

Artificial Personas and Public Discourse

Presidential campaign season is officially, officially, upon us now, which means it’s time to confront the weird and insidious ways in which technology is warping politics. One of the biggest threats on the horizon: artificial personas are coming, and they’re poised to take over political debate. The risk arises from two separate threads coming together: artificial intelligence-driven text generation and social media chatbots. These computer-generated “people” will drown out actual human discussions on the Internet.

Text-generation software is already good enough to fool most people most of the time. It’s writing news stories, particularly in sports and finance. It’s talking with customers on merchant websites. It’s writing convincing op-eds on topics in the news (though there are limitations). And it’s being used to bulk up “pink-slime journalism”—websites meant to appear like legitimate local news outlets but that publish propaganda instead.

There’s a record of algorithmic content pretending to be from individuals, as well. In 2017, the Federal Communications Commission had an online public-commenting period for its plans to repeal net neutrality. A staggering 22 million comments were received. Many of them—maybe half—were fake, using stolen identities. These comments were also crude; 1.3 million were generated from the same template, with some words altered to make them appear unique. They didn’t stand up to even cursory scrutiny.

These efforts will only get more sophisticated. In a recent experiment, Harvard senior Max Weiss used a text-generation program to create 1,000 comments in response to a government call on a Medicaid issue. These comments were all unique, and sounded like real people advocating for a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. This being research, Weiss subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. The next group to try this won’t be so honorable.

Chatbots have been skewing social-media discussions for years. About a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote. An Oxford Internet Institute report from last year found evidence of bots being used to spread propaganda in 50 countries. These tended to be simple programs mindlessly repeating slogans: a quarter million pro-Saudi “We all have trust in Mohammed bin Salman” tweets following the 2018 murder of Jamal Khashoggi, for example. Detecting many bots with a few followers each is harder than detecting a few bots with lots of followers. And measuring the effectiveness of these bots is difficult. The best analyses indicate that they did not affect the 2016 US presidential election. More likely, they distort people’s sense of public sentiment and their faith in reasoned political debate. We are all in the middle of a novel social experiment.

Over the years, algorithmic bots have evolved to have personas. They have fake names, fake bios, and fake photos—sometimes generated by AI. Instead of endlessly spewing propaganda, they post only occasionally. Researchers can detect that these are bots and not people, based on their patterns of posting, but the bot technology is getting better all the time, outpacing tracking attempts. Future groups won’t be so easily identified. They’ll embed themselves in human social groups better. Their propaganda will be subtle, and interwoven in tweets about topics relevant to those social groups.

Combine these two trends and you have the recipe for nonhuman chatter to overwhelm actual political speech.

Soon, AI-driven personas will be able to write personalized letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinizing them. They will be able to pose as individuals on social media and send personalized texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the Internet. Not just on social media, but everywhere there’s commentary.

Maybe these persona bots will be controlled by foreign actors. Maybe it’ll be domestic political groups. Maybe it’ll be the candidates themselves. Most likely, it’ll be everybody. The most important lesson from the 2016 election about misinformation isn’t that misinformation occurred; it is how cheap and easy misinforming people was. Future technological improvements will make it all even more affordable.

Our future will consist of boisterous political debate, mostly bots arguing with other bots. This is not what we think of when we laud the marketplace of ideas, or any democratic political process. Democracy requires two things to function properly: information and agency. Artificial personas can starve people of both.

Solutions are hard to imagine. We can regulate the use of bots—a proposed California law would require bots to identify themselves—but that is effective only against legitimate influence campaigns, such as advertising. Surreptitious influence operations will be much harder to detect. The most obvious defense is to develop and standardize better authentication methods. If social networks verify that an actual person is behind each account, then they can better weed out fake personas. But fake accounts are already regularly created for real people without their knowledge or consent, and anonymous speech is essential for robust political debate, especially when speakers are from disadvantaged or marginalized communities. We don’t have an authentication system that both protects privacy and scales to the billions of users.

We can hope that our ability to identify artificial personas keeps up with our ability to disguise them. If the arms race between deep fakes and deep-fake detectors is any guide, that’ll be hard as well. The technologies of obfuscation always seem one step ahead of the technologies of detection. And artificial personas will be designed to act exactly like real people.

In the end, any solutions have to be nontechnical. We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.

Misinformation efforts are now common around the globe, conducted in more than 70 countries. This is the normal way to push propaganda in countries with authoritarian leanings, and it’s becoming the way to run a political campaign, for either a candidate or an issue.

Artificial personas are the future of propaganda. And while they may not be effective in tilting debate to one side or another, they easily drown out debate entirely. We don’t know the effect of that noise on democracy, only that it’ll be pernicious, and that it’s inevitable.

This essay previously appeared in TheAtlantic.com.

EDITED TO ADD: Jamie Susskind wrote a similar essay.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

EDITED TO ADD (6/4): This essay has been translated into Portuguese.

Posted on January 13, 2020 at 8:21 AM

Hacking Instagram to Get Free Meals in Exchange for Positive Reviews

This is a fascinating hack:

In today’s digital age, a large Instagram audience is considered a valuable currency. I had also heard through the grapevine that I could monetize a large following—or in my desired case—use it to have my meals paid for. So I did just that.

I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers—you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.

I reach out to restaurants in the area either via Instagram’s direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I’ve messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I’ve ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.

The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly—both direct messages and emails restaurants about a potential promotion. Since its inception, I haven’t even really logged into the account. I spend zero time on it. It’s essentially a robot that operates like a human, but the average viewer can’t tell the difference. And as the programmer, I get to sit back and admire its (and my) work.

So much going on in this project.

Posted on April 2, 2019 at 6:16 AM

Attacking Soldiers on Social Media

A research group at NATO’s Strategic Communications Center of Excellence catfished soldiers involved in a European military exercise—we don’t know what country they were from—to demonstrate the power of the attack technique.

Over four weeks, the researchers developed fake pages and closed groups on Facebook that looked like they were associated with the military exercise, as well as profiles impersonating service members both real and imagined.

To recruit soldiers to the pages, they used targeted Facebook advertising. Those pages then promoted the closed groups the researchers had created. Inside the groups, the researchers used their phony accounts to ask the real service members questions about their battalions and their work. They also used these accounts to “friend” service members. According to the report, Facebook’s Suggested Friends feature proved helpful in surfacing additional targets.

The researchers also tracked down service members’ Instagram and Twitter accounts and searched for other information available online, some of which a bad actor might be able to exploit. “We managed to find quite a lot of data on individual people, which would include sensitive information,” Biteniece says. “Like a serviceman having a wife and also being on dating apps.”

By the end of the exercise, the researchers identified 150 soldiers, found the locations of several battalions, tracked troop movements, and compelled service members to engage in “undesirable behavior,” including leaving their positions against orders.

“Every person has a button. For somebody there’s a financial issue, for somebody it’s a very appealing date, for somebody it’s a family thing,” Sarts says. “It’s varied, but everybody has a button. The point is, what’s openly available online is sufficient to know what that is.”

This is the future of warfare. It’s one of the reasons China stole all of that data from the Office of Personnel Management. If indeed a country’s intelligence service was behind the Equifax attack, this is why they did it.

Go back and read this scenario from the Center for Strategic and International Studies. Why wouldn’t a country intent on starting a war do it that way?

Posted on February 26, 2019 at 6:10 AM

