Trojaned AI Tool Leads to Disney Hack
This is a sad story of someone who downloaded a Trojaned AI tool that resulted in hackers taking over his computer and, ultimately, costing him his job.
Bob • March 4, 2025 9:59 AM
@Clive
Our policies forbid downloading and running random software. The sooner people get it through their heads that their work computers aren’t their personal computers, the better. This isn’t exactly a drive-by attack.
Clive Robinson • March 4, 2025 11:22 AM
@ Bob,
With regards,
“The sooner people get it through their heads that their work computers aren’t their personal computers, the better.”
The same applies the other way, which is why I indicated BYOD is such a very bad idea.
The important paragraph in the article to note is,
“During the pandemic, companies quickly made sure workers could access systems from home—and hackers soon realized home computers had become corporate back doors.”
Thus the employer became an “insider attacker” on the “employees’ personal home computers” in so many cases, as it was “the cheap option” for the employer…

So looking a little deeper, according to the article, the victim’s troubles started,
“when he downloaded free software from popular code-sharing site GitHub while trying out some new artificial intelligence technology on his home computer.”
“… the AI assistant was actually malware that gave the hacker behind it access to his [home] computer, and his entire digital life.
With the issue being that the victim had not used a “Password Manager” correctly as further indicated,
“The hacker gained access to 1Password, a password-manager that Van Andel used to store passwords and other sensitive information, as well as “session cookies,” digital files stored on his computer that allowed him to access online resources including Disney’s Slack channel.”
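The “session cookies” detail is worth dwelling on. A session cookie is a bearer token: after login (password plus any 2FA) the server issues an opaque identifier, and from then on mere possession of that identifier authenticates each request. A toy sketch of the server side (hypothetical names, not Slack’s or 1Password’s actual mechanism) shows why a stolen cookie sidesteps 2FA entirely:

```python
# Toy model of server-side session handling. The session store maps an
# opaque session id (the cookie value) to the already-authenticated user.
sessions = {"a1b2c3d4": "some.user"}  # session-id -> logged-in user

def handle_request(session_cookie: str) -> str:
    user = sessions.get(session_cookie)
    if user is None:
        return "401 Unauthorized"
    # No password check, no 2FA prompt here: a stolen cookie presented by
    # an attacker works exactly as well as the legitimate browser's copy.
    return f"200 OK (acting as {user})"
```

The fix servers use is to expire sessions quickly and bind them to other signals (IP range, device fingerprint), but none of that helps once the attacker is running code on the same machine the cookie lives on.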
The article is vague, but it appears there were at least two computers: the victim’s own computer and one issued by the employer. In theory, if very good “OpSec” was followed, then the two computers should have been the equivalent of “air-gapped”.

However I’ve yet to meet anyone who practices “very good OpSec”, and even half-brain-dead hackers not only know this but exploit it…
And the article claimed the mistake the victim made was not using 2FA on the password manager,
“the 1Password account—wasn’t itself protected by a second factor. It required just a username and password by default, and he hadn’t taken the extra step of turning on two-factor authentication.”
Lack of 2FA or better is a problem I all too often see with many user accounts and other things such as “security devices, software, and applications”. Hence my dislike of human-memorable passwords/phrases and most password managers (preferring our host’s old “folded paper in your wallet” option, as it’s more secure against outsider attacks).

But… likewise inadequate authentication, of which many 2FA systems are guilty. Hence my skepticism with regards to much of what is claimed to be 2FA.
The simple fact as I noted on this blog way more than a decade ago,
“You have to authenticate the transaction not the channel.”
The reason the channel all too often gets authenticated and not individual transactions is,
“User convenience”.
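The distinction can be made concrete. Channel authentication proves who opened the session; transaction authentication binds a code to the details of each individual operation, so a compromised endpoint or man-in-the-browser cannot silently alter them. A rough sketch of the latter, with illustrative field names only:

```python
import hashlib
import hmac

def transaction_code(key: bytes, payee: str, amount_cents: int, nonce: int) -> str:
    # The authenticator covers the transaction's own fields, so changing
    # the payee or amount in transit invalidates the code.
    msg = f"{payee}|{amount_cents}|{nonce}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:8]

def verify_transaction(key: bytes, payee: str, amount_cents: int,
                       nonce: int, code: str) -> bool:
    expected = transaction_code(key, payee, amount_cents, nonce)
    # Constant-time comparison to avoid leaking the code byte by byte
    return hmac.compare_digest(expected, code)
```

A session cookie, by contrast, authenticates only the channel: every transaction sent down it is trusted equally, which is exactly what the attacker in this story exploited.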
One thing is clear (if what is reported is accurate): the use of a password manager across multiple computers / devices was the “weak link” that allowed access to just about everything the victim had access to.

Arguably the password manager was the security “air gap crossing” component, and its use was an OpSec failure (and such happens more often than most would think). Hence my comment about “mitigation by segregation”.
lurker • March 4, 2025 12:48 PM
@Bruce
“Sad story” yes. And MSM are having a field day mangling the facts. Apart from the victim’s poor opsec, there is a question: should he have gone to LEA first, as he did, or should he have gone to his work security team? A snag with the second option is it appears the intruders may have uploaded to his work machine some NSFW material, which could be harder to deal with than simple theft of credit card details …
Bob • March 4, 2025 1:48 PM
BYOD is dumb AF. Typically the brainchild of a bean counter and an executive who can’t be trusted with an RJ45 connector. The cost savings are great until the 5th or 6th breach that results.
Anyone promoting BYOD should be retroactively fired.
MrC • March 4, 2025 7:50 PM
The article isn’t clear about whether the gen-AI nature of the software mattered. Was the malicious nature of this software somehow obscured inside its gen-AI functionality, or is this just the same plain, old “malware posing as useful software” attack that’s been around for decades?
ResearcherZero • March 4, 2025 11:03 PM
Maintaining work systems is bad enough without people monkeying around with them or polluting the environment with whatever that thing is they brought into the office.
There are those that do not care about reality and will happily pretend it does not exist.
This is the sad thing about reality, as much as I would like it to bend to my wishes, it rarely does so. Reality still requires regular maintenance, fact checking and re-analysis.
Others could not give a hoot about rules and the laws of physics and will attempt to defy them, no matter what the advice is, and they will keep on trying contrary to the danger.
Clive Robinson • March 5, 2025 2:53 AM
@ MrC
With regards the actual attack the article says,
‘Once someone has a keylogging Trojan program on his or her computer, “an attacker has nearly unrestricted access,” a 1Password spokesman said.’
So you are right the article is not clear, as it’s not in quite a few other respects, but I’m guessing it’s your last option of it,
“is just the same plain, old “malware posing as useful software” attack that’s been around for decades?”
Or a case of “new bottle, same old wine”.
My reasoning is twofold,
Firstly it’s the sort of thing certain types of attacker are well practiced in, and there is,
“A degree of truth in the old saying about dogs and vomit”.
(It’s how most “attributions” have been made in the past.)
And secondly in the article it says of the victim,
“His antivirus software hadn’t turned up anything on his PC, but he installed a second antivirus program that found the malware almost immediately.”
As far as I’m aware nobody has produced “antivirus software” that can look inside the weights of a current AI LLM’s DNN and determine it has built in malicious values.
Likewise I’ve not seen any “claims” that “antivirus software” can act as a set of “client side guide rails” that can detect “malicious communications to any current AI LLM” that a user has not produced themselves.
Whilst a lot has been talked about getting “current AI LLMs” to produce hallucinated output in source code, and some malicious code, the current consensus is that as “current AI LLMs” are just glorified “auto-complete systems”, malicious code would have to have been in the LLM’s “input corpus or user prompts”.

Whilst I would not rule out a possible weird combination of hallucination and risky code/prompt, I do think the probability of it is rather low. More importantly, if someone had found a way to do it reliably, they would have published a paper that the community choir would be loudly “singing about”.
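What antivirus software demonstrably can do is byte-level checks. One modest defence against trojaned downloads, where a project publishes release checksums, is verifying the file’s SHA-256 before running it. A minimal sketch (it only tells you the bytes match what the publisher listed; it says nothing about whether the publisher’s tool is itself benign, or whether the checksum page was compromised along with the download):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large downloads don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    # Normalise case/whitespace, since published checksums vary in format
    return sha256_of(path) == published_hex.strip().lower()
```

For the GitHub case in the article even this would only help if the release author signed or published checksums out of band; a trojaned repo can of course publish checksums of its own trojaned binaries.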
Clive Robinson • March 5, 2025 3:52 AM
@ ResearcherZero, ALL,
With regards,
“Maintaining work systems is bad enough without people monkeying around with them or polluting the environment with whatever that thing is they brought into the office.”
Whilst that is true enough, I suspect the issue here was a little more subtle than most, hence my mentioning of “segregation” crossing. Which is something I looked into and talked about on this blog some time before Stuxnet came along [1].
If the WSJ article is to be believed then the attack time line was,
1, The victim used a password manager for all his passwords.
2, He did not use 2FA on the password manager, thus it was vulnerable to “key logging”.
3, On his own computer –not his work computer– he downloaded an AI interface program that had a key logger built in.
4, The malicious attacker used the key logger to get the username and password for the password manager.
5, The malicious attacker used these to access the victim’s password manager.

6, The malicious attacker in effect downloaded all the victim’s usernames and passwords and corresponding server URLs.
7, The malicious attacker then impersonated the victim.
8, In order to be able to extort the victim the malicious attacker used a number of techniques to put pressure on the victim (some of which are mentioned indirectly in the article).
9, When the extortion failed the malicious attacker published information they had obtained by impersonating the victim.
What is less clear is how the malicious actor accessed the computers it is said they did.

However it is also unclear what version of the password manager was in use. That is, if it was “local” as was once available, or entirely cloud-vault subscription based as is apparently now the only option (something the company received user complaints over),
https://en.m.wikipedia.org/wiki/1Password
The point is that the malicious attacker gained “access” to the “home computer” then, using the victim’s password manager, pivoted their attack across the segregation / gap to gain access to the victim’s work servers etc.

However, how the victim’s work computer was attacked is not at all indicated in the article.
[1] Quite some years ago now, I did some independent research on how to “get malware on to voting machines that are air-gapped” as a Proof of Concept (to have the whole very bad idea of “electronic voting machines” dropped). I found several ways to “cross the air-gap” and reported one or two indirectly on this blog.
The main one I talked about was “fire and forget” software via a free-download game or similar that would infect a careless maintenance tech’s computer used for voting machine diagnostics / patches / updates. That computer should always be kept isolated, but in the case of many voting machine companies effectively was not.

Thinking about how to do such PoCs is an interesting “research project” in its own right, and has the advantage that you only have to think about how to get it to work in one direction, not both directions as you do when trying to illicitly acquire protected data.
lurker • March 5, 2025 12:30 PM
@Clive Robinson
“5, The malicious attacker used these to access the victim’s password manager.

6, The malicious attacker in effect downloaded all the victim’s usernames and passwords and corresponding server URLs.”
Thus displaying the weak point of password managers, which must be reinforced by strong opsec (including 2FA). “Airgapping” home and work machines should require separate password managers, or separate accounts on the same cloud-based system, and including separate 2FA tokens.
It might be idle to speculate whether the work culture would affect the success in managing this, i.e. entertainment vs. say, medical research.
Clive Robinson • March 5, 2025 1:40 PM
@ lurker, ALL,
As you note,
“Thus displaying the weak point of password managers, which must be reinforced by strong opsec (including 2FA).”
It’s just one of many weak points certain password managers have (read more below).
But importantly it is not just password managers that have this weak point. It’s way too many other items that security actually relies on as well, and most are not obvious to users.

Or for that matter to “Security Professionals” either… It’s actually hard even for “security professionals” to see not just “the complexity” but also “the interconnectivity”. The two combined give a very real “security nightmare”…

But as I said, I don’t like quite a few “password managers” because not only are they insecure in use and a single point of failure, they also give a false sense of security.

Anyone who has their “password vault” available to a 2nd, 3rd or more party, such as on an accessible or Internet-facing server, really does not understand “basic security principles” from a technical attacker’s perspective.
As for what to do,
“‘Airgapping’ home and work machines should require separate password managers,”
The word “entirely” should be in there, because the “loss of security” on one implies a higher probability of a loss of security on a second or more password manager that has any “commonality” with the first.
So these options should really be ruled out,
“… or separate accounts on the same cloud-based system, and including separate 2FA tokens.”
Oh, the problem with way too many supposed 2FA tokens is they are in effect “algorithmic”, with the main protection being fairly predictable. That is,
1, Variant by time derivative.
2, Variant by count derivative.
Neither is a good idea without other fairly extensive algorithmic protection.
But they almost all fail to the “root of trust” issue. Where a single integer underlies all of the algorithms and is often called the “seed”.
Some feel that using a “crypto secure” algorithm like AES solves this… It does not, because the seed becomes a “single point of failure” even if you try to hide it behind hash algorithms (which at the simplest fail to “dictionary attacks” and all that implies).
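The “time derivative” and “count derivative” variants map onto TOTP and HOTP respectively, and the shared-seed root-of-trust problem is visible in a few lines. A sketch following RFC 4226 / RFC 6238 (HMAC-SHA-1, 6 digits):

```python
import hashlib
import hmac
import struct
import time

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    # Everything derives from the one shared seed: steal it once and
    # every future code, for any counter value, is computable offline.
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(seed: bytes, step: int = 30) -> str:
    # "Variant by time derivative": the counter is just wall-clock time / step
    return hotp(seed, int(time.time()) // step)
```

Note that neither variant authenticates a transaction; both merely prove possession of the seed at a point in time, which is the channel-versus-transaction problem again.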
But as I indicated long ago when talking about “authenticating transactions” across air-gaps, and thus through “users”, trying to get the level of “entropy” required through a user without error is in many cases near impossible…

However using a Shannon Channel that does not go “through the user” immediately gives the probability of a “Simple Side Channel” within it (see G. Simmons on the use of redundancy for subliminal channels). That is, it can be in effect “overt” in protocol whilst being “covert” by not being easily visible to the user.
I’ve yet to see common “password managers” or “2FA algorithms/devices” that get even close to what high end attackers now have as technical capabilities.
It’s something that researchers need to address fairly promptly.
Full article; the linked WSJ one is behind a CSS-faded-out-text wall for me. The modern web means no idea who sees what otherwise: is it a paywall, is it geo, is it the presence of an adblocker here, or script control?
ResearcherZero • March 7, 2025 12:56 AM
@Clive Robinson
It’s the problem with the often confusing messages from tech companies and a lack of clear and uniform standards in procedure, naming conventions and the language they use. This creates confusion, for those outside of the cyber security profession, about the limits and confines of authentication methods and security tools, and how these may be exploited to jump gaps or move from system to system. Businesses may not enforce strict password policy, or are slow to enforce it, and may allow employees to use at-work credentials externally.

I often see businesses fail to provide unique credentials to employees in a timely manner, which leads to multiple employees sharing passwords to access a single account, along with poor logging and access controls and a failure to enforce rigorous credential policies.
Once credentials or access are gained on a home network for example, an attacker can then disguise their point of origin to attack the corporate network by using ORB networks to defeat IP blocking, proxy their connections and disguise their traffic.
I fear things may get a lot uglier before they get better.
A dangerous development for cyber security…
‘https://www.microsoft.com/en-us/security/blog/2025/03/05/silk-typhoon-targeting-it-supply-chain/
i-Soon
https://www.bitdefender.com/en-us/blog/hotforsecurity/us-indicts-10-professional-hackers-chinese-prc
Dave • March 8, 2025 10:16 AM
This was his fault for not following the policies/procedures that I guarantee you Disney had in place. No one to blame but himself for his actions.
ResearcherZero • March 24, 2025 12:50 AM
There are quite a few prompt injection attacks that have been discovered along with methods
to insert and then activate backdoors in LLMs. Disturbingly, malign infrastructure to target and poison AI agents has been in place for some time and may have already spent that time preparing the way for what could eventually be used to deploy supply chain attacks.
This could also be used to poison the weights in a manner that is difficult to detect.
LLM grooming
Kremlin saturating search results and web crawlers at scale.
‘https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global
The manipulation is subtle and not easily visible.
https://www.forbes.com/sites/torconstantino/2025/03/10/russian-propaganda-has-now-infected-western-ai-chatbots—new-study/
Network appears custom-built to flood Large Language Models.
https://www.americansunlight.org/updates/new-report-russian-propaganda-may-be-flooding-ai-models
ResearcherZero • March 24, 2025 1:13 AM
@Dave
There are many managers who do not follow the company security policy and direct – or allow – the staff to ignore security policies without warning them of the risks. There are an endless number of techniques to target staff within an organization and take advantage of their belief in their own skills and knowledge or their ignorance. The human factor is often a key ingredient in both highly sophisticated attacks and unsophisticated attacks.
The staff are ‘prey’ in this sense. As such we should have a degree of sympathy that they are the ones who then take the fall for the affected company, rather than the management.
With the right payload for the right circumstances, any security can be penetrated. There will be a human who is exploited in that kill chain and they will suffer impact from it.
ResearcherZero • March 24, 2025 1:46 AM
There for the plucking.
A single OPSEC failure is not required for TAO. A target does not need to click on a link or interact in any way to conduct a successful operation. They can then be geo-located.
A 1,000 lb payload can be easily delivered to their location from 400 miles away by cruise missile, and any human collateral in the immediate vicinity will be incinerated.
Perhaps software contained a zero day or backdoor. Perhaps they logged into their server to configure the VPN access, leaving their real IP within the logs. Perhaps, perhaps, perhaps.
An expert review panel set up by the White House recommended the government cease the mass collection of metadata …for the following reasons which Michael Hayden discusses here:
‘https://www.youtube.com/watch?v=kV2HDM86XgI&t=1079s
“We cannot discount the risk, in light of the lessons of our own history, that at some point in the future, high-level government officials will decide that this massive database of extraordinarily sensitive private information is there for the plucking.”
ResearcherZero • March 24, 2025 2:04 AM
Incidentally, DOGE deployed an AI agent ahead of schedule, which it plans on rolling out further. Not just private companies would like to get their hands on government data.
Rapidly rolling out any unqualified tool in such conditions is exceptionally dangerous.
‘https://theconversation.com/doge-threat-how-government-data-would-give-an-ai-company-extraordinary-power-250907
Clive Robinson • March 4, 2025 8:29 AM
Sad yes, but it was going to happen to someone, as the industry has made a “Target Rich Environment” that includes any and all ICT Systems that connect to external communications like the Internet.
Thus somebody’s number was going to come up, and it was this guy’s, and he’s become in effect a modern-day outcast / leper.

Welcome to the modern world, where you are to blame for the actions of others because you are doubly an “easy target”. It more commonly goes by the name of “victim blaming”.
Oh and if you follow the “usual advice” it’s really not going to change anything with respect to your vulnerability.
So what can you do?
First realise that you are in a “Red Queen’s Race” where no matter how hard you try you are eventually going to lose.
Thus as expressed back in the 1983 movie WarGames, the only way to draw or win is,
“A strange game. The only winning move is not to play.”
So think on that carefully then reread the article.
Two things you should see and note,
1, Most security products are too fragile to work reliably.

2, Lack of mitigation by segregation etc gave the attacker free rein.
But let’s be a bit more blunt,
“Adding junk software won’t noticeably strengthen a badly designed system, in fact it will probably make it break more easily”
Which is the history of most consumer and commercial security products. With even higher security products for “Government Agencies/Entities” failing on a regular basis.
In part because,
“All the consumer and commercial systems are broken by design”.
And it gets worse because,
“Most ‘security tool’ software/Apps and devices since the early AV days back before the 1990’s Internet kicked off were ‘junk’ and still are”.
Because there was and still is little or no incentive to make them otherwise. In fact it’s easy to find reasons why they are kept at junk status by considered design.
You get told you “have to have” AV / FireWall etc etc etc. So you have to “buy it” or as you are told “be at risk”. What they don’t tell you is buying it usually does not really change your “risk profile” except adversely.
So you are in effect a “captured market” that is seen as “something to milk dry” by the producers who have no incentive to do a proper job as that would “kill the profit”.
Have a look at Alphabet/Google and the Android and the Chrome Browser products. They very deliberately stop you having any type of effective security of worth, because they make most of their revenue by selling you as a product… Because you would not be a profitable product if you had effective security.
Some do try, which is why Alphabet/Google have not just forced onto your devices identifiers you can not change or stop being broadcast; they are yet again changing things to stop effective security products from working,
You can read more on this at,
https://www.theregister.com/2025/03/04/google_android/
So when you actually get down to it you realise the only way to improve your security is by,
“Using effective segregation mitigations”.
Anything else is just not going to work for you, long term, short term, or now…
The only things stopping you getting completely violated are,
1, Your turn has not yet come up.
2, When it does, and it will, you have ensured there is nothing to steal or ransom.
3, Anything of importance is not connected by communications thus can not be reached by external attackers.
Which unfortunately leaves another issue,
4, Employers acting as inside attackers.
Yup, due to lockdown, employers forced many employees to install irremovable junk on the employees’ personal devices as an extension to the ludicrously insecure “Bring Your Own Device” (BYOD) nonsense.