On Marcus Hutchins
Long and nuanced story about Marcus Hutchins, the British hacker who wrote most of the Kronos malware and also stopped WannaCry in real time. Well worth reading.
US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February.
The first of the new malware variants, COPPERHEDGE, is described as a Remote Access Tool (RAT) “used by advanced persistent threat (APT) cyber actors in the targeting of cryptocurrency exchanges and related entities.”
This RAT is known for its capability to help the threat actors perform system reconnaissance, run arbitrary commands on compromised systems, and exfiltrate stolen data.
TAINTEDSCRIBE is a trojan that acts as a full-featured beaconing implant with command modules, designed to disguise itself as Microsoft’s Narrator.
The trojan “downloads its command execution module from a command and control (C2) server and then has the capability to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”
Last but not least, PEBBLEDASH is yet another North Korean trojan acting like a full-featured beaconing implant and used by North Korean-backed hacking groups “to download, upload, delete, and execute files; enable Windows CLI access; create and terminate processes; and perform target system enumeration.”
It’s interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal—and probably required some complicated declassification maneuvering.
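The defensive value of publishing samples is that anyone, not just the big antivirus vendors, can check a suspicious file’s hash against the repository. A minimal sketch using VirusTotal’s v3 REST API (the all-zeros hash and `YOUR_API_KEY` are placeholders; the request is built but not sent):

```python
import urllib.request

VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def build_lookup(sha256: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) a VirusTotal v3 file-report request for a hash."""
    req = urllib.request.Request(VT_FILE_REPORT.format(sha256))
    req.add_header("x-apikey", api_key)  # VT authenticates via this header
    return req

# Placeholder hash standing in for a published COPPERHEDGE sample digest.
req = build_lookup("0" * 64, "YOUR_API_KEY")
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON report including each engine’s detection verdict.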
Me, I like reading the codenames.
Lots more on the US-CERT website.
The Army is developing a new electronic warfare pod capable of being put on drones and on trucks.
…the Silent Crow pod is now the leading contender for the flying flagship of the Army’s rebuilt electronic warfare force. Army EW was largely disbanded after the Cold War, except for short-range jammers to shut down remote-controlled roadside bombs. Now it’s being urgently rebuilt to counter Russia and China, whose high-tech forces—unlike Afghan guerrillas—rely heavily on radio and radar systems, whose transmissions US forces must be able to detect, analyze and disrupt.
It’s hard to tell what this thing can do. Possibly a lot, but it’s all still in prototype stage.
Historically, cyber operations occurred over landline networks and electronic warfare over radio-frequency (RF) airwaves. The rise of wireless networks has caused the two to blur. The military wants to move away from traditional high-powered jamming, which filled the frequencies the enemy used with blasts of static, to precisely targeted techniques, designed to subtly disrupt the enemy’s communications and radar networks without their realizing they’re being deceived. There are even reports that “RF-enabled cyber” can transmit computer viruses wirelessly into an enemy network, although Wojnar declined to confirm or deny such sensitive details.
[…]
The pod’s digital brain also uses machine-learning algorithms to analyze enemy signals it detects and compute effective countermeasures on the fly, instead of having to return to base and download new data to human analysts. (Insiders call this cognitive electronic warfare). Lockheed also offers larger artificial intelligences to assist post-mission analysis on the ground, Wojnar said. But while an AI small enough to fit inside the pod is necessarily less powerful, it can respond immediately in a way a traditional system never could.
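The core signal-analysis step the article describes (detect an emitter, characterize its frequency, compute a response) can be illustrated with a toy example. This is a naive discrete Fourier transform picking out a simulated tone; real cognitive-EW software is classified and vastly more sophisticated:

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency component of a real signal (naive DFT)."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; mirrored bins carry no new info
        s = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    return best_bin * sample_rate / n

# Simulate an intercepted 1 kHz tone sampled at 8 kHz; a jammer could then
# concentrate energy at exactly that frequency instead of blasting static.
rate = 8000
signal = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(64)]
```

The point of the toy is the workflow: characterize first, then respond narrowly, which is the shift away from broadband jamming the article describes.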
EDITED TO ADD (5/14): Here are two reports on Russian electronic warfare capabilities.
The attack requires physical access to the computer, but it’s pretty devastating:
On Thunderbolt-enabled Windows or Linux PCs manufactured before 2019, his technique can bypass the login screen of a sleeping or locked computer—and even its hard disk encryption—to gain full access to the computer’s data. And while his attack in many cases requires opening a target laptop’s case with a screwdriver, it leaves no trace of intrusion and can be pulled off in just a few minutes. That opens a new avenue to what the security industry calls an “evil maid attack,” the threat of any hacker who can get alone time with a computer in, say, a hotel room. Ruytenberg says there’s no easy software fix, only disabling the Thunderbolt port altogether.
“All the evil maid needs to do is unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate, and the evil maid gets full access to the laptop,” says Ruytenberg, who plans to present his Thunderspy research at the Black Hat security conference this summer, or the virtual conference that may replace it. “All of this can be done in under five minutes.”
Lots of details in the article above, and on the attack website. (We know it’s a modern hack, because it comes with its own website and logo.)
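On Linux, you can at least see what protection level your Thunderbolt controller enforces, since the kernel exposes it through sysfs. A small sketch (the path and level names follow the kernel’s Thunderbolt documentation; `none` means any attached device gets DMA):

```python
from pathlib import Path

def thunderbolt_security_level(
        sysfs: str = "/sys/bus/thunderbolt/devices/domain0/security"):
    """Report the Thunderbolt security level Linux exposes via sysfs.

    Documented levels include 'none' (legacy, DMA open to any device),
    'user' (devices must be approved), 'secure' (challenge-based approval),
    and 'dponly' (DisplayPort tunneling only). Returns None if the sysfs
    interface isn't present on this machine.
    """
    p = Path(sysfs)
    return p.read_text().strip() if p.exists() else None
```

Note that Thunderspy’s firmware-rewriting attack is precisely about subverting these levels on pre-2019 hardware, so this check is informative, not a guarantee.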
Intel responds.
EDITED TO ADD (5/14): More.
The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:
In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the “California Consumer Privacy Act.” Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.
Since then, it was substantially amended—that is, watered down—at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven’t seen much yet.
And we could have had that ballot initiative.
It looks like Alastair Mactaggart and others are back.
Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state’s ballot in November, and submitted its proposal to Sacramento.
This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people’s vote after all.
[…]
The new proposal would add more rights, including limits on the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s), and would require an opt-in to even collect such data.
The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.
And, just to push the tech giants from fury into full-blown meltdown, the new ballot measure would require a majority vote in the legislature for any amendments to the law, effectively stripping their vast lobbying powers and cutting off the multitude of ways the measure and its enforcement can be watered down within the political process.
I don’t know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.
EDITED TO ADD(5/17): It looks like this new ballot initiative isn’t going to be an improvement.
It’s the oldest squid attack on record:
An ancient squid-like creature with 10 arms covered in hooks had just crushed the skull of its prey in a vicious attack when disaster struck, killing both predator and prey, according to a Jurassic period fossil of the duo found on the southern coast of England.
This 200 million-year-old fossil was originally discovered in the 19th century, but a new analysis reveals that it’s the oldest known example of a coleoid (the class of cephalopods that includes octopuses, squid and cuttlefish) attacking prey.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Used Tesla components, sold on eBay, still contain personal information, even after a factory reset.
This is a decades-old problem. It’s a problem with used hard drives. It’s a problem with used photocopiers and printers. It will be a problem with IoT devices. It’ll be a problem with everything, until we decide that data deletion is a priority.
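Until data deletion becomes a priority, the best a seller can do before parting with hardware is overwrite-before-delete. A minimal sketch, with the important caveat in the comments that it’s only best-effort on flash storage like a Tesla’s media computer:

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Best-effort wipe: overwrite a file with zeros, sync, then unlink.

    Caveat: on SSDs and embedded flash, wear-leveling means the overwrite
    may land on different physical cells than the original data, so full-disk
    encryption with key destruction is the only reliable erase there.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # force the write through the page cache
    os.remove(path)
```

A plain `os.remove()` alone only unlinks the directory entry, which is exactly why data keeps turning up on secondhand drives, photocopiers, and car components.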
EDITED TO ADD (6/20): These computers were not factory reset. Apparently, the data was intentionally left on the computer so that the technicians could transfer it when upgrading the computer. It’s still bad, but a factory reset does work.
This is a good explanation of an iOS bug that allowed someone to break out of the application sandbox. A summary:
What a crazy bug, and Siguza’s explanation is very cogent. Basically, it comes down to this:
- XML is terrible.
- iOS uses XML for Plists, and Plists are used everywhere in iOS (and MacOS).
- iOS’s sandboxing system depends upon three different XML parsers, which interpret slightly invalid XML input in slightly different ways.
So Siguza’s exploit, which granted an app full access to the entire file system and more, uses malformed XML comments constructed in a way that one of iOS’s XML parsers sees its declaration of entitlements one way, and another XML parser sees it another way. The XML parser used to check whether an application should be allowed to launch doesn’t see the fishy entitlements because it thinks they’re inside a comment. The XML parser used to determine whether an already running application has permission to do things that require entitlements sees the fishy entitlements and grants permission.
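The parser-differential idea can be shown with a toy example. These two comment-strippers are illustrative stand-ins, not iOS’s actual parsers: one treats `<!-->` as the start of a comment that runs until the next `-->`, the other treats `<!-->` as a complete empty comment, so the same bytes yield different content:

```python
import re

# A plist-like document where the interesting key sits between two
# ambiguous "<!-->" markers (hypothetical, modeled on Siguza's technique).
DOC = "<plist><!--> <key>secret-entitlement</key> <!--></plist>"

def strip_comments_spec(doc: str) -> str:
    """Parser A: '<!--' opens a comment that ends at the next '-->'.

    Here '<!-->' opens a comment, so everything through the closing
    marker is dropped and the key is invisible.
    """
    return re.sub(r"<!--.*?-->", "", doc, flags=re.S)

def strip_comments_lenient(doc: str) -> str:
    """Parser B: treats '<!-->' as a complete (empty) comment.

    The key between the two markers survives as real content.
    """
    return doc.replace("<!-->", "")
```

A launch-time check using parser A sees no entitlements and lets the app run; a runtime check using parser B sees them and grants the privileges, which is the shape of the sandbox escape.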
This is fixed in the new iOS release, 13.5 beta 3.
Implementing 4 different parsers is just asking for trouble, and the “fix” is of the crappiest sort, bolting on more crap to check they’re doing the right thing in this single case. None of this is encouraging.
More commentary. Hacker News thread.
It’s the twentieth anniversary of the ILOVEYOU virus, and here are three interesting articles about it and its effects on software design.
Interesting story of malware hidden in Google Apps. This particular campaign is tied to the government of Vietnam.
At a remote virtual version of its annual Security Analyst Summit, researchers from the Russian security firm Kaspersky today plan to present research about a hacking campaign they call PhantomLance, in which spies hid malware in the Play Store to target users in Vietnam, Bangladesh, Indonesia, and India. Unlike most of the shady apps found in Play Store malware, Kaspersky’s researchers say, PhantomLance’s hackers apparently smuggled in data-stealing apps with the aim of infecting only some hundreds of users; the spy campaign likely sent links to the malicious apps to those targets via phishing emails. “In this case, the attackers used Google Play as a trusted source,” says Kaspersky researcher Alexey Firsh. “You can deliver a link to this app, and the victim will trust it because it’s Google Play.”
[…]
The first hints of PhantomLance’s campaign focusing on Google Play came to light in July of last year. That’s when Russian security firm Dr. Web found a sample of spyware in Google’s app store that impersonated a downloader of graphic design software but in fact had the capability to steal contacts, call logs, and text messages from Android phones. Kaspersky’s researchers found a similar spyware app, impersonating a browser cache-cleaning tool called Browser Turbo, still active in Google Play in November of that year. (Google removed both malicious apps from Google Play after they were reported.) While the espionage capabilities of those apps were fairly basic, Firsh says that they both could have expanded. “What’s important is the ability to download new malicious payloads,” he says. “It could extend its features significantly.”
Kaspersky went on to find tens of other, similar spyware apps dating back to 2015 that Google had already removed from its Play Store, but which were still visible in archived mirrors of the app repository. Those apps appeared to have a Vietnamese focus, offering tools for finding nearby churches in Vietnam and Vietnamese-language news. In every case, Firsh says, the hackers had created a new account and even Github repositories for spoofed developers to make the apps appear legitimate and hide their tracks.
EDITED TO ADD (7/1): This entry has been translated into Spanish.