Duqu 2.0

Kaspersky Labs has discovered and publicized details of a new nation-state surveillance malware system, called Duqu 2.0. It’s being attributed to Israel.

There are a lot of details, and I recommend reading them. There was probably a Kerberos zero-day vulnerability involved, allowing the attackers to send updates to Kaspersky’s clients. There’s code specifically targeting anti-virus software, both Kaspersky’s and others’. The system includes anti-sniffer defenses and packet-injection code. It’s designed to reside in RAM so that it can better avoid detection. This is all very sophisticated.

Eugene Kaspersky wrote an op-ed condemning the attack—and making his company look good—and almost, but not quite, comparing attacking his company to attacking the Red Cross:

Historically companies like mine have always played an important role in the development of IT. When the number of Internet users exploded, cybercrime skyrocketed and became a serious threat to the security of billions of Internet users and connected devices. Law enforcement agencies were not prepared for the advent of the digital era, and private security companies were alone in providing protection against cybercrime -- both to individuals and to businesses. The security community has been something like a group of doctors for the Internet; we even share some vocabulary with the medical profession: we talk about ‘viruses’, ‘disinfection’, etc. And obviously we’re helping law enforcement develop its skills to fight cybercrime more effectively.

One thing that struck me from a very good Wired article on Duqu 2.0:

Raiu says each of the infections began within three weeks before the P5+1 meetings occurred at that particular location. “It cannot be coincidental,” he says. “Obviously the intention was to spy on these meetings.”

Initially Kaspersky was unsure all of these infections were related, because one of the victims appeared not to be part of the nuclear negotiations. But three weeks after discovering the infection, Raiu says, news outlets began reporting that negotiations were already taking place at the site. “Somehow the attackers knew in advance that this was one of the [negotiation] locations,” Raiu says.

Exactly how the attackers spied on the negotiations is unclear, but the malware contained modules for sniffing WiFi networks and hijacking email communications. But Raiu believes the attackers were more sophisticated than this. “I don’t think their style is to infect people connecting to the WiFi. I think they were after some kind of room surveillance—to hijack the audio through the teleconference or hotel phone systems.”

Those meetings are talks about Iran’s nuclear program, which we previously believed Israel spied on. Look at the details of the attack, though: hack the hotel’s Internet, get into the phone system, and turn the hotel phones into room bugs. Very clever.

Posted on June 12, 2015 at 6:18 AM • 59 Comments

Comments

Mathew Smith June 12, 2015 6:40 AM

In the documentary Citizenfour, there is a moment where Snowden implies that the VoIP phones in the hotel room could be hacked. He said it was a routine tactic.

alanm June 12, 2015 7:09 AM

I’m surprised it took this long for someone to attack one of the anti-virus vendors.

Like Kaspersky says in his article: there are only two types of companies, those that have been hacked and those that don’t know they’ve been hacked.

Winter June 12, 2015 8:05 AM

This is the result of the intelligence community going all in on offensive capability and putting nothing into defense.

They all want to build vulnerabilities and backdoors into the infrastructure so they can spy. Meanwhile, their home base is pilfered.

What use are the offensive cyber-war capabilities of the NSA, CIA, and FBI when they cannot protect their own turf against spies?

The same goes for Germany, where all the surveillance capabilities of its own secret service could not even protect its parliament from being rooted.
http://www.theregister.co.uk/2015/06/12/bundestag_malware_outbreak_confusion/

Anura June 12, 2015 8:06 AM

@alanm

What makes you think this is the first time antivirus vendors have been exploited? There have been plenty of exploits in the past: a buffer overflow while scanning a file or updating software means an otherwise harmless file downloaded by a user can cause arbitrary code to execute under the SYSTEM account. Complex software that interacts with everything on the system and runs with root/admin permissions is almost guaranteed to have serious exploits.

rgaff June 12, 2015 9:37 AM

Anura is right. This is certainly NOT the first time an antivirus vendor has been hacked… It’s just the first one I can remember that announced the fact to the world. Most companies, including antivirus vendors, prefer to keep quiet about such things, fearing that publicity of their failings will adversely affect their bottom line.

I am surprised and impressed that Kaspersky instead made it a public teachable moment… It seems the main factor in delaying the announcement so long is that they were responsibly giving time for all those 0-days to be fixed first…

Heartbleed June 12, 2015 9:57 AM

Kaspersky malware vector: all Windows exploits.
OPM malware vector: (apparently) all Windows exploits.
Mandiant malware vector: all Windows exploits.
RSA malware vector: all Windows exploits.
Online banking malware vector: all Windows exploits (Krebs).
Google in China malware vector: all Windows exploits.

Is Windows a hugely successful target because of its relative market share, or because of its Byzantine architecture that is difficult to secure? Who cares!!

Why would anyone continue to expose themselves with this ridiculously insecure OS?!

Naked Emperor June 12, 2015 10:12 AM

What heartbleed says just above: WTF, people- you keep showering in Windows gasoline, expect immolation. It’s like the worst slapstick vaudeville schtick, all these butthurt Windows slaves.

Roger Dodger June 12, 2015 10:27 AM

@Anura

Well said.

That does apply to their own networks, and they have been hacked before. Even the pursuit of performing malware analysis on a mass scale is guaranteed to produce some screw-ups. But here I mean targeted, APT-style attacks.

@Winter

Yes, the OPM hack appears to be one of the biggest clusterfucks I have seen in quite some time. And I see a new clusterfuck every day.

http://www.wired.com/2015/06/opm-breach-security-privacy-debacle/

Probably more like “they got everyone” than “four million”. And if they cross-check that data with live records (death, marriage, credit, etc.), they would hit the mother lode for identifying undercovers. Which I mention because they surely would. The data goes back to the 80s, one report said.

@Bruce, @all

‘Level of sophistication’: Kaspersky plays up the “high level of sophistication” strongly in these media appearances and reports. They do this as part of the sales presentation, both to themselves and to the audience. The reason is that attributing nation-state-level attacks is extremely vague work, and “level of sophistication” is a major weight there.

There are really four criteria used to deduce attribution for nation-state-level attacks:

  1. Level of sophistication
  2. Target analysis — “who” was hacked
  3. The scene of the crime: where the data eventually ends up, where the proxies are, where the attack came from.
  4. Signature identifiers in the code, including C&C (command-and-control) infrastructure.

All of these can and have been faked or otherwise obscured. In fact, no one can really say for sure who is government and who is not.

There are a variety of factors in all of these weights. For instance, “3” might include data on something like “working hours”.

“4” can include snippets of code similar to snippets found in previous attacks. Here, the code is attributed to a large family of seemingly US-based code.

Flaws, therefore, are not much looked at. There are some severe flaws. This was a highly risky operation.

  1. They reused code, probably, from previously used operations.
  2. They used the same attack code against Kaspersky as they did against global operations of high value targets.
  3. Many of those operations, especially the Kaspersky operation, would have been far safer to perform up close and personal. Sneakier, more cautious. Hack the hardware going in. Get someone in there who works for you, or get someone there to work for you. Or hack it, but hack it from closer up, where you are not digging around in the dark. Segment the hacks, if necessary. Figure out the scope of the network. Hit home systems, friends-of-friends systems. Work your way from the outside of the circle in. Kaspersky was a very hard target and should have gotten a much longer-term, safer approach.
  4. The signature identifiers in the code track back to certain nation states.
  5. C&C systems are a very poor way to exfiltrate information, for a nation state.
  6. The name “ugly.gorilla” was used for C&C communication, the report noted. “Ugly Gorilla” is a leader of a primary Chinese government hacking group, according to Mandiant. No other effort was made to paint this as Chinese. So that says “American all over it”, a bit too clearly, even if “hidden”. They took no other pains, even sloppily reusing code. Probably an emotion-laden “fuck you”. Though saying “fuck you China” on a gun you use to kill a Russian cop… er. Why not just put your driver’s license on it while you are at it?

This was caught, exposed, thrown to the press. But… the more serious threat for the hackers here was that they might have been caught and it kept secret, with true information then fed to them mixed with false information.

From the data, I do not think these systems are American. I think there may have been some American influence. Stuxnet, for instance, was probably based on some American ideas about attack strategies against the Siemens systems. America probably supplied Siemens vulnerability information.

There is a level of sophistication which makes it clear this is nation-state level. It appears to be Israeli. Maybe a joint Israeli-American effort, but whatever the case, a lot of sloppy work, with elements of high-level ‘nation state’ sophistication.

I do not think Israel was trying to finger America. They may have wanted to throw some confusion on the issue. Or they may have sincerely disliked China. (I have never met an Israeli who expresses such a view, but then again, I probably just don’t meet that sort.)

Lotsa, lotsa speculation though.

Very wide margins in all of this for disinformation, and clever misdirection.

On one level, everyone is assuming it is nation state because of the sophistication of the code… but on another level, they are ignoring clear areas where there is seemingly astounding lack of sophistication.

So either there are gaps in the sophistication… or that was sophisticated misdirection in and of itself.

Roger Dodger June 12, 2015 10:33 AM

@Matthew Smith

In the documentary Citizenfour, there is a moment where Snowden implies that the VoIP phones in the hotel room could be hacked. He said it was a routine tactic.

Have not seen it yet; I wait for most movies to hit rental. Hopefully tonight. I hear it is very good.

VoIP is often poorly installed, with default passwords left in place.

Not much trickery to that, unfortunately. Nor in hacking hotel security.

A lot of hotels have wide open wifis.

Stingrays are relatively inexpensive, and can be delivered abroad or reconstructed from innocuous parts and downloaded code. Cell towers are also static, and can be hacked; that requires sophistication.

Reconstructing or delivering stingrays overseas requires some level of sophistication.

grayslady June 12, 2015 12:45 PM

From an article in Spiegel.de:

“In 2011, Kaspersky analysts found a few oddities in the program code for the previous version of Duqu, which confirmed the suspicions. These suggested that the code’s authors were from a country in the GMT + 2 time zone, and that they worked noticeably less on Fridays and not at all on Saturdays, which corresponds to the Israeli work week, in which the Sabbath begins on Friday.”

To me, this is an impressive level of in-depth analysis by Kaspersky. I’ve always been curious about attribution to various state actors when I read U.S. statements saying that “such and such” a country was responsible for a U.S. hack, especially when it seems relatively easy to fake an IP address.
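As a rough illustration of that kind of work-week analysis (a minimal sketch with placeholder timestamps, not Kaspersky's actual tooling):

    from collections import Counter
    from datetime import datetime, timedelta, timezone

    # Placeholder compile timestamps (Unix epoch seconds); a real analysis
    # would pull these from the PE headers of many samples.
    timestamps = [1305529200, 1305615600, 1305702000, 1305788400]

    # Interpret each timestamp in the candidate time zone (GMT+2) and count
    # how much activity falls on each weekday.
    tz = timezone(timedelta(hours=2))
    activity = Counter(datetime.fromtimestamp(t, tz).strftime("%A") for t in timestamps)

    # A Sunday-to-Thursday pattern, with light Fridays and empty Saturdays,
    # is the work-week signature described above.
    for day, count in activity.most_common():
        print(day, count)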

Meanwhile, given what Kaspersky says about the financial resources required to write these types of intrusive programs, I guarantee that the money to finance this didn’t come from the Israeli budget but from the U.S. We already know, courtesy of Edward Snowden, that the NSA shares raw data with Israel. A quid pro quo comes to mind here. Let the U.S. finance the research but ostensibly keep its hands clean when Israel gets caught.

albert June 12, 2015 1:07 PM

@Heartbleed,

Thank you for mentioning the 408.23kg sasquatch in the room.

Windows™ is targeted because it is ubiquitous in the banking, political, legal, manufacturing and corporate sectors. All of those employees’ systems are lovely attack vectors. I think the military is less enthusiastic. I can’t imagine spy agencies using it, but they love it when others do.
.
Security aside, working with Windows™ was always a love/hate situation for me. In the past, I hoped it would die just because it sucked as an OS. Apparently, that wasn’t enough. With all its Achilles heels, will ‘security’ finally destroy it?

One can only hope.

.

65535 June 12, 2015 1:24 PM

If the code resides in RAM, would Address Space Layout Randomization (ASLR) have helped mitigate this attack?

https://en.wikipedia.org/wiki/Address_space_layout_randomization

I notice that Kaspersky Labs was using Domain Controllers, which means Windows. And then I read that ASLR is only enabled for certain executables and DLLs. So, there are holes in the Windows ASLR implementation.

“Microsoft’s Windows Vista (released January 2007) and later have ASLR enabled for only those executables and dynamic link libraries specifically linked to be ASLR-enabled. For compatibility, it is not enabled by default for other applications. Typically, only older software is incompatible and ASLR can be fully enabled by editing a registry entry “HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\MoveImages”, or by installing Microsoft’s Enhanced Mitigation Experience Toolkit.” – Wikipedia

https://en.wikipedia.org/wiki/Address_space_layout_randomization#Microsoft_Windows

I wonder if the EMET kit was installed in Kaspersky’s systems.
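If anyone wants to check, here is a minimal sketch (Python, Windows only) that just reads that MoveImages value; the value name and its meaning are taken from the Wikipedia passage quoted above, so verify against current Microsoft documentation before relying on it:

    import winreg  # standard library, Windows only

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "MoveImages")
            # 0xFFFFFFFF is commonly cited as "relocate all images".
            print("MoveImages =", hex(value))
        except FileNotFoundError:
            print("MoveImages not set; default opt-in ASLR behaviour applies")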

All in all, the malware is fairly amazing [using magic strings to re-infect systems].

82de478ea93bdd87 June 12, 2015 1:52 PM

@ Heartbleed.

Very true!

I understand that a security company like Kaspersky Labs runs Windows on some machines, as its flagship product for non-enterprise customers is an antivirus package, but they can do better on their internal networks.

There are powerful and secure operating systems, like OpenBSD, that will make these attacks more difficult, if possible at all, while providing a full set of secure services.

Kerberos is too large, too difficult, too “engineered” to be trusted. Security must be simple, easy to understand, easy to audit. This is the reason we dropped Kerberos from OpenBSD a year ago: its codebase is too large to be effectively audited.

Windows itself is a bad operating system. It was built on a weak base, and its growth is full of mistakes and bad design decisions.

@ albert

No, Windows is not targeted because it is ubiquitous. It is targeted because it is poorly written. Period.

Why is it so difficult to understand? Let us talk about Linux and Unix operating systems. Linux is widely used: it is used in academia, it is used in data centers. Is it not a valuable target? It is! Only a few attacks are known against current Linux distributions, a lot fewer than the ones available on Windows.

What about OpenBSD? Is it one of the most secure operating systems because no one has any interest in attacking it? Certainly not! For an operating system that has no known attacks against its base system (apart from mistakes made by system managers and users), anyone developing a tool to attack it would become famous immediately. Most services running on a stock OpenBSD installation are the same ones available on Linux (OpenSSH, nsd, unbound…). Why are these services less vulnerable? Because OpenBSD has proactive security. It is not a lack of interest in these operating systems; it is that Linux and the BSDs are developed following good coding practices and modern secure coding styles.

Are we talking about getting in trouble? Windows.
Do we want something pretty but not very secure? OS X, iOS, Android…
Do we want a mainstream operating system that provides good security? Any reasonable Linux or BSD distribution will do the trick.
Do we want rock solid security? OpenBSD.

Are we talking about information security? I would drop any closed-source tool, not to mention the ones written in the United States, for obvious reasons.

Open source is the way to go for security; there are lots of people auditing the code, not only developers, and giving valuable feedback. Just look at the OpenBSD mailing lists to see how the open-source model should work. OpenBSD is not only the product of a dedicated team of highly skilled developers; it also has a lot of contributions from end users who read, understand and improve its source code.

A fundamental principle on computer and information security: you cannot trust a product whose source code you cannot see.

I cannot understand how Kaspersky Labs calls themselves a “security company” when they are running one of the worst operating systems on their internal networks.

964907216836963 June 12, 2015 2:14 PM

@Heartbleed: “Why would anyone continue to expose themselves with this ridiculously insecure OS?!”

Because if you find yourself the target of an attack of such sophistication, your OS doesn’t really seem to matter:
“US Navy wants 0-day intelligence to develop weaponware
[…]
The Navy’s definition of “widely used software” includes “Microsoft, Adobe, JAVA, EMC, Novell, IBM, Android, Apple, CISCO IOS, Linksys WRT, and Linux, and all others.””
http://www.theregister.co.uk/2015/06/12/us_navy_wants_0day_intelligence_to_develop_weaponware/

@Roger Dodger: “This was caught, exposed, thrown to the press. But… the more serious threat for the hackers here was that they might have been caught and it kept secret, with true information then fed to them mixed with false information.”

The hackers already knew that they had been caught, so there was no point in holding back the information. Better to turn it into publicity instead, while you wait for the 0-days to be patched, hoping that nobody leaks it beforehand (damaging your reputation) or uses it for blackmail:
“Another indication that a spear-phishing email was used was the fact that while Kaspersky was investigating the breach, the attackers wiped the mailbox and browsing history from the infected employee’s system, preventing Kaspersky from fully analyzing it.
The wipe occurred just four hours before Kaspersky identified the employee’s machine as “patient zero,” suggesting the intruders knew they’d been caught and were racing to eliminate evidence before Kaspersky could find it. Raiu suspects they may have been tipped off when Kaspersky disconnected many of its critical systems from the Internet after discovering the breach. He notes, however, that the company has backups and logs of the employee’s system, and once they’re able to compile and review them, he’s confident they’ll produce evidence of how the attackers got in.”
http://www.wired.com/2015/06/kaspersky-finds-new-nation-state-attack-network/

964907216836963 June 12, 2015 2:46 PM

@82de478ea93bdd87: “I cannot understand how Kaspersky Labs calls themselves a “security company” when they are running one of the worst operating systems on their internal networks.”

Well, I would think that they don’t use the Windows end-user default configuration, but probably have people paid to harden whatever OS they use. (I won’t go into the closed vs. open source sabotage possibilities debate.)

P.S.: I’m not trying to defend Windows. I just don’t see any point in finger pointing here. Pick OS(es) according to your wishes, but accept that there may be other use cases (e.g. compatibility).

Nick P June 12, 2015 2:52 PM

@ Bruce Schneier

The attack scheme is clever. Matthew Smith pointed out it’s a known attack vector. Even before that, I think you reported on security research where all kinds of conference phones were hacked. If it’s software and COTS, then my view is they might hack it. That’s why organized crime and certain security-focused groups hold critical meetings with no technology present at all, along with bug sweeps. You’d think the countries doing the most attacks, err, our U.N. “Security” Council, would’ve learned this by now. 😉

Far as the malware, I think that detailed document is the single greatest argument for a POLA architecture that I’ve seen to date. Compromising the tiniest things gave them access to pages and pages of privileges. That’s just ridiculous given we had systems that did the opposite in the 70’s and 80’s. It’s long past time our proprietary and FOSS teams start building POLA into their architectures. I’m even fine with it being subsidized by the taxpayers to the tune of billions of dollars if we can’t do it any other way.
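A tiny illustration of the POLA idea in ordinary code (hypothetical helper, nothing from the report): instead of handing a component ambient authority over the whole filesystem, the caller passes only the one handle it needs.

    def count_errors_ambient(path):
        # Ambient authority: nothing stops this helper from opening any
        # other file the whole process can reach.
        with open(path) as f:
            return sum("ERROR" in line for line in f)

    def count_errors_pola(log_file):
        # POLA style: the caller opens the one file and hands over only
        # that handle; the helper never sees the rest of the filesystem.
        return sum("ERROR" in line for line in log_file)

    with open("/var/log/scan.log") as f:   # hypothetical path
        print(count_errors_pola(f))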

Meanwhile, I’ll continue to recommend the more radical and minimalist architectures to any who ask while citing documents such as this when they complain. Securing a Windows system while using standard Windows apps/services is impossible or close to it. Securing a UNIX, Linux, or Mac system can be just as hard. Only the POLA and clean slate architectures stand a chance.

Nick P June 12, 2015 3:27 PM

@ 82de478ea93bdd87

“Only a few attacks are known against current Linux distributions, a lot fewer than the ones available on Windows.”

Linux has had hundreds of vulnerabilities. The nation states have attack kits for it. Black hats have even hit it. That it’s rarely hit these days is caused by two things: (a) most attacks on back-ends hit application layer where most vulnerabilities are found; (b) Linux desktops are rare, especially among juicy targets.

“Windows is not targeted because it is ubiquitous. It is targeted because it is poorly written. Period.”

Windows is targeted because it has 90+% market share and most targets run Windows (see above report). Even malware writers have said this before. Period.

“What about OpenBSD? ”

It’s had many potential vulnerabilities in OS and ported applications. They called them “bugs” (not vulnerabilities) and do patch most rather quickly. My method for OpenBSD hacking is “Watch mailing list for bug announcements. Quickly determine if it’s exploitable. If so, turn it into a payload. Hit target system. Profit.” So, it’s special in terms of quality focus but still can be hacked if anyone bothered to try. (Few bother.) Especially at layers below kernel that they don’t care about, privilege escalation because they refuse M.A.C., in a VM because Theo doesn’t worry about those issues, and covert channels because they probably never heard of them.

Note: Its reputation adds more risk as it’s often used for “fire-and-forget” appliances. This can be reinterpreted as “they don’t patch or they patch very rarely.” The more they trust it, the more likely the bad guys get in. 😉

“Do we want rock solid security?”

You’d have to use an OS that supports POLA with a security-focus. The mainstream OS’s lack POLA, have no trusted path, are full of covert channels, are architecturally weak, use complex implementations, run tons in kernel mode, and so on. Everything you don’t want in a secure system. That OpenBSD has these problems is why it was abandoned in Navy’s Montery Security Project as not secure enough or even structured enough to make a security argument. So, you’ll probably have to use a minimalist setup. Most “secure” desktops are cheating by combining separation kernels with user-mode Linux and security-critical stuff outside the kernel. Only two (usable) offerings in open-source aiming for a whole POLA stack are GenodeOS and JX.

So, feel free to download, use, or improve one of the POLA-enforcing systems. Anything else is false security waiting to be smashed because it lacks the very properties that make security work. Especially Windows-, UNIX-, or Mach-like systems. I remember geniuses working hard back in the day to reimplement the latter two securely. Even a 17,000 line of code UNIX couldn’t be secured while maintaining backward compatibility and took extensive changes to architecture/implementation. Good luck trying to protect a hundred Kloc or 1+ Mloc UNIX while maintaining risky architecture and compatibility with insecure applications. Although, the CERT people do appreciate such efforts to protect their job security.

albert June 12, 2015 4:27 PM

@82de478ea93bdd87,

If Linux/Unix/OpenBSD were ubiquitous in the areas I mentioned, they would certainly be more heavily targeted, regardless of how ‘secure’ they are, simply because that’s where the high value targets are.
.
Yes, Linux is used in academia, Apache is the most popular Web server. [A company I worked for (a Windows house) specifically bought a Linux box just to run their website, at the insistence of my Windows-hating boss, and it wasn’t even a security issue.]
.

Kenny June 12, 2015 4:35 PM

I suggest reading through the detailed PDF from Kaspersky on how Duqu 2.0 works. The level of sophistication, especially the techniques used to hide the malware from the Kaspersky products, is really staggering. The malware’s plug-in architecture had over 100 plug-ins.

Credit to Kaspersky for being transparent about the attack.

Rodger Dodger June 12, 2015 7:00 PM

@grayslady

“In 2011, Kaspersky analysts found a few oddities in the program code for the previous version of Duqu, which confirmed the suspicions. These suggested that the code’s authors were from a country in the GMT + 2 time zone, and that they worked noticeably less on Fridays and not at all on Saturdays, which corresponds to the Israeli work week, in which the Sabbath begins on Friday.”

Meanwhile, given what Kaspersky says about the financial resources required to write these types of intrusive programs, I guarantee that the money to finance this didn’t come from the Israeli budget but from the U.S. We already know, courtesy of Edward Snowden, that the NSA shares raw data with Israel. A quid pro quo comes to mind here. Let the U.S. finance the research but ostensibly keep its hands clean when Israel gets caught.

Smashing insight, Mrs Gray. (I so love the modern proliferation of the name “Gray” these days. Always reminds me of classy, innocuous sophistication with a bit of hard to fathom morality at the backbone.)

Reminds me a bit of the sloppiness in the Dubai assassination. A mixture of extraordinary sophistication, distinct innocuousness… and downright clumsiness.

But hard to argue with that, as contrary to the trumpets of the media, no one actually got caught, just fake faces on pictures.

I do not want to see Israel take much blame here, last thing they need is their foaming at the mouth critics get another morsel.

America can handle the criticism. And that is what Russia really, badly wants to believe.

@964907216836963

The hackers already knew that they had been caught, so there was no point in holding back the information. Better to turn it into publicity instead, while you wait for the 0-days to be patched, hoping that nobody leaks it beforehand (damaging your reputation) or uses it for blackmail:

The wipe occurred just four hours before Kaspersky identified the employee’s machine as “patient zero,” suggesting the intruders knew they’d been caught and were racing to eliminate evidence before Kaspersky could find it. Raiu suspects they may have been tipped off when Kaspersky disconnected many of its critical systems from the Internet after discovering the breach.

Insightful catches, 964907216836963.

I forgot there was a zero-day in there that got an immediate rush to patch. And the detail that there was evidence they were aware they had been caught. I took the latter as likely just SOP, and missed the obvious implications there. I do believe you are correct. I think my suspicious bias got in the way of seeing that clearly.

You seem to be talented at getting into people’s heads and figuring out their likely angle of perspective. So, two questions that are bugging me on this, if you could? I can’t figure this out.

What do you make of this?

Additionally, the security firm Symantec, which obtained samples of Duqu 2.0 provided by Kaspersky, uncovered more victims of the targeted attack code among its own customers, and found that some of these victims were in the US—a fact that would be cause for even more concern if the attack were perpetrated by the US government.

Also, what do you make of the “ugly.gorilla” reference? (And, considering how loaded that term is, the term “romanian.anti-hacker”, which is unknown to me.)

They would first send one of two “magic strings” to the driver—either “romanian.anti-hacker” or “ugly.gorilla”—from an IP address in Jakarta or Brazil. The strings triggered the driver to add the IP addresses to a whitelist so communication to them wouldn’t be flagged.

http://www.bloomberg.com/bw/articles/2014-05-19/u-dot-s-dot-charges-five-chinese-military-hackers-with-online-spying

The U.S. government and private security researchers have been tracking the depredations of Chinese hackers on U.S. companies for years, including the online footprints of Ugly Gorilla, the alias of one of the indicted hackers, Wang Dong. Ugly Gorilla has been identified with the campaigns of Comment group, one of the best-known teams that has been linked to intrusions at hundreds of organizations in the past decade.

I do not see any unrelated, previous usage of the exact term “romanian anti-hacker”, anywhere.

Just kind of weird. Like a personal signature at a crime scene or something.
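For what it's worth, the whitelisting mechanism in that Wired excerpt is simple to picture. A toy sketch (purely illustrative, not Duqu 2.0 code) of a listener that whitelists a sender once it sees one of the magic strings:

    import socket

    MAGIC = {b"romanian.anti-hacker", b"ugly.gorilla"}
    whitelist = set()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))              # arbitrary port for the sketch

    while True:
        data, (addr, _port) = sock.recvfrom(1024)
        if data.strip() in MAGIC:
            whitelist.add(addr)               # later traffic passes silently
        elif addr not in whitelist:
            print("flagging traffic from", addr)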

AnotherJustin June 12, 2015 7:15 PM

“grayslady • June 12, 2015 12:45 PM

From an article in Spiegel.de:

“In 2011, Kaspersky analysts found a few oddities in the program code for the previous version of Duqu, which confirmed the suspicions. These suggested that the code’s authors were from a country in the GMT + 2 time zone, and that they worked noticeably less on Fridays and not at all on Saturdays, which corresponds to the Israeli work week, in which the Sabbath begins on Friday.”

This is ridiculous on its own; it’s exactly what one would do if one wanted to point blame elsewhere. Of course one would emulate another time zone. That much is easy.

Rodger Dodger June 12, 2015 8:10 PM

@AnotherJustin

time zone suggesting israeli

This is ridiculous on its own; it’s exactly what one would do if one wanted to point blame elsewhere. Of course one would emulate another time zone. That much is easy.

Yes, it is interesting to speculate. But with one of these viruses (I believe it was some Kaspersky report) there was evidence indicating the developers worked US Eastern time. Now this. And why did it hack US targets, or use nicks like “ugly gorilla” and “romanian.anti-hacker” (the two together kind of seem to say ‘f u’)? Hard to believe either the US or Israel would do that.

Also, the Auschwitz connection seems like an ‘all too obvious’ Israel tie. Why would they do that, THEN hack the guys who catch this kind of thing, unless they wanted to get caught? And with plenty of evidence tying back to every “other” suspected US & Israeli hack job.

Nobody knows anything about most state hacks, the many Chinese ones included, probably. Why would China consistently hack from their own shore?

Only the suspected governments really know, and they are not saying. Possibly to their own detriment.

Non-evidence-led investigations can collect a lot of seemingly significant evidence pushing a favorite theory, and misstatements and erroneous conclusions can cling together and make one giant ball of paper that is entirely misleading.

Could be an attempt to cause relationship problems between the US and Israel, or to get ready for a really bad attack after years of building up a seemingly dead accurate attribution… all the while never literally naming names. Just implying.

82de478ea93bdd87 June 13, 2015 10:35 AM

@964907216836963

Well, I would think that they don’t use the Windows end-user default configuration, but probably have people paid to harden whatever OS they use. (I won’t go into the closed vs. open source sabotage possibilities debate.)

Closed versus open source is not only about sabotage possibilities, a real threat that becomes even more dangerous for U.S.-based corporations, but also about the lack of enough qualified eyes spotting bugs in the source.

@Nick P

It’s had many potential vulnerabilities in OS and ported applications. They called them “bugs” (not vulnerabilities) and do patch most rather quickly. My method for OpenBSD hacking is “Watch mailing list for bug announcements. Quickly determine if it’s exploitable. If so, turn it into a payload. Hit target system. Profit.” So, it’s special in terms of quality focus but still can be hacked if anyone bothered to try. (Few bother.) Especially at layers below kernel that they don’t care about, privilege escalation because they refuse M.A.C., in a VM because Theo doesn’t worry about those issues, and covert channels because they probably never heard of them.

How many of these potential vulnerabilities in OpenBSD can be exploited? We call “bugs” what is a bug and “vulnerability” what is a vulnerability. A local/remote denial of service may be annoying, but it is not a vulnerability; it is a bug that may degrade a service. Even if a service is vulnerable, how many chances does a cracker have to exploit it? Most services are chroot’ed, systrace limits the amount of damage that can be done, we have had partial ASLR support since 2003 and complete ASLR support since 2008, strong randomization, and lots of proactive security measures in place.
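As a rough sketch of the chroot-and-drop-privileges pattern those base services rely on (illustrative Python; it must be started as root, and the jail directory and uid/gid are placeholders for an unprivileged service account):

    import os

    JAIL = "/var/empty"
    UID, GID = 32767, 32767

    os.chroot(JAIL)      # filesystem view is now confined to the jail
    os.chdir("/")
    os.setgid(GID)       # drop group privileges first...
    os.setuid(UID)       # ...then user privileges; there is no way back to root

    # Anything the service does from here on is confined to the jail and
    # the unprivileged account.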

Virtual machines are insecure on most architectures; we fully support virtualization where it can be done right (e.g. some high-end SPARC64 gear), but we do not support it where it cannot be done the right way.

What do you mean when saying “especially at layers below kernel that we don’t care about”? Firmware? How can we audit/fix, let us say, the firmware on a hard disk or solid state drive? No one has the tools required to work at such a low level right now, except firmware developers. We cannot audit Intel ME or vPro technologies for obvious reasons, but there is a chance to apply some limits to the control imposed by the hardware where possible.

Its reputation adds more risk as it’s often used for “fire-and-forget” appliances. This can be reinterpreted as “they don’t patch or they patch very rarely.” The more they trust it, the more likely the bad guys get in. 😉

Sadly, you are right here. There are lots of appliances that are never patched because “OpenBSD is secure”. People should follow stable at least.

You’d have to use an OS that supports POLA with a security-focus. The mainstream OS’s lack POLA, have no trusted path, are full of covert channels, are architecturally weak, use complex implementations, run tons in kernel mode, and so on. Everything you don’t want in a secure system. That OpenBSD has these problems is why it was abandoned in Navy’s Montery Security Project as not secure enough or even structured enough to make a security argument. So, you’ll probably have to use a minimalist setup. Most “secure” desktops are cheating by combining separation kernels with user-mode Linux and security-critical stuff outside the kernel. Only two (usable) offerings in open-source aiming for a whole POLA stack are GenodeOS and JX.

Sorry, but it looks more like a legal requirement than a technical one. Now I am unemployed, but in the past I worked in environments where OpenBSD and OpenSSH were not allowed “because they were not listed as secure” while Windows was OK.

There are two approaches to computer security: the administrative one, based on certifications, and the technical one, based on facts. Sadly, only one of these models works. When exploiting a computer network, no one will care how many certifications your operating system and tools have if they are full of exploitable holes.

Ironically, OpenBSD is “not POSIX compliant” because it lacks certification (a process that in the end amounts to paying large sums of money to a group to certify each new release of the operating system) while Windows is fully POSIX compliant.

Perhaps the fact that I am unemployed (as I have been most of my life) skews my point of view on this matter, but I do not care about the certifications a software product has. I only care about facts, and I like OpenBSD.

Of course, to be really secure we cannot install anything we want on an OpenBSD server/workstation/appliance, but this restriction applies to POLA operating systems too. There are limits to what can be done with a computer if we want it to remain secure.

Nick P June 13, 2015 12:01 PM

@ 82de478ea93bdd87

“How many of these potential vulnerabilities in OpenBSD can be exploited? We call “bugs” what is a bug and “vulnerability” what is a vulnerability. ”

Don’t know, don’t even care. The point is that bugs pour out of OpenBSD development process much like others. I’ve seen processes such as Fagan Inspections, Cleanroom, Praxis’ Correct by Construction, and others where defects either existing or making it to a commit are rare. Praxis CbC defect rate was 0.1 per kloc for example. Then there’s the majority of projects doing the opposite. See the problem? You’d think quality-focused developers would learn and use what’s empirically proven to work.

“What do you mean when saying “especially at layers below kernel that we don’t care about”? Firmware? How can we audit/fix, let us say, the firmware on a hard disk or solid state drive? ”

You can either (a) not support untrustworthy hardware or (b) support it with a warning that it’s secure except for specific components that you name. If your team does that, then you’re good by my standards. Otherwise, putting OpenBSD on such a piece of hardware and then calling it a secure machine, as many do, is deceptive. I’m glad you mentioned SPARC, though, as it was one of my solutions to this problem back in the day. So much more open than the rest. The current solution for many projects is doing open hardware or firmware and then porting OSS to it. Back to SPARC: Aeroflex Gaisler even has open SPARC processors to use, along with boards to buy. Check them out.

And user-mode drivers for availability. Let’s not forget that. Proven in practice to help, via safety-critical microkernels and Minix 3. Does OpenBSD have that, or are drivers still a big risk area?

“Sorry, but it looks more like a legal requirement than a technical one. ”

Strawman followed by a bunch of others and red herrings whether intentional or not. There’s only one proven approach to building secure systems: a clear, appropriate set of requirements; a clear security policy for the system; a simplified, layered, modular design proven to implement both with all execution paths and error states fail-safe; an implementation of the same with language subset & guidelines to avoid most errors; strong verification that the above correspond to each other; testing of components & whole system; pen testing; covert channel analysis components & system; source to object code verification; trusted distribution (which OpenBSD just got lol); good installation and configuration guidance. Every system built this way resisted or contained most to all attacks while most systems not built this way have had many vulnerabilities.

The Montery Project, like similar ones, aimed to be certified to medium or high assurance security. That meant its software development had to adhere to above principles. Such principles produce a TCB that clearly illustrates to reviewers that it should do its job under about any conceivable circumstance. Such a security argument also requires small amount of kernel mode code, which BSD’s/Linux’s all fail on. Mandatory controls & strong isolation primitives are also required to contain damage from compromised applications. Theo is opposed to those IIRC. So, OpenBSD could never endure a high security evaluation because it lacks critical features and the rigorous development process necessary to make a believable, assurance argument. Montery switched to the STOP OS used in XTS-400 because it’s one of the few that has these and was only one of those they had on hand.

For examples of these principles in real systems, see Section 7 Assurance (p 117) in this old MLS system or this capability system (inspired by this one). These kinds of architectures and development processes are rigorous enough to believe the result will either be secure or the closest thing to it achievable. Whereas OpenBSD leverages the kind of monolithic, complex, ad-hoc style that’s traditionally been responsible for our systems’ vulnerabilities and ensured they’re severe. OpenBSD team is commendable in trying to hunt the bugs they produce and introducing mechanisms to attempt to reduce their impact. Ahead of mainstream OS’s in that regard. Yet, it’s far from secure at any given moment because it doesn’t do what security takes.

And I bet the team would never, under any circumstances, re-implement the system in a way that would pass an EAL6 evaluation or even have a convincing security argument for its many execution states. That’s why I classify it Low-Medium assurance. It’s also why it continues to have risks that are impossible (or improbable) with methods and architectures such as above. I cited GenodeOS and JX because they know what assurance takes and are at least attempting to use some of the above principles. Not endorsing their current security, just the attempt. I’d love to see a splinter group of OpenBSD’s talented people try to pick up where eg EROS left off and implement a whole OS on it piece by piece using established security engineering techniques. The results would probably be better than proprietary offerings given what they’ve accomplished using a monolithic approach.

They won’t, though. OpenBSD team, like others, failed to learn from the successes and failures of the past. Didn’t even learn UNIX lessons where secure attempts pulled everything non-essential out of kernel mode into a ring in between kernel and apps. Often used segments, too, for internal isolation in each layer. OpenBSD hasn’t even bested ancients such as UCLA Secure UNIX or Trusted Xenix in design assurance, much less a MILS or capability architecture.

So, the system is not to be trusted and people wanting to build high security must start with a different foundation. I would consider using OpenBSD for an untrusted component doing, say, Internet interface. I have before. Not the TCB, though. It will just get hacked by the pro’s and it will be too complex for me to even see the hack coming. That’s what always happens to monolithic systems with excess complexity, privilege, and bugs.

Justin June 13, 2015 2:53 PM

@ Nick P

I like OpenBSD, and I like reading the criticisms of it, too.

For a mainstream OS that runs a lot of mainstream software, I happen to think it’s pretty good. For me, it’s a “sensible” operating system that’s easy to administer without a lot of fuss or nasty surprises when it comes to day-to-day security.

The security model is simple. It’s based on user/group/world permissions and chroot. A lot of that mainstream software is ported with sensible options and configuration for best practical security.

Linux has MAC, courtesy of the NSA. But it isn’t even used in a lot of mainstream Linux distributions, and even in those distributions it isn’t applied to secure a lot of risky applications, such as web browsers. The “right” security policies simply haven’t been developed. Meanwhile, OpenBSD’s DAC is probably stronger than Linux’s DAC, simply because of code auditing and attention to quality.
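As a quick sanity check of that claim on an SELinux-enabled box (a sketch using the standard SELinux userland tools; the browser name is just an example):

    import subprocess

    # Is the policy even enforcing?
    print(subprocess.run(["getenforce"], capture_output=True, text=True).stdout.strip())

    # Look at the security context of a running browser: "unconfined_t"
    # in the label means the MAC policy is not actually confining it.
    out = subprocess.run(["ps", "-eZ"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "firefox" in line:
            print(line)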

Regarding some of the development philosophies that were mentioned:

… Praxis’ Correct by Construction, and others where defects either existing or making it to a commit are rare. Praxis CbC defect rate was 0.1 per kloc for example.

What I read about it is this: “Testing is still performed, but its role is to validate the correct-by-construction process rather than to find bugs.” That sounds like a mushy philosophy to me. You develop software according to some model, and then you are as likely to find issues in the model or specification itself as in the code. In real life, those might not be discovered until testing. And in real life, beyond the simple fact that more code has more errors, defect rate per kloc is a somewhat bogus metric, because you are not specifying what counts as one defect vs. more than one defect, whether it’s in the actual code, or in the specification or model, and furthermore you can only count what’s been discovered. The assumption of reaching perfection at the end of any development process is arrogant.

Don’t get me wrong, I’m not against these high-assurance methodologies, and your posts about them pique my curiosity. It’s just that I have a hard time shaking the impression that theory doesn’t always carry over into practice, where in the real world, software is developed poorly or not at all.

Hasboo-rah June 13, 2015 9:49 PM

A hearty welcome to the two unctuous newcomers Roger Dodger & anotherJustin! Extra international goodwill for oleaginous flattery and double overtime on the sabbath!

‘I do not want to see Israel take much blame here, last thing they need is their foaming at the mouth critics get another morsel.’ Yeah, those crazy foaming at the mouth critics, namely everybody in the world except the US ambassador to the UN.
http://www.ifamericansknew.org/stat/un.html for 190-1 yuks

‘This is ridiculous on its own; it’s exactly what one would do if one wanted to point blame elsewhere.’ Or if out-of-control genocidaires stepped on their crank like that time they tried to bomb the British foreign office. Or Manhattan.
http://www.veteranstoday.com/2015/01/12/mapping-911-fort-lee/

‘Hard to believe either US or Israel would do that.’ Yeah, that’s what Rachel Corrie said to herself right before she said SPFFTHPT.

‘Could be an attempt to cause relationship problems between the US and Israel’ Ah, the Eitan gambit! In other news, the only possible explanation for the Dubai clusterfuck is ‘a foreign agency, an enemy of Israel, is trying to harm Israel.’

‘years of building up a seemingly dead accurate attribution.’ What, did they find duqu2 in Khaled Mash’al’s ear?

Good job, bubeleh, blowing years of US-funded work in a Russian honeypot. At this rate Pollard will be home in no time.

By the way, Marc Grossman sez condolences on the F-16s, the replacement neutron bombs are in the mail.

Nick P June 13, 2015 11:54 PM

@ Justin

Fair opening. Most simultaneously neutral and positive statement I’ve seen yet lol. I’m going to address your points one by one to clear up some common misconceptions.

“For a mainstream OS that runs a lot of mainstream software, I happen to think it’s pretty good.”

The problem is that all the mainstream OS’s intentionally use bad architecture and security decisions. OpenBSD aims at being a non-mainstream OS that supposedly makes better decisions. It does in small, tactical ways. Yet, the overall big picture is just as bad: monolithic design with slow IPC, poor decomposition, tons of overprivileged code, no POLA, and an inherently insecure API. Largely SSDI: Same Shit, Different Implementation.

Even though I like it as a UNIX, too. 🙂

re OpenBSD security model, MAC, etc.

Ok, the most basic security model is dividing a system into subjects (active) and objects (passive/resources) with rules on what can access what. There are a number of security models that are proven to protect desktops/servers to varying degrees. MLS (eg GEMSOS) implements the military’s security model for protecting secrecy; Biba (eg Windows Integrity Control) prevents integrity violations across layers; system-level ACL’s offer fine-grained protection without context; Domain Type Enforcement (eg LOCK, SELinux) can do most of the above with context; the capability model can enforce arbitrary policies on everything in the system with the modeling ease of OOP. DTE and capabilities are the most powerful models, with also the most efficiency overhead. Either could be used, and both were, in exemplar secure systems.
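To make one of those models concrete, here is a toy Biba-style "no write up" check (an illustration of the model only, not any real OS implementation):

    LEVELS = {"user": 0, "system": 1, "kernel": 2}

    def write_allowed(subject, obj):
        # A subject may write only to objects at its own integrity level or
        # below; low-integrity code can never modify high-integrity objects.
        return LEVELS[subject] >= LEVELS[obj]

    print(write_allowed("user", "kernel"))    # False: no write up
    print(write_allowed("kernel", "user"))    # True: writing down is allowed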

OpenBSD, based on my limited knowledge, uses several security models: 2-level Biba via kernel & user mode protection of the MMU; discretionary access control; and whatever chroot is considered. The huge amount of code in kernel mode and the number of ways to interact with it mean the Biba protection isn’t trustworthy. The discretionary access control model is known to be so weak (see Windows CVE’s) that better ones were invented to deal with it. chroot depends on the Biba-protected TCB and inherits its risk, as it might be bypassed. So, OpenBSD’s security model sucks despite them implementing it more strongly than most UNIX’s.

Note: NSA’s involvement in SELinux is probably not an issue. The Type Enforcement technology was developed over many projects with microkernels and much stronger security for their use. Got adopted commercially at SCC & in McAfee Sidewinder firewall. SELinux was a prototype implementation of that “Flask architecture” on Linux to encourage development of CMW-like features and came with a warning that it alone didn’t make anything secure. Pretty above board to me. I argued for a strong re-implementation of it to medium-high assurance, though, to prevent vulnerabilities. It wasn’t done & vulnerabilities followed…

Alternative

So, let’s just say hypothetically we’re wanting a secure UNIX. The first thing we do is search for information on past attempts to (a) make highly secure computers and (b) secure UNIX itself. Both lead to the same assurance techniques (eg Orange Book B3/A1) I mentioned before. We would’ve discovered things like UCLA Secure UNIX, Trusted Xenix, LOCK/IX, the Trusted Mach + UNIX hybrids, and so on. They’d have shown us to keep kernel code minimal, be careful on interfaces, use more than two rings of protection, put just security-critical code in kernel mode, modify API’s to remove their risks, eliminate the setuid issue, ensure an unspoofable trusted path, use a language subset to eliminate some coding vulnerabilities, suppress covert channels, and so on. These were all common in secure UNIX designs that were getting anywhere with reviewers and NSA pentesters.

So, then we’d start building this. Each API we’d implement to understand it, we’d analyse its success/failure states, we’d decide how to handle failures, we’d do a minimal covert channel analysis, and we’d implement it outside the kernel of course. The kernel itself would handle tasks that couldn’t be done anywhere else, manage their data, and wall itself off from outside influence as much as possible. The system would be carefully layered with little to no looping possible between layers. Inspections and a simplified coding style would show the absence of defects that plague other systems. Proven methods such as bounds-checked arrays, segments, rings, reverse stacks, Xenix’s setuid trick, and others would be used where possible to eliminate entire classes of attack. The result would have such a strong design and assurance argument that it would get B2/EAL5+ at minimum, eclipsing all but two of the competition.

The above isn’t the path OpenBSD or any NIX-like OS’s took. OpenBSD team, particularly, could’ve achieved it given the time they put into quality. Instead, they began and continue an uphill battle to achieve good security engineering results while applying few of its principles. Thing is, if they built a CMW or MILS system, I’d probably still be using it and its quality would be giving the proprietary vendors a beating. Instead, they built another insecure BSD with better bug hunting and countermeasures to some UNIX issues. Everyone’s loss…

re Correct by Construction

What you read about Eschersoft’s “Correct by Construction” process doesn’t apply to Praxis’s. I think Praxis originated the Correct by Construction label for their specific software process. Confusingly, others in the niche use it as a term for the whole category of processes that all aim to get things right the first time. The one you cited has achieved some decent results. Yet, Praxis’s method is here. I think you’ll find their overall process to be quite thorough. They also gripe about unit testing by arguing (correctly) that it’s often a waste of effort given the nature of most errors. They focus on eliminating interface errors while semi-automating test generation for test cases that make sense. Margaret Hamilton took a similar approach for Apollo software and her team’s stuff worked flawlessly.

Far as its defect count, those are representative of the total number of flaws found since the system was delivered to the customer. Their process rarely produces any significant flaw. That a system runs flawlessly or at least according to spec every day of every year since its delivery says plenty about its quality. That they regularly delivered such results says plenty about the effectiveness of their methodology. It’s why I brought it up although Fagan Inspections and Cleanroom are easier to do for a BSD team.

re real world

That’s not the real world: that’s the majority of software projects or companies. They are (a) not knowledgeable about this stuff, (b) not equipped with resources to use it, and (c) rarely even care to attempt such a thing. The results reflect their market, management, and/or individual attitudes. The companies that aim for quality, predictability, and security are delivering it in spades to their customers. Examples off the top of my head include the DO-178B companies, Esterel, AdaCore, Altran/Praxis, iMatix, Green Hills, many smartcard vendors, all Type 1 vendors, the Cleanroom companies that warrantied code, a local one that does too, and quite a few more. Each of these invest in system engineering with strong techniques for correctness and/or security while making a profit selling it.

So, in the real world, these things exist and are applied by wise companies. Not all of them can do it. Not all should. But OSS projects aiming for high security have every reason to do at least as much as proprietary companies did. Proprietary companies that got results, too. Meanwhile, the majority of the market and OpenBSD’s architects continue down the paths that got NSA results. The SIGINT side, that is. 😉

Figureitout June 14, 2015 8:09 PM

82de478ea93bdd87
–While OpenBSD is up there and I’m using it as a stepping stone to something perhaps better, you sound like a fanboy. Rootkits below the OS don’t care what your OS is; so it works until then, and it’s a bad feeling when you can’t find it or get rid of it. You mentioned unemployment (apply yourself…) but Windows still reigns supreme b/c you can get hella “real” work done on it (not reconfiguring stuff or opening a terminal every time); aka the analog/hardware engineers that don’t screw w/ IDE’s and just want PSPICE to run. Lots of software I need mostly runs on Windows (ham radio software and others for RF chips/boards). Main thing is the Windows terminal sucks, not fun at all; Unix reigns supreme there. Raspberry Pi has made huge leaps (and Ubuntu, and most especially Debian and all its derivatives (ahem Kali and Tails…)) in terms of “usefulness” and getting stuff up and running quickly.

I don’t really get why OpenBSD want you to connect so bad to a bunch of sketchy http mirrors during install, any reason for that?

Justin June 14, 2015 9:16 PM

@ Nick P

Thanks for the explanation. I think there is a lot to your arguments. I appreciate your bringing them out in such detail for me and addressing my points. I am starting to see more where you are coming from now. In particular:

OpenBSD aims at being a non-mainstream OS that supposedly makes better decisions. It does in small, tactical ways. Yet, the overall big picture is just as bad: monolithic design with slow IPC, poor decomposition, tons of overprivileged code, no POLA, and an inherently insecure API. Largely SSDI: Same Shit, Different Implementation.

I totally agree with all that. I like OpenBSD for what it is, but of course, let’s not pretend it to be something that it is not. There are a lot of things about Unix that should be done differently if one had it to do over, but OpenBSD does not reinvent Unix. I have no affiliation with OpenBSD. Just a sometime user and fan, always open to alternatives and other ideas.

The companies that aim for quality, predictability, and security are delivering it in spades to their customers.

I don’t really have a way to evaluate that statement, because to the extent that there are companies really doing that, they are concentrating more on delivering to their customers than on marketing their name to the general public. Such companies seem to have a very small web footprint, and their customers don’t seem to be the type to talk very much, either. That’s partly why I am so curious about your posts on the subject.

@ Figureitout

I don’t really get why OpenBSD wants you to connect so badly to a bunch of sketchy HTTP mirrors during install; any reason for that?

Redundancy. I don’t know. A lot of Linux distributions do that, too. Should it matter if you verify checksums?

Figureitout June 14, 2015 11:46 PM

Justin
–With a lot of Linux distros I can just download an ISO and check checksums (even though we know attacks have spoofed bank login websites, so why couldn’t checksums be false and their sites be spoofed?). With OpenBSD I do that nervously, and then I have to go download the rest of the parts from some HTTP mirrors in Hungary (why should I trust a site here, or anywhere?). I’ve done otherwise: downloaded them on a USB stick, then installed after searching just for the file name, not the file path (I should order the CD, but mail interdiction is an actual concern for me); but they really encourage you to do all this on a multitude of HTTP sites that just look like typical dead, hacked sites.

Wael June 15, 2015 1:13 AM

@Figureitout, @Justin,

But FreeBSD already uses a checksum, MD5 … I think OpenBSD does the same.

And security advisories are signed with a PGP key; for example, here is the advisory for the TLS security “issue”. Remember that one?

With FreeBSD you have several options of obtaining the OS: you can get a minimal boot disk, an ISO image (source and binaries), or order the physical DVD for a fee. How else would you get the OS?

Justin June 15, 2015 1:16 AM

@ Figureitout

You can just download an ISO for OpenBSD: say,

ftp://ftp.openbsd.org/pub/OpenBSD/5.7/amd64/install57.iso

Verify the signed SHA256 hash with the signify(1) utility.

When it installs, you have the option of choosing to fetch the files from the CD itself rather than a mirror. Even if you install, say from “floppy57.fs”, and have to download the install files from a “sketchy” mirror, I believe those files will still be verified. One of the developers has a write-up of it here.

Clive Robinson June 15, 2015 1:49 AM

@ Justin, Wael,

As @Figureitout has indicated in the past, just trying to get anything across the net or by mail order gives an opportunity not just for interception but for identification as a potential person of interest.

Further, it is known that creating CDs with malware on them and substituting them in transit for the real ones is a “known” tactic of the likes of the FEYES.

As for “secure hashes” and “signed code”, Stuxnet has shown that these are not an obstacle of much strength where the FEYES are concerned.

We have no idea, let alone any guarantee, that the likes of NSA or GCHQ “contractors” have not “black bagged” the OpenBSD signing key, or have not added malware at some point in the development chain so that rogue code gets injected and signed.

After all, if CAs, whose sole purpose in life is securing their own signing keys and their usage, have been repeatedly subverted, what hope is there for a group of geographically dispersed volunteers?

As I’ve pointed out before, the secure delivery of code has all the same issues as securely distributing KeyMat, and then some more on top…

Wael June 15, 2015 2:29 AM

@Clive Robinson, @all,

Just skimmed through the 46 pages of the Kaspersky report. The malware depended on several zero-day vulnerabilities, which they claim have since been patched. It also attempted to hide its signature by using varying compression and encryption algorithms for the payload. It used steganography for outbound traffic (appending exfiltrated information to pictures of various formats), and it used tunneling and port forwarding to disguise traffic as “legitimate”. The bulk of its manifestation was memory-resident; no HD persistence was used (HD persistence being something a static trusted-boot mechanism would flag). Sounds like a class III attacker …

I guess the best thing is to air gap sensitive equipment. And by air gap, I mean isolated from the surrounding environment.

The other day I noticed that notes I update on my iPhone were being synced to my iPad, even though I never opted to use iCloud. I noticed in the settings under my email account that “notes” was turned on. I don’t know the mechanism by which notes are synced, but apparently even turning it off may only mean it won’t be downloaded to my other device; turning it off may not stop notes from being uploaded to some server… I guess if that’s true for notes, then it applies to everything else… It’s reasonable to assume everything on the mobile device is backed up somewhere remote. I don’t feel like monitoring the traffic — too boring; I’ll just operate under that assumption.

Wael June 15, 2015 2:56 AM

Kaspersky says:

PERSISTENCE MECHANISM:
The Duqu 2.0 malware platform was designed in a way that survives almost exclusively in memory of the infected systems, without need for persistence. To achieve this, the attackers infect servers with high uptime and then re-infect any machines in the domain that get disinfected by reboots. Surviving exclusively in memory while running kernel level code through exploits is a testimony to the technical prowess of the group. In essence, the attackers were confident enough they can survive within an entire network of compromised computers without relying on any persistence mechanism at all.
The reason why there is no persistence with Duqu 2.0 is probably because the attackers wanted to stay under the radar as much as possible. Most modern anti-APT technologies can pinpoint anomalies on the disk, such as rare drivers, unsigned programs or maliciously-acting programs. Additionally, a system where the malware survives reboot can be imaged and then analyzed thoroughly at a later time. With Duqu 2.0, forensic analysis of infected systems is extremely difficult – one needs to grab memory snapshots of infected machines and then identify the infection in memory..

I would also speculate that an attack of this sort targets virtual machines as well! I would say the main reason, at a high level, is that the separation-of-domains principle isn’t implemented properly, which results in malware gaining access to kernel mode from a user-mode Word document or other apps. The Castle can be penetrated and the Prison can be escaped from … Of course, intrusion detection, updating mechanisms (MSI), and crypto operations were shot as well. On top of that, firewalls were fooled when traffic went through normally and “legitimately” open ports (80, 443, etc.).

Clive Robinson June 15, 2015 5:47 AM

@ Wael,

The Castle can be penetrated and the Prison can be escaped from …

If you remember back, one major design feature of the prison was that each jail/cell had an MMU, controlled by the hypervisor (not the cell CPU), as its door. The MMU was set so that the jail had only sufficient memory to run a small function and no more, so “no run of the castle”. Thus there would be insufficient room for malware to reside, and thus “nowhere to hide”…

Wael June 15, 2015 5:59 AM

@Clive Robinson,

No room for additional malware, but what about room for malware that replaces existing legitware?

Nick P June 15, 2015 6:36 AM

@ Justin, Figureitout, Wael, Clive

It seems Justin finally mentioned it and Clive beat me to it: the hashes/checksums don’t mean crap unless their source is verified. The OpenBSD team only recently added trusted distribution of their software via the signify program. Before that, you mainly got it from their (possibly compromised) website, (possibly compromised) mirrors, and spy agencies posing as mirrors. A lot of assurance there.

So, they did eliminate the need to rely on HTTPS to get the packages. Of course, you need to use HTTPS to get the first copy with signify in it, so there’s still a chicken-and-egg problem. They have some guidance on this on their signify page, but it’s still basically luck or tons of labor. And let’s hope a subversion didn’t get through, in which case they’re all compromised.

@ Wael, Clive

Isolation that fine-grained can basically only use segments, for performance reasons. I’m not sure we’ve discussed the most important point enough: pointer protection. The old hardware systems I’ve been citing do that, as do some new ones. Pointer abuse is at the center of all kinds of injection problems. ROP even uses existing code by abusing pointers to it. So, any architecture should consider paying a premium to (a) protect pointers or (b) detect violations of their anticipated usage. I know Clive mentions this in his Prison concept but implementation details matter a lot for performance. Many architectures were canned because their pointer mechanism had too much overhead.

Clive Robinson June 15, 2015 8:04 AM

@ Wael,

No room for additional malware, but what about room for malware that replaces existing legitware?

Again it was protected by a couple of the foundation ideas.

Firstly, the hypervisor CPU loads the cell CPU’s program function, which is in a particular form that makes a hash or similar check fast. Then, every so often during operation, the hypervisor asserts the cell CPU’s “halt” line, then checksums the fixed memory and range-checks the variables, including the cell CPU registers etc. If the checksum fails or the variables are out of range, the hypervisor keeps the halt line asserted and kicks it up to the next level. If they are OK, the hypervisor de-asserts the halt line and the cell CPU continues executing, unaware it was ever halted (there are real-world timing issues, but there are ways of dealing with these).

The second way is by “signature analysis”. All computer programs are deterministic, based on their instructions and data, thus well-formed sub-functions of any program are quite predictable in performance. Variance from the expected signature causes the hypervisor to assert the cell CPU’s halt line and do the first check.

It is because both tests are probabilistic in nature that I named it probabilistic security. Thus the more often you schedule the cell check, the less chance malware has to go undetected.

The important thing to realise is that the program fragments that run in each cell have to be “well formed” in certain respects, as getting that wrong not only reduces performance, it also reduces security. Importantly, most “parallel methods” produce suitably well-formed code that just requires minor memory rearrangement.
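For concreteness, here is a minimal C sketch of what one scheduled “cell check” could look like, assuming the hypervisor can drive the cell’s halt line and snapshot its registers/variables. Every function name, constant, and data layout below is hypothetical, purely to illustrate the halt/check/resume flow:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical range constraint on one variable or register. */
typedef struct { uint32_t min, max; } range_t;

/* Stand-ins for hardware control of the cell's halt line (illustrative stubs). */
static void cell_assert_halt(int cell)  { printf("cell %d halted\n", cell); }
static void cell_release_halt(int cell) { printf("cell %d resumed\n", cell); }
static void cell_escalate(int cell)     { printf("cell %d ESCALATED\n", cell); }

/* Cheap non-crypto checksum of the cell's fixed executable memory (FNV-1a, as an example). */
static uint32_t checksum(const uint8_t *p, size_t len)
{
    uint32_t h = 2166136261u;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

/* One scheduled, probabilistic check of a single cell. */
static void cell_check(int cell,
                       const uint8_t *exec_mem, size_t exec_len, uint32_t expected,
                       const uint32_t *vars, const range_t *ranges, size_t nvars)
{
    cell_assert_halt(cell);                      /* freeze the cell CPU */

    int ok = (checksum(exec_mem, exec_len) == expected);
    for (size_t i = 0; ok && i < nvars; i++)     /* registers/variables checked by range */
        ok = (vars[i] >= ranges[i].min && vars[i] <= ranges[i].max);

    if (ok)
        cell_release_halt(cell);                 /* cell resumes, unaware it was halted */
    else
        cell_escalate(cell);                     /* keep halt asserted, kick up a level */
}

int main(void)
{
    uint8_t  exec_mem[] = { 0x90, 0x90, 0xC3 };      /* toy "tasklet" image            */
    uint32_t vars[]     = { 7, 42 };                 /* toy register/variable snapshot */
    range_t  ranges[]   = { { 0, 10 }, { 0, 100 } }; /* their expected ranges          */

    uint32_t expected = checksum(exec_mem, sizeof exec_mem);
    cell_check(0, exec_mem, sizeof exec_mem, expected, vars, ranges, 2);
    return 0;
}
```

The more often such a check is scheduled, the smaller the window in which tampering can go unnoticed, which is the probabilistic part of the argument.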

@ Nick P, Wael,

Pointer abuse is at the center of all kinds of injection problems.

Yes, and one way to reduce such issues is via an MMU that keeps pointers in a fixed range. Another way, again with an MMU, is to use a sliding window, such that pointers are in effect removed from the control of the cell CPU and placed under that of the hypervisor. This second method requires some often odd-looking ways of using memory, but it can be quite efficient.

I know Clive mentions this in his Prison concept but implementation details matter *a lot* for performance. Many architectures were canned because their pointer mechanism had too much overhead

Performance is a relative measure, and thus, as with speed-versus-memory trade-offs, you have to decide what measure you wish to optimize.

Way back in the mists of time for most ICT folk, K&R realised that programmer time was way more expensive than CPU time, so they made certain choices which, whilst increasing CPU time, significantly eased complexity for programmers and thus at a stroke made them more productive. In more recent times the system performance bottlenecks have moved from being hardware-bound to OS-bound, hence one reason for the push for “direct access” to I/O hardware that we see, and its attendant issues. Oracle were one of the first to go down this route, by taking hard disk access out of the *nix kernel in various ways.

Specmanship was the rage of the 1980s and 1990s and was, at the end of the day, fairly pointless, due to raw system performance doubling about every year (remember, system performance is bound by memory and I/O as well as CPU). Often, in the length of time it took programmers to switch to a supposedly improved algorithm, the raw performance improvement had already solved the problem.

Today hardware cost is negligible compared to other costs, hence the push to multiple-CPU systems. Provided pointer use is bound in range to what a CPU can see local to it, issues such as cache misses that really cripple performance are avoided.

Thus, if software is designed to use only local pointers, not far pointers, or uses other hardware mechanisms (MMU-based sliding windows) to keep far pointers local to the CPU, pointer performance is often not an issue.

More interesting, perhaps, are the on-chip gains of segmented caches, shared memory, and Harvard RISC rather than von Neumann CISC CPUs. Software designed for traditional IA x86 systems will obviously behave like a dog on such a system; however, when designed for the architecture, the performance changes radically. The advantage of modern compilers is that they can take a big chunk of such issues away from the programmer. That said, part of the prison design was to not allow the application programmers to get even remotely close to the hardware; the system specialists who design and build the tasklet fragments that the application programmers “script” do that.

GeorgeL June 15, 2015 10:25 AM

@ Clive Robinson

While looking up “compilers” I encountered an entry on this blog from 2006. You had made some remarks on assembly-code watermarking in response to comments posted to “countering trusting trust.” Have your observations changed since then? It’s been a while…

Wael June 15, 2015 4:46 PM

@Clive Robinson, @Nick P,

Again it was protected by a couple of the foundation ideas.
Firstly, the hypervisor CPU loads the cell CPU’s program function, which is in a particular form that makes a hash or similar check fast.

So this is still vulnerable, just like hash-based verification as it’s currently implemented in any OS today, on any platform. What makes this any different?

Then, every so often during operation, the hypervisor asserts the cell CPU’s “halt” line, then checksums the fixed memory and range-checks the variables, including the cell CPU registers etc. If the checksum fails or the variables are out of range, the hypervisor keeps the halt line asserted and kicks it up to the next level. If they are OK, the hypervisor de-asserts the halt line and the cell CPU continues executing, unaware it was ever halted (there are real-world timing issues, but there are ways of dealing with these).

Okay, I get the hypervisor operation of halting the execution thread on a particular imprisoned CPU — this is the “Warden” in action. I understand how variables can be checked to see whether they are within boundaries. The part about checksumming fixed memory and CPU registers is, I think, a bit hard. These values aren’t known a priori and can take any value. There is no way to distinguish good hashes from bad hashes unless there is a reference used for comparison. I think we had this discussion a while back, and I took it that the reference is actually a “social” or “crowd” reference, meaning the given thread is redundantly run on several CPUs under the control of the hypervisor, which uses some sort of voting mechanism to decide what’s an acceptable hash and what isn’t. There may be other ways that you haven’t elaborated on.

The second way is by “signature analysis”. All computer programs are deterministic, based on their instructions and data, thus well-formed sub-functions of any program are quite predictable in performance. Variance from the expected signature causes the hypervisor to assert the cell CPU’s halt line and do the first check.

Signature-based analysis at the code-block level is a current research topic with early deployment. CodeDNA combines techniques used in genetics to detect various strains of known malware.

It is because both tests are probabilistic in nature that I named it probabilistic security. Thus the more often you schedule the cell check, the less chance malware has to go undetected.
The important thing to realise is that the program fragments that run in each cell have to be “well formed” in certain respects, as getting that wrong not only reduces performance, it also reduces security. Importantly, most “parallel methods” produce suitably well-formed code that just requires minor memory rearrangement.

I thought it was probabilistic for other reasons, such as the voting mechanism and the “probability” that a piece of code is actually malware. This explanation also makes sense.

@Nick P,

Pointer abuse is at the center of all kinds of injection problems…

Yes, but it’s not alone. Having a formidable barrier between different privilege levels contains the injection damage. Then again, pointer abuse is typically at the boundaries or beyond — not at the center 😉

Wael June 15, 2015 5:07 PM

Justin, Figureitout, Wael, Clive Robinson,

It seems Justin finally mentioned it and Clive beat me to it: the hashes/checksums don’t mean crap unless their source is verified.

Just go to different mirrors and make sure the Hash is consistent. Then make sure the hash of the files you downloaded matches.

Justin June 15, 2015 5:37 PM

@ Wael

Just go to different mirrors and make sure the Hash is consistent. Then make sure the hash of the files you downloaded matches.

That’s not good enough for the truly paranoid: if your own internet connection is compromised by a MITM, the checksum may be altered but consistent no matter which mirror you think you are getting it from.

I suppose you could go through TOR — try to download the (very small) checksum file from different mirrors through different TOR exit nodes, and verify that it does appear the same everywhere.

At some point an attacker has to create a make-believe world with the wrong checksum for those files, and maintain the consistency of that make-believe world in order to fool someone successfully. It’s not going to fool anyone who has sufficient independent means of verifying those checksums. Unless the whole world is fooled, and as Nick P suggests, OpenBSD is indeed compromised.
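For illustration only, a trivial C sketch of the final comparison step, assuming the (very small) checksum files have already been fetched over independent routes (different mirrors, different Tor circuits); the file names are invented:

```c
#include <stdio.h>
#include <string.h>

/* Compare checksum files fetched from several independent sources and
 * flag any disagreement between them. */
int main(int argc, char **argv)
{
    char first[8192], buf[8192];

    if (argc < 3) {
        fprintf(stderr, "usage: %s SHA256.copy1 SHA256.copy2 [...]\n", argv[0]);
        return 2;
    }

    for (int i = 1; i < argc; i++) {
        char *dst = (i == 1) ? first : buf;     /* keep the first copy as the reference */
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); return 2; }
        size_t n = fread(dst, 1, sizeof buf - 1, f);
        dst[n] = '\0';
        fclose(f);

        if (i > 1 && strcmp(first, buf) != 0) {
            printf("MISMATCH: %s differs from %s\n", argv[i], argv[1]);
            return 1;
        }
    }
    printf("all %d copies agree\n", argc - 1);
    return 0;
}
```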

Wael June 15, 2015 5:48 PM

@Justin,

Okay, go to different mirrors using different platforms on different days, locations, times, running different Operating Systems…

Justin June 15, 2015 6:35 PM

@Wael

Okay, go to different mirrors using different platforms on different days, locations, times, running different Operating Systems…

Yes, at some point you will have pretty good assurance that the checksum file you have downloaded is actually the one being published on the internet. Otherwise, if you find any discrepancies, it would be interesting to download the file corresponding to the wrong checksum, and compare it to the file with the right checksum. Do an installation, and compare it with a known good installation. What is different?

I myself have had trouble with something interfering with (blocking) my download of OpenBSD in the past, no matter where I tried to get it from. By patient trial and error, I found a mirror that offered a download by https with a self-signed certificate, and that seemed to work. Then the DSLAM in my area was upgraded, and apparently no more trouble since then …

Nick P June 15, 2015 9:30 PM

@ Justin

The companies do advertise, but only to likely customers. Altran is huge. Green Hills Software is a top embedded company whose marketing gets a bit over the top at times. A lot of companies doing things this way respond to surveys about the use of formal methods or low-defect processes with interesting results; that’s the only way I knew about them. Most of them do word of mouth, though. That’s how I did things. Although it reduces advertising costs, the real benefits are these: (a) competitors have a hard time outdoing you while having to carefully fish for details; (b) the impression of exclusivity, quality, and first-class support commands higher prices (and profits). So, it is really a niche market mostly dominated by small businesses along with a number of very large firms (esp. defense contractors). Yet, it’s there, and it happens in the real world.

I think their philosophy is something similar to what the OpenBSD team taught me. They say they build the software for themselves and people like them. They don’t really care about what other people want (esp. crud). They build the features they need, the way they want them built, at certain quality, portability, performance, and documentation levels. People who like that and are willing to make sacrifices choose OpenBSD. Likewise, companies willing to pay extra, ask for reasonable features, and move in a reasonable timeframe can get top-tier results from a niche of firms serving such companies. They make so much money doing this that they really don’t care about the mass market, and most are always hiring, albeit conservatively.

I hope that makes some sense. I really don’t have anything much more thorough, as that market is so scattered, isolated, and drowned out by the rest. This is my take from what I’ve pieced together on the market over the years.

@ Wael

re pointers

There’s already a formidable barrier between levels, with constant bypasses. As Schell & Karger found out, and later Hamilton & Praxis, most of the major errors in systems are interface errors. Components of different domains must interface to get the job done. This leaves plenty of potential for problems. It’s why even MILS had high-assurance message engines for inter-partition communication.

Anyway, regarding pointers, they underlie the most common things in the system: low-overhead argument passing, buffers, arrays, strings, and compound data structures. There’s an abuse for every one of these, and coincidentally the most common abuses are in these areas and others involving pointers. Security enforcement code often relies on correct pointer operation as well. So, protecting pointers is paramount. That capability-security people came up with all kinds of ways of doing this is their best contribution to the field.
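For illustration, a toy C sketch of the protected-pointer (“fat pointer” / capability) idea: the pointer travels with its bounds and rights, and every access is checked against them. Real capability machines enforce this in hardware; doing it in plain C, as below, is only a sketch of the concept:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A "fat pointer": the raw address travels with its bounds and rights,
 * so every dereference can be checked before it happens. */
typedef struct {
    uint8_t *base;
    size_t   len;
    bool     writable;
} cap_t;

/* Checked write through a capability; refuses out-of-bounds or read-only access.
 * On a capability CPU this check would be a hardware trap, not an if-statement. */
static bool cap_write(cap_t c, size_t off, const void *src, size_t n)
{
    if (!c.writable || off > c.len || n > c.len - off)
        return false;
    memcpy(c.base + off, src, n);
    return true;
}

int main(void)
{
    uint8_t buf[16] = { 0 };
    cap_t   c = { buf, sizeof buf, true };
    const char msg[] = "hello";

    printf("in-bounds write:  %s\n", cap_write(c, 0, msg, sizeof msg) ? "ok" : "blocked");
    printf("overflow attempt: %s\n", cap_write(c, 12, msg, sizeof msg) ? "ok" : "blocked");
    return 0;
}
```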

re mirrors

“Just go to different mirrors and make sure the Hash is consistent. Then make sure the hash of the files you downloaded matches.”

I did that. They even had the same name and hash. Yet, the Windows7Cracked.iso file I downloaded was infected with malware. It was like the attacker tricked a bunch of people into thinking it was real and they just uploaded it in return. 😛

@ Clive, Wael

The reason I try to ignore anything doing checks for probabilistic security is overhead. Check out how many cycles it takes to hash several bytes (i.e. a register). Assuming you do every register in parallel, that’s still plenty of cycles. Compare to these:

Tag on pointer – 1-2 cycles to ensure read-only & bounds-check

Tags on basic datatypes – around the same (parallel, simple hardware)

Segments – 2 cycles on Intel Atom

Paging – 8 cycles very basic MMU check

IBMON-cache control flow protection – 0.5% worst overhead on top of regular ops

Aegis hashing – Merkle check in max 160 cycles in hardware for 128-bit memory blocks on 130nm node

SecureMe Hashing – SHA-1 HMAC at max 80 cycles for 128-bit on 2GHz clock

Cryptopage hashing – max is 1520 cycles for 19 deep Merkle Tree

As you can see, the strong security methods protecting critical primitives have simple building blocks with low overhead. Every hashing scheme that’s been built is anywhere from 10-1000x slower, with only probabilistic security. That’s despite all the prototypes assuming top-rate performance at the processor level, while the simple mechanisms were built on machines with ’60s-’80s technology.

So, performance might not be as important as it once was. Yet, it seems a bit much to drop from strong, foundational security mechanisms to probabilistic ones and pay 10x more in overhead. The only time I’d consider it is for designs where nothing outside the SoC is trusted; then the crypto and hashing suddenly become necessary. Past that, I think the simple designs are better for widespread application and performance. Even microcontrollers might use one of the top few schemes.

Note: Although I think Clive’s design is less efficient for endpoints, it might offer more value than most in the cloud model. The measured resource usage, determinism, containment, flexibility, and so on match those needs. His model would also have almost no overhead compared to traditional hypervisors. It might be worth looking into applying it to those, to grid computing, and so on. There could be more potential waiting there, especially with these inherently virtualizable processors popping up.

Clive Robinson June 16, 2015 1:17 AM

@ Nick P, Wael,

The hash I’m talking about is not a crypto hash; it does not need to be. Further, the hashing is only used on the cell’s “executable memory”[1], not the cell memory used for variables or the cell CPU registers, as these have to be checked for range, not absolute value.

The simple non-hash case is that, as the hypervisor loads the tasklet into the cell CPU’s executable memory, it has (or could have) a copy of the tasklet in its own memory, against which it simply does a byte-for-byte comparison to perform the executable-memory part of the “cell check”.

The hash is used to reduce the amount of memory needed in the hypervisor for storing and checking the executable memory of the tasklet.

This non-crypto hash can be used because the machine code in the cell’s executable memory is derived from a higher-level language and thus contains considerable “known redundancy”[2]. What I’m currently looking at is whether it’s more effective for the hypervisor to be given the high-level code (rather than the executable code), which it then transcribes into machine code on the fly to be put in the cell’s executable memory, and whether the transcribing process is also fast enough to be used to check the cell’s executable memory, thus reducing the hypervisor’s memory overhead.

[1] I say “executable memory” rather than “program memory” as it has a stricter interpretation: it is static and constant and of a type that could be stored as a ROM image.

[2] The reason it is “known redundancy” is twofold. Firstly, as with any high-level to low-level transcoding, the aim of the higher-level language is to provide a known set of functions with minimal coding of the function identifier; this is the idea behind the likes of P-code and Java byte-code interpreters/compilers. Secondly, the transcoding can be done by replacing each byte code with a set string, or set series of strings, containing the CPU machine code. This is a fully deterministic process, so it will always produce a known translation.
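As a toy illustration of footnote [2], here is a C sketch in which a tiny, invented byte-code set is deterministically expanded from fixed machine-code templates. The same table lets a checker re-derive the cell’s executable image from the high-level code instead of storing a full reference copy; all opcodes and template bytes are made up for the example:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical byte codes and their fixed machine-code templates (invented values). */
typedef struct { uint8_t bytecode; const uint8_t *tmpl; size_t len; } template_t;

static const uint8_t T_LOAD[] = { 0x8B, 0x45, 0x08 };   /* "load argument" */
static const uint8_t T_ADD[]  = { 0x01, 0xC8 };         /* "add"           */
static const uint8_t T_RET[]  = { 0xC3 };               /* "return"        */

static const template_t table[] = {
    { 0x01, T_LOAD, sizeof T_LOAD },
    { 0x02, T_ADD,  sizeof T_ADD  },
    { 0x03, T_RET,  sizeof T_RET  },
};

/* Deterministically expand a byte-code program into machine code.
 * Returns the number of bytes emitted, or 0 on an unknown byte code or overflow. */
static size_t transcribe(const uint8_t *prog, size_t n, uint8_t *out, size_t cap)
{
    size_t used = 0;
    for (size_t i = 0; i < n; i++) {
        const template_t *t = NULL;
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
            if (table[j].bytecode == prog[i]) { t = &table[j]; break; }
        if (!t || used + t->len > cap)
            return 0;
        memcpy(out + used, t->tmpl, t->len);
        used += t->len;
    }
    return used;
}

int main(void)
{
    const uint8_t prog[] = { 0x01, 0x02, 0x03 };   /* toy tasklet: load, add, return */
    uint8_t cell_image[64], check_image[64];

    /* Emit the executable image once, as the hypervisor would at load time. */
    size_t len = transcribe(prog, sizeof prog, cell_image, sizeof cell_image);

    /* Later, re-derive it from the high-level code and compare, rather than
     * keeping a full reference copy of the machine code around. */
    size_t len2 = transcribe(prog, sizeof prog, check_image, sizeof check_image);
    int ok = (len != 0 && len == len2 && memcmp(cell_image, check_image, len) == 0);

    printf("executable image %s\n", ok ? "verified" : "MISMATCH");
    return 0;
}
```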

Wael June 16, 2015 3:59 AM

@Clive Robinson, @Nick P,

Further, the hashing is only used on the cell’s “executable memory”[1], not the cell memory used for variables or the cell CPU registers, as these have to be checked for range, not absolute value.

The simple non-hash case is that, as the hypervisor loads the tasklet into the cell CPU’s executable memory, it has (or could have) a copy of the tasklet in its own memory, against which it simply does a byte-for-byte comparison to perform the executable-memory part of the “cell check”.

Aha! Now that makes sense. Performance is not a concern for me at this point. It can be optimized later …

Wael June 16, 2015 11:49 AM

@Nick P,

There’s already a formidable barrier between levels, with constant bypasses. As Schell & Karger found out, and later Hamilton & Praxis, most of the major errors in systems are interface errors. Components of different domains must interface to get the job done. This leaves plenty of potential for problems. It’s why even MILS had high-assurance message engines for inter-partition communication.

Agreed, but I included the interface in the “barrier”, so the barrier is still not formidable. I’m starting to think that two domains aren’t sufficient (user mode / kernel or supervisor mode). Most operating systems use two rings even if the underlying HW supports more; for example, on Intel only Ring 0 and Ring 3 are used. The addition of HW-supported virtualization effectively adds another layer of privilege levels; for example, on an ARM device with TrustZone extensions there is the Normal World (which consists of user mode and supervisor mode) and then there is the Secure World: more or less a virtualization component with a separate microkernel, exclusive interrupt bits, and other HW capabilities. The interface between the Secure World and the Normal World has some weaknesses, and that leads me to think a few more layers of privilege levels are in order — something Multics did (8 rings, in the Wiki link above).

Re pointers, yea, sounds right 🙂

Nick P June 16, 2015 12:30 PM

@ Wael

Yes, the ring model isn’t good enough. The B3/A1-class systems that used it in the past used all the hardware rings, with only the security kernel in Ring 0 (see here, esp. SCOMP). Most also supplemented rings with segments to enforce more fine-grained protection on memory and storage. My gripe was that plenty of code sat in Rings 1 and 2, with potential for interface attacks. Breaking that up enough would cause too much overhead on such processors.

The tagged, especially type-safe, systems used type checks at the data and function-call levels to enforce POLA throughout. They ranged from checking that a scalar value had the right type all the way to checking the types of every argument in a function call. The capability systems used protected pointers (capabilities) for access control; the mechanism was flexible enough to do fine-grained POLA, control flow, MLS, and so on, with varying performance trade-offs. IBM mainframes, PA-RISC, and Itanium have memory keys that restrict which processes access which memory, albeit with a low number held in the hardware cache. A recent category of machines uses hardware-supported encryption and hashing to protect a system per memory page.

So, I’d suggest you ignore the ring model as much as possible in favor of the other models. I mean, use it where you have to, for sure. Otherwise, it seems the tagged, capability, stronger-CFI, and crypto architectures are showing the most promise for fine-grained protection with a reasonable performance hit. Work also continues on the popular hypervisor model, with interesting results, but the efforts are all scattered rather than converging. You might be able to use one TrustZone solution for one problem but not the others. Integration is usually easier than solving the hardest design problems, though, so at least they’re working on the hard stuff. 😉

Rodger Dodger June 16, 2015 2:37 PM

@Hasboo-rah

A hearty welcome to the two unctuous newcomers Roger Dodger & anotherJustin! Extra international goodwill for oleaginous flattery and double overtime on the sabbath! ‘I do not want to see Israel take much blame here, last thing they need is their foaming at the mouth critics get another morsel.’

Oh, gollee gosh, gee whiz. You got me, you clever man. I admit it, I am actually a Mossad counter-intelligence agent!

Darn it. I just hate it when you supergeniuses blow my cover, which seems to be so very often.

And what a valuable covert operation you just exposed, too! Why, there were millions of dollars invested in this.

Heck, I spent all these years building up an american legend, learning how to speak american, I have an american family (supposedly), who are Protestants (supposedly), work on American soil (true story)… and here I go and post on some forum somewhere and completely blow my cover, because I am outsmarted by my willingness to engage in a little pro-Israel propaganda!

Gosh all those decades, all those people involved, all that money, all those resources to build up such an extensive, multi-level cover dating back to school times… and it is all exposed because of some clever armchair wannabe counterintelligence officer on a forum!

Gollee gosh, gee whiz.

If it was not for you guys, I would have gotten away with it too! You really have done your homework!

ROFL.

Figureitout June 17, 2015 10:28 PM

Wael
How else would you get the OS?
–I don’t know, probably the usual downloading the ISO from a hacked site, on my hacked router and modem, and burn an install disk w/ a program I didn’t write nor read full source on my infected PC. All I know is I don’t trust it from the start and I feel like punching a wall now.

Justin
–I DO NOT trust plain FTP anymore, this is what I’m saying. HTTP, FTP protocols are pretty nice in a fairy land; but not on the internet of today (slowly but surely turning to sh*t).

W // CR // NP // J RE: pointers
–Came across an error message today and started reading up on something that I’ve definitely used but wasn’t aware of as a concept: far pointers ( https://en.wikipedia.org/wiki/Far_pointer ). Strikes me as an immediate security threat. That led to the “A20 line” wiki ( https://en.wikipedia.org/wiki/A20_line )… dude, it’s a fricking security vulnerability, or it just feels wrong!

Referenced the last way, an increase of one in the offset yields F800:8000, which is a proper address for the processor, but since it translates to the physical address 0x00100000 (the first byte over 1 MB) the processor would need another address-line to actually access this byte. Since such a line doesn’t exist on the 8086 line of processors, the 21st bit above, while set, gets dropped, causing the address F800:8000 to “wrap around” and to actually point to the physical address 0x0000000.

[…]

The 80286 had a bug where it failed to force the A20 line to zero in real mode. Due to this bug, the combination F800:8000 would no longer point to the physical address 0x0000000 but the correct address 0x00100000. As a result some DOS programs would no longer work. In order to remain compatible with these programs, IBM decided to fix the problem on the motherboard. This was accomplished by inserting a logic gate on the A20 line between the processor and system bus, which got named Gate-A20. Gate-A20 can be enabled or disabled by software to allow or prevent the address bus from receiving a signal from A20. It is set to non-passing for the execution of older programs which rely on the wrap-around. At boot time, the BIOS first enables Gate-A20 when counting and testing all of the system’s memory, and disables it before transferring control to the operating system.
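The wrap-around described in that quote is easy to check numerically. A small C sketch of standard real-mode address formation (physical = segment * 16 + offset) for F800:8000, with and without the 21st address line:

```c
#include <stdint.h>
#include <stdio.h>

/* Real-mode physical address: segment * 16 + offset.
 * For F800:8000 the result needs 21 bits, which the 8086 did not have. */
static uint32_t phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    uint32_t addr = phys(0xF800, 0x8000);

    printf("with A20 (80286+):  0x%08X\n", addr);            /* 0x00100000 */
    printf("8086 wrap (no A20): 0x%08X\n", addr & 0xFFFFF);  /* 0x00000000 */
    return 0;
}
```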

Wael June 17, 2015 11:41 PM

@Figureitout,

probably the usual downloading the ISO from a hacked site, on my hacked router and modem, and burn an install disk w/ a program I didn’t write nor read full source on my infected PC

It’s ok as long as you air gap your platform after OS installation. Good for some use cases, not good for use cases that require connectivity.

HTTP, FTP protocols are pretty nice in a fairy land; but not on the internet of today (slowly but surely turning to sh*t).

Oh, man! These protocols hit rock bottom long ago, then started digging and struck a crap well 😉

A20, HIMEM.sys, Expanded memory? Man, you bring back old memories! Get with the times dude! By the way, some of the hardware bugs from this era were useful, and were extensively used 😉

Figureitout June 18, 2015 12:42 AM

Wael
Maybe OK, but “fire-and-forget” malware (the most destructive/worthless kind) still ruins that. I’m planning an air-gapped PC as best I can (but I still want to transfer files to-an’-fro’ it via a data diode/guard-ish device; otherwise it’s worthless (no device is truly “air-gapped” these days or ever was anyway, just temporarily “offline”)). It’s annoying doing stuff on it though! Like getting the firmware source I need and compiling: what I was doing was a hot mess, and eventually it needed to compile w/ my USB stick in! Argh, delete delete.

And I’m w/ the times; I’ve just been looking into making my own bootloader (similar to TrueCrypt, locking it before the OS boots but adding some personal touches), maybe an OS (I’d just want to add lots of programs, like modified live CDs) for x86 PCs. If things keep getting worse and worse, though, maybe not (the PC I was going to do it on just died again and I don’t want to save it lol, got enough PCs to save 🙁 ).

Leslie Fish June 18, 2015 2:41 AM

Why are you assuming this was an Israeli job? It sounds more like the sort of work the Saudi government would pay good money for. The combination of skill and sloppiness doesn’t sound like Israeli work either, which tends to be very meticulous. Is this just more of the Arab-inspired assumption of blame the Jews for everything, or is there some real evidence?

0x98483938 June 26, 2015 12:20 PM

There are probably dozens of memory-corruption vulnerabilities in AV HIPS and file-scanner engines. Also, most use keygens to encrypt data under TLS for phoning home; reverse those and you can MITM.

I find it humorous that people think the US government hires a bunch of half-wits for their hardware design and research..

I’ve assumed they have MMU backdoors and prime and EC methods for years.. Who needs RCE or clownish-logistics hardware and software kits if you have those?

Toshiba is coming out with quantum crypto in 2017.. OTP and air gap till then..
