TDSS Rootkit

There’s a new version:

The latest TDL-4 version of the rootkit, which is used as a persistent backdoor to install other types of malware, infected 4.52 million machines in the first three months of this year, according to a detailed technical analysis published Wednesday by antivirus firm Kaspersky Lab. Almost a third of the compromised machines were located in the United States. With successful attacks on US-based PCs fetching premium fees, those behind the infections likely earned $250,000 on that demographic alone.

TDL-4 is endowed with an array of improvements over TDL-3 and previous versions of the rootkit, which is also known as Alureon or just TDL. As previously reported, it is now able to infect 64-bit versions of Windows by bypassing the OS’s kernel mode code signing policy, which was designed to allow drivers to be installed only when they have been digitally signed by a trusted source. Its ability to create ad-hoc DHCP servers on networks also gives the latest version new propagation powers.

Posted on July 1, 2011 at 12:08 PM • 108 Comments

Comments

al July 1, 2011 12:51 PM

hmmm… The Register article states that:

“The first is by infecting removable media drives with a file that gets executed each time a computer connects to the device.”

Brings us to that USB stick issue Bruce just blogged about.

al July 1, 2011 12:54 PM

…and this feature (also from The Register article) would probably spread the bot in places where an infected PC is sharing the network (Starbucks, open WiFi, etc.)…

“The second method is to spread over local area networks by creating a rogue DHCP server and waiting for attached machines to request an IP address. When the malware finds a request, it responds with a valid address on the LAN and an address to a malicious DNS server under the control of the rootkit authors. The DNS server then redirects the targeted machine to malicious webpages.”

Alan Kaminsky July 1, 2011 1:59 PM

From the linked article (http://www.securelist.com/en/analysis/204792180/TDL4_Top_Bot):

“When developing the kad.dll module for maintaining communication with the Kad network, code with a GPL license was used — this means that the authors [of TDL-4] are in violation of a licensing agreement.”

So there’s no need to fear! Richard Stallman will save us from this pernicious GPL-violating rootkit!

Timothy Keith July 1, 2011 4:13 PM

“How does one detect and remove this?”
-foosion

Pretty much the normal ways one detects malware… if you find most of your removal tools (AVs, etc.) are being killed when run, regardless of whether you're in safe mode or not, you're probably dealing with a ring0 rootkit like TDSS.

Kaspersky has a free tool called TDSSKiller that will remove the rootkit from the kernel, then you can use your tools to clean the infection.

Make sure you run scandisk afterwards; the several infections I've dealt with have corrupted parts of the filesystem to some degree… although this might be malware that simply uses TDSS, and not TDSS itself.

Your best and safest bet is probably to back up your files with a known safe bootable OS and do a complete reinstall.

tommy July 1, 2011 8:04 PM

Calling Nick P…. Calling Nick P…. The market demand for your eventual high-assurance hw and OS just grew a little more, and will continue to do so…

Apparently, my 32-bit XP is as safe as any Win7, since the new security features seem to be so easily defeated.

Perhaps the MBR should be hard-coded into firmware or hw, requiring some type of actual physical access or physical token to modify? Just a thought.

@ Timothy Keith:

Probably even safer to nuke the drive first, perhaps with Darik’s Boot And Nuke, or with a tool like Eraser that overwrites it many times with random data, before reformatting and reinstalling.
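For the curious, the core of such a nuke is tiny. A minimal Python sketch, assuming a Linux live environment where the target disk shows up as /dev/sdX (a placeholder device name; a typo here wipes the wrong disk, so triple-check it):

    import os

    DEVICE = "/dev/sdX"    # placeholder target; destructive and irreversible
    PASSES = 3             # tools like DBAN and Eraser default to multiple passes
    BLOCK = 1024 * 1024    # write in 1 MiB chunks

    fd = os.open(DEVICE, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)   # total bytes on the block device
    os.close(fd)

    for p in range(PASSES):
        fd = os.open(DEVICE, os.O_WRONLY)
        written = 0
        while written < size:
            # fresh random data each pass, covering sector 0 (MBR) onward
            written += os.write(fd, os.urandom(min(BLOCK, size - written)))
        os.fsync(fd)    # flush the pass to the platters before starting the next
        os.close(fd)
        print("pass", p + 1, "of", PASSES, "complete")

Unlike a quick format, this touches every sector from the MBR to the end of the drive, which is the whole point.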

Carl 'SAI' Mitchell July 1, 2011 9:29 PM

Never assume that a rootkit can be removed from an infected system. Use a LiveCD to scan for and remove such malware.

Nick P July 2, 2011 12:39 AM

@ tommy

“Calling Nick P…. Calling Nick P…. The market demand for your eventual high-assurance hw and OS just grew a little more, and will continue to do so… ”

It goes up and it goes down. I've decided, though, that I'm tired of catering to those who don't look after themselves. I was seriously considering leaving IT security, but it's my passion & I have too much vested in it. So, I figure I'll take a Theo de Raadt stance (paraphrased in my words): "we don't add those features or do it that way because we don't build it for you. we build it for us. we want portable, reliable, clean, secure code. anything else takes a back seat." Might be a better way to go. 😉

“Perhaps the MBR should be hard-coded into firmware or hw, requiring some type of actual physical access or physical token to modify? Just a thought. ”

Nah. MBR protection is already built into TPM schemes and even TrueCrypt, as far as I'm aware. The idea is that the peripheral components, like hard disks, should be considered untrusted. The best method uses an embedded private key to sign the clean version of the software. So, when it's loaded (or before), it can be hashed & checked. If the check passes, the firmware passes execution control over to it.

I often apply this to the firmware in my designs. The idea is that a small, high assurance component wouldn't need to be fixed (hopefully). This does nothing but load the firmware and verify its signature. Then, the firmware would do self-tests of the hardware, load/verify the software, and give it control. This means that only a tiny piece of functionality truly needs to be trusted and architected with the highest assurance. The rest can be gradually built on and improved via patches/updates if necessary.
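To make that concrete, here's a minimal sketch of the load-and-verify step in Python, using the `cryptography` package's Ed25519 support as a stand-in for the firmware's crypto routine. In a real design only the public key would be present, burned into the immutable boot ROM; everything here is illustrative, not any real product's code:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Generated here only to keep the sketch self-contained; the private key
    # would live with the vendor, the public key in the boot ROM.
    vendor_priv = Ed25519PrivateKey.generate()
    vendor_pub = vendor_priv.public_key()

    firmware_image = b"\x7fELF...next-stage firmware blob"   # stand-in payload
    signature = vendor_priv.sign(firmware_image)             # done at build time

    def boot(image: bytes, sig: bytes) -> None:
        """The tiny trusted loader: verify first, only then hand over control."""
        try:
            vendor_pub.verify(sig, image)   # raises InvalidSignature on mismatch
        except InvalidSignature:
            raise SystemExit("refusing to boot: firmware signature check failed")
        print("signature OK, transferring control to firmware")

    boot(firmware_image, signature)             # boots
    boot(firmware_image + b"\x90", signature)   # one tampered byte: refuses

The point is how little code sits in the must-be-perfect layer: one verify, one jump.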

This rootkit changes nothing. It doesn’t really change anything in any way so far as security issues go. The architectures and principles that lead to trustworthy devices still defeat this rootkit. The “penetrate and patch”, bloatware approach fails even more. I have to say that I agree with that minority of security professionals: I loved LulzSec and miss them. They, and people like them, showed just how pathetically weak the current security approach is. They gave more robust solutions some free advertising by making many lay people ask their geek friends: “How could anyone avoid this stuff?” The answers vary, but probably made us all safer. 😉

mesrik July 2, 2011 3:30 AM

FYI,

At the network edge, DHCP snooping and DHCP option 82 have been available on managed switches for many moons (4-5 yrs) already.

If there are still sites which don't block DHCP servers on unauthorized ports, you should definitely expect a good explanation from your network management, and accept nothing short of "yes sir, we'll do it right away!"

I also recommend enabling DAI (Dynamic ARP Inspection) on Cisco switches and ARP protection on HP switches. Both allow traffic on a port only from DHCP-assigned addresses.

Port security and reasonable MAC-address limits should also be enabled everywhere unless absolutely not possible.

Those are the most important L2-level protections available and in use today.
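If you're on a LAN you don't manage (the Starbucks case above), you can also hunt for rogue DHCP servers from a host. A rough sketch using scapy, run as root; the interface selection is an assumption you'd adjust:

    from scapy.all import (BOOTP, DHCP, IP, UDP, Ether, conf,
                           get_if_hwaddr, srp)

    iface = conf.iface                 # adjust to the interface you're checking
    mac = get_if_hwaddr(iface)
    discover = (
        Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")), xid=0x5eed)
        / DHCP(options=[("message-type", "discover"), "end"])
    )
    # multi=True keeps listening after the first reply, so every DHCP server
    # on the segment shows up -- more than one answer is a red flag.
    answered, _ = srp(discover, iface=iface, multi=True, timeout=5, verbose=False)
    for _, offer in answered:
        print("DHCP offer from", offer[IP].src)

Any offer pointing you at an unexpected DNS server deserves a hard look, since that is exactly the handle this rootkit uses.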

A lot of problems can be avoided by proper network equipment configuration, and there is no real good reason not to use it.

Only if you run with a zero budget and haven't been able to purchase new gear in 5 years are you mostly out of luck.

But then you should reconsider whether it is worth remaining vulnerable, as these kinds of issues can be avoided by upgrading those old switches. Prices of proper managed switches are very reasonable today compared to what they were 5-10 years ago.

Cheers,

tommy July 2, 2011 4:01 AM

@ Timothy Keith:

I read somewhere that some malware, especially in the MBR, can survive a reformat. Don't remember where; didn't dig into the details; perhaps I hallucinated it. Hence the step of nuking before reformatting, not for privacy, but to be sure the malware is truly off of every sector of the drive.

@ Nick P.: Can you confirm or deny the above vague memory from some years back, about malware surviving reformatting?

“portable, reliable, clean, secure code. anything else takes a back seat.”

Works for me. Let me know when you have something ready to test. Will gladly give it a test drive, although it would still have to run on my x86 machines, unless you’ll lend the machine, too. 😉

As usual, you’re way ahead of the crowd with your designs for the single HA module building a chain of trust. The issue here was MS’s claim that only drivers with digital signatures from trusted authorities could be loaded. And that just went down the drain. They’re trying to lock it from above or from the side; you’re locking it from below. Much better.

I know the “increased demand” was a bit of an exaggeration, but as each new attempt to patch at the OS or kernel level or MBR level fails (ASLR, anyone?), it adds to the growing body of evidence that your path is probably the only hope.

mesrik July 2, 2011 4:18 AM

One more comment, about MBR & OS-bootsector infecting rootkits.

I believe that a slightly twisted PC boot system should be enough to circumvent these kinds of problems.

First, the PC should always boot from an attached read-only flash drive (microSD, SDHC, …) unless you hold down some sufficiently awkward key combination (A-F-H-L); if the flash drive doesn't exist, it should fail to boot, showing a kind message explaining why.

Now, if you press that combination you can boot directly from HD, CD, DVD, USB, NET, … etc.

But the whole point is that pressing those keys every time is tedious, and booting from that microSD, which is prepared before the OS install, would enable many useful features; and since it must be hardware write-protected manually, it won't easily be tampered with.

On that microSD you would have an advanced mini OS (EFI, kind of) which would then be able to verify the boot sectors of the chain-booted OSs before launching them, set up a fully encrypted HD, etc. It could have the capability to contact the OS maker's support site and verify the system even deeper.
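The verification step itself could be dead simple. A sketch in Python, assuming the mini OS keeps a known-good SHA-256 of the machine's boot sector on the write-protected card (device and file paths are placeholders):

    import hashlib

    DISK = "/dev/sda"                     # the chain-boot target (placeholder)
    BASELINE = "/mnt/sdcard/mbr.sha256"   # recorded once, right after OS install

    def mbr_hash() -> str:
        with open(DISK, "rb") as disk:
            return hashlib.sha256(disk.read(512)).hexdigest()   # sector 0 only

    def record_baseline() -> None:
        with open(BASELINE, "w") as f:    # card must be write-enabled for this
            f.write(mbr_hash())

    def verify() -> bool:
        with open(BASELINE) as f:
            ok = f.read().strip() == mbr_hash()
        print("MBR matches baseline" if ok else "MBR CHANGED: do not chain-boot")
        return ok

Extending it past sector 0 to the whole boot track or the OS boot sectors is just more reads and more hashes.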

So, what's the difference from TPM? A lot, of course, but most importantly it would be an open system: you could pop that microSD out of its slot at any time, make a copy and verify it. No risk in tweaking for those who like to do that. And it wouldn't be too expensive either, as it uses common, easily available technology already in wide use.

Sure, we would have some occasions where the dog ate that microSD and people would complain, but for the most part it would not be a problem.

Just thinking of it makes me wonder why we don't have these already.

Cheers,

Andy July 2, 2011 5:23 AM

Not at me but..”Can you confirm or deny the above vague memory from some years back, about malware surviving reformatting? ”
You can mark parts of the HDD as bad sectors and the HDD will skip them, even though they might not actually be bad; malware in the BIOS/graphics firmware or whatnot can later re-mark them as good, or (hypothetically) some logic code could be tripped up if computer code is placed on the HDD at the same place.
something like fff0h.

Clive Robinson July 2, 2011 8:47 AM

@ tommy,

“Can you confirm or deny the above vague memory from some years back, about malware surviving reformatting?”

It rather depends on what you mean by "reformatting"; there were at one time or another four names attached: "High level", "Low level", "Fast" and "Full".

Back when HDs had two ribbon cables (data and control) and MFM encoding was considered "neat", the "High Level format" was carried out by the OS and the "Low Level format" carried out by the HD hardware. Even the original "Shugart Associates System Interface" (SASI), which later became the "Shugart Compatible Systems Interface" (SCSI) and was standardized as the "Small Computer Systems Interface One" (SCSI-1), had a command for the drive to "self-format", which at the time was the only way to make a "bad block table". So any malware could hide from "High Level Formats" provided it could re-find itself after the process.

At one time a fast reformat simply meant resetting the first character of each entry in the File Allocation Table, in just the same way as 'del *.*'.

Often formatting a drive very specifically avoided the whole first cylinder (and still does), and you had to force rewriting of the MBR with the /MBR flag to FDISK, which would only overwrite the first 512-byte sector of cylinder 0.

Few reformats used to go over every byte of the hard drive and change them to known values (even now this is avoided due to the time involved).

And many of those that claim to still make the mistake of reading the disk size etc. back off the HD before doing it, or of ignoring cylinder 0.

Thus some software/malware knew/knows how to hide "beyond" the end of the drive or in cylinder 0.

The problem with increasing drive sizes got quite bad due to Microsoft's built-in assumptions about the number of heads/platters, cylinders and their sizes.

For those old enough: back in the late 1980s, "Compaq Computers" brought out a hard disk system that was larger than MS desktop OSs could handle, so they came up with a typical kludge to get around the problem. It was the knowledge of how to do this that a short while later gave rise to malware manipulating the HD drive size parameters.

Even NT 4 still suffered from built-in assumptions as to drive size for the boot disk, and who remembers the Linux issue with /boot having to be entirely inside the first 1024-cylinder limit for LILO to work?

Then there was the flaky issue of drive partitions and MS assumptions that they would always end at the end of a cylinder etc.

The list goes on some, and many of these can be used or have been used as refuges for malware in one way or another.

Simon Zerafa July 2, 2011 10:23 AM

Hi,

Firstly TDL4 is not that new; it’s been in the wild since July 2010 (v0.01).

The current v0.03 variants were first seen in April 2011.

This rootkit has been in the wild for about a year!

It's fairly easy to remove TDL4, TDL3 and the earlier TDSS rootkits using publicly available tools such as Kaspersky's TDSSKiller:

http://support.kaspersky.com/viruses/solutions?qid=208280684

It's updated regularly and does a good job of detecting and removing these rootkits.

Regards

Simon

Nick P July 2, 2011 1:29 PM

@ tommy

“Can you confirm or deny the above vague memory from some years back, about malware surviving reformatting? ”

What they said. Additionally, it depends on where the malware persists. If it's BIOS malware or in PCI device firmware, it will survive any activity on the hard disk.

“As usual, you’re way ahead of the crowd with your designs for the single HA module building a chain of trust. ”

I learned from the best. People like Paul Karger, Richard Kemmerer, Bell, and Cynthia Irvine paved the way for me by figuring out what works and doesn't. The best approaches came down to two things: keep the critical part as small & verified as possible; and have that part isolated by hardware, booted first, and mediating any accesses to information. This gave birth to security kernels. These paved the way for both separation kernels and modern OS security. I'm just reusing a similar scheme to start with something ultra-small and simple to boot a signed piece of code. The less the coders have to get right, the better things will be. (Esp at the firmware level)

“Let me know when you have something ready to test. Will gladly give it a test drive, although it would still have to run on my x86 machines, unless you’ll lend the machine, too. ;)”

A secure system? Running on x86? You’re kidding me right? If it’s a virtualization solution, then I might port to x86 just to make legacy code safer. Problem is x86 is complex, buggy, and totally controlled by Intel. I’m more likely to use a POWER- or MIPS-based design. China went with MIPS and their Loongson processors even emulate x86. If I do anything x86 in a robust design, it would likewise be emulation. But that’s so slow… Probably better to just port OpenOffice & Flash over to POWER/MIPS. 😉

But, sure, I’d lend you a system to play with. The first things I design would probably be appliances just to keep it small and get a reputation going for the organization. Maybe one EAL7 evaluation using our development process. Then, we’d do private evaluations from that point on. The first thing you get would probably be a firewall, VPN, transaction appliance, SCM system, or undefaceable web server. Those are on my todo list.

Nick P July 2, 2011 1:41 PM

@ Clive Robinson on next best design
(tommy you might want to look at it too)

You and I have been working on a HA TCB for a while now. Remember back when I was talking about trying to get a hold of the Secure Ada Target (ASOS) and LOCK specs? I had a design that combined the tagged memory of ASOS, the trusted coprocessor for enforcement tactic of LOCK, and a verified separation kernel with root of trust booting for software security. The design would be simple, run on a verified processor, and provide pervasive POLA. It combines aspects of your prison approach and my castle approach.

Guess what? I just found out a team independently came up with this and took it further to the point it doesn’t require a kernel and every operation on the system maintains POLA using a functional paradigm. (Kind of like your prison approach, eh?) It’s called TIARA. I want you to read through their presentation & paper and check it out. If it has no obvious design flaws, I might shift my design efforts to target that platform as my underlying TCB. And I’m not concerned with any hardware-level attacks or side channels in this analysis: it’s a separate security issue & we need to solve the software issue first to eliminate low hanging fruit.

TIARA Main Site
http://people.csail.mit.edu/hes/TIARA/

TIARA Technical Paper
http://people.csail.mit.edu/hes/TIARA/public-proposal.pdf

TIARA Overview Powerpoint
http://people.csail.mit.edu/hes/TIARA/overview-presentation.ppt

Optimus Crime July 2, 2011 6:56 PM

Prevention is better than cure. Operating system hardening. Daily use of user privileges rather than admin privileges. Running suspect files on virtual machines, and most important, COMMON SENSE!!! For example, a 2 MB executable file cannot be a 2-hour movie (hint: it's a F*cking malware!!)… but the antivirus said it was a clean file… ANTIVIRUS CAN BE EASILY BYPASSED.

PS: Deeply sorry about the caps.

Have a nice day.

tommy July 2, 2011 7:25 PM

@ Clive Robinson:

Would gladly have invited you to the party, but Nick P. was already on the thread, and I hadn't seen you around. As usual, you not only answered the question, but filled in some great bits of history as well, to give context. Please don't feel left out, old chum – excellent answer as always, and of course you have a blanket invitation to contribute any and all of your huge knowledge base AFAIC. 🙂

"Often formatting a drive very specifically avoided the whole first cylinder (and still does), and you had to force rewriting of the MBR with the /MBR flag to FDISK, which would only overwrite the first 512-byte sector of cylinder 0. Few reformats used to go over every byte of the hard drive and change them to known values (even now this is avoided due to the time involved). And many of those that claim to still make the mistake of reading the disk size etc. back off the HD before doing it, or of ignoring cylinder 0. Thus some software/malware knew/knows how to hide "beyond" the end of the drive or in cylinder 0."

Ah, so I didn’t hallucinate that. Malware can indeed survive most reformats, including, apparently, many still done today by “standard” methods.

If you nuke the drive with Darik's or Eraser (presumably running from a CD or from another machine to which the infected drive has been connected), they'll get every last "bit" of it, correct? (Sorry for the awful pun on "bit".) Acronis also has a Drive Cleanser tool that I've never used, but it apparently allows selecting the entire drive, which would presumably include the first cylinder and track, and lets you choose from their overwrite algorithms, import algorithms, or create your own. You can view the contents of the HD after the process is complete. Booting from the recovery CD that you already have eliminates the need for a second machine — and being a read-only CD, it's our old friend, the unchangeable Live CD (Linux kernel, in my case) that can't get infected itself. Anyone ever used this?

@ Nick P,:

"If it's BIOS malware or in PCI device firmware, it will survive any activity on the hard disk."

Understood. The issue in question was my suggestion to nuke the drive to get rid of specifically HD malware, rather than merely reformat it, and Timothy Keith’s reply that was more concerned with forensic drive content analysis than with eliminating malware. Clive confirmed my recollections very well; your work on the other issues of building BIOS and PCI trust is of course critical.

“A secure system? Running on x86? You’re kidding me right?”

Yes. Hence the 😉 at the end — guess that should have been after "run on my x86" so it would be clearer. My bad. You've told us repeatedly that x86 (and x64) are not only insecure, but insecurable. Sorry for the confusion. Would still love to test anything, including an x86-emulator, no matter how slow, or a prototype machine with your choice of CPU, etc. … btw, I wouldn't need a server, but it's probably quicker to design, build, and test the client side first anyway. If those all hit the mark (or get debugged completely), then move on to the server side — what do you think?

Re: TIARA: GMTA. Be honored.

@ Andy:

Clever attack! Never thought of that. Does anyone know of that being done in the wild? If I may take the liberty of clarifying Andy's ESL (no offense, Andy; I'll just do my best): The malware marks certain drive sectors as bad, but also stores some of itself there. User reformats drive, "full, low-level", or whatever is the most complete on Clive's list; reinstalls OS. Malware then removes the "bad" marking by some means, thus giving the falsely-reassured user the same infection as before the reformat. Everyone is invited to answer! 🙂

I admit that I haven't had to format a drive in ages. Last time my HD died, I went to the shop, got a new one, went home, booted the Acronis recovery CD, and in about 15 minutes the entire HD was painted to where it was yesterday (when the last incremental backup was made), including formatting, OS, MFT, contents and all. Much easier than reformat, reinstall, add all your apps, restore all your data, redo all your configs and tweaks… So I really can't remember the process – thank goodness!

@ Optimus Crime:

All Best Practices, of course. But what Nick P., Clive Robinson, and others have been discussing is creating systems that are >inherently secure<, rather than the band-aid approach that hasn't worked. As you said, AV can be bypassed; and the blog post showed that MS's kernel/driver protection has already been bypassed. You have a great day, too.

Andy July 2, 2011 8:13 PM

@tommy, About your live-CD security thing, have you thought about putting it on a stick of RAM? If it's in the first bay and can change the address mapping, it should be able to see everything that's on the computer, same level as the CPU maybe.

Most tech shops, or half of users, could install it in laptops and desktops themselves.

JJ July 2, 2011 9:22 PM

Dear all,

Two schoolmates and I just started a project for an external device as part of our university studies. This device (which we have named the surrogatus box) is supposed to run some flavor of Linux strictly from firmware and has multiple possible uses, such as:
A. running a scan on your PC from the device to determine malware infections
B. using the device as a proxy to surf the web from (through a remote control session)
C. using the device to trace the network traffic from your PC.

Plus some other future uses such as providing a proxy service to somebody living in a country behind a firewall.

Some of these configuration options are shown here:
http://blog.surrogat.us/post/2011/06/09/Some-configuration-examples

Since this place has a great collection of people with a good understanding of computer security, I thought I'd ask for feedback on what you think of a device like this.

Thanks,

JJ

RobertT July 2, 2011 9:33 PM

@Nick P
” Problem is x86 is complex, buggy, and totally controlled by Intel. I’m more likely to use a POWER- or MIPS-based design. …”

Have you given any thought to building a HA platform using GPUs (say from Nvidia)? There are certainly a lot of additional issues to be addressed, but this sort of hardware parallelism can be directly mapped to a separation kernel approach without the efficiency hit, inherent in a single-CPU separation kernel, that's needed to eliminate covert timing channels.

There are already companies building simulation boxes and high speed servers based on the GPU concept, so a HA platform would be able to leverage this infrastructure. I also think I could sell this concept to Nvidia and get them to incorporate trusted hardware concepts, because they need to add value to their offering and HA EAL6+ certification would open up some interesting markets for them.

Andy July 2, 2011 9:40 PM

@JJ, it would be good if you could plug a mouse and keyboard into the device and control the computer through it; any internet connections made when the mouse or keyboard hasn't sent any movements get blocked or delayed, or flagged if port 80 (dest) is open and there is traffic but the mouse hasn't been clicked, etc…

RobertT July 2, 2011 9:47 PM

@NickP
” Problem is x86 is complex, buggy, and totally controlled by Intel. I’m more likely to use a POWER- or MIPS-based design. …”

Just one more thought: MIPS is the dominant processor for TV and STB chipsets. Today the big design challenge for TV/STB is to fully integrate IPTV; this is not just displaying YouTube and Netflix, but rather involves integrating targeted IP advertising into a TV channel model.

This market requirement is creating a sort of parallel home computing platform that could benefit if it develops from scratch as a HA platform.

JJ July 2, 2011 9:50 PM

Hi Andy,

do you mean to control the user's main PC through the device? I must admit that is one aspect we had not thought about.

Thank you for bringing that up.

Besides that, we did think to have it so that the Linux running on the surrogatus box can be controlled directly by plugging a mouse and keyboard into the USB ports (actually, that is one of the reasons for so many of those ports).

Best regards,

JJ

Andy July 2, 2011 11:14 PM

@JJ, not so much controlled, just let it pass through it and have a sniffer attached. GPL for the sniffer code 🙂

Nick P July 3, 2011 12:02 AM

@ tommy

“Would still love to test anything, including an x86-emulator, no matter how slow, or a prototype machine with your choice of CPU, etc”

Well, this isn't one of my designs, but you might like to play with it. Check out VX32. It's a sandbox that uses Intel hardware to restrict untrusted binary code, dynamically rewrites it to remove unsafe instructions, and works on legacy OSs. I don't know about its quality: it's a "stable research prototype." But I like their scheme & I've thought of quite a few applications for it, especially using risky 3rd-party libraries in a medium assurance app.

VX32 Virtual Extension Environment
http://pdos.csail.mit.edu/~baford/vm/

“@ andy Clever attack! Never thought of that. Does anyone know of that being done in the wild?”

Yes. It's a well-known trick for hiding information on your hard disk in plain sight. I can't remember if it was first designed for viruses or stego, but I know forensics guys were taught to watch for that back when Win 2000 was new. Modern tools like EnCase might defeat that approach, might not. I didn't bother to take a chance trying it. 😉

Nick P July 3, 2011 12:09 AM

@ RobertT

“Have you given any thought to building a HA platform using GPU’s (say from Nvidia)?… this sort of hardware parallelism, can be directly mapped to a separation kernel approach without the efficiency hit”

I’ve thought about baking security into them, but I haven’t thought of them as the platform. Hmmm. Aren’t those processors much simpler, making it difficult to do general-purpose software on each one? (I’m thinking of CUDA in particular.) Well, if we could, then it might be a decent idea. A similar advantage is won by using a Cell processor. Those have many security features baked-in, including isolating the SPU’s and a TRNG. But the number of isolation cores was too limited for the apps I wanted to build. A GPU is significantly more, but can we decompose a common application amongst those cores?

“This market requirement is creating a sort of parallel home computing platform that could benefit if it develops from scratch as a HA platform. ”

“I also think I could sell this concept to Nvidia and get them to incorporate trusted hardware concepts, because they need to add value to their offering and HA EAL6+ certification would open up some interesting markets for them.”

These could make some good motivations if the aforementioned issues work out positively.

RonK July 3, 2011 1:16 AM

@ Alan Kaminsky

You missed the fine print. The rootkit authors offer to send you the full source code if you fill in a few personal details on their web page. 🙂

tommy July 3, 2011 1:44 AM

@ Clive and Andy:

First, an apology. In the refreshing flood of ideas here, I told Clive he didn't need to apologize for answering (he never does, d'oh!), but it was Andy who opened with "not at me, but…" and replied. Same thing, Andy. Anyone who can contribute is welcome. It's just that Nick P. and I have had a running discussion on another thread.

@ Them + Nick P.:

Another facepalm here. Clive had already answered my question about Andy's stealth-bad-sector HDD attack being used in the wild, by saying "many of these can be used or have been used as refuges for malware in one way or another." D'oh^2 here.

@ Andy:

"About your live-CD security thing, have you thought about putting it on a stick of RAM?"

Umm, I may be setting myself up for yet another facepalm, but I thought RAM was volatile, and you’d have to keep power to it constantly (or freeze-dry it – the “cold boot attack”), which would be impossible to do while plugging it into a bay? I mean, we ground ourselves, and everything…

@ All:

Andy’s link to the anti-forensics, even though Unix-only, was interesting enough just for the table of contents, though Clive and Nick say that this is old stuff. Which reinforces the idea of Your Humble Servant that nuking the drive with overwrites of all cylinders/heads/sectors/tracks/clusters/bits/label/packaging/warranty/receipt was about the only way to hope for a truly clean HDD before reinstalling.

Given the plunge in price of moderate-sized HDs these days, I'd probably just buy a new one, in a sealed package, from my local factory-authorized sales-and-repair facility, just as I did when the old one died… if that infection ever happens.

@ Nick P.:

“Check out VX32. It’s a sandbox that uses Intel hardware to restrict untrusted binary code…”

Did. This was in the second paragraph:

“Vx32 runs on unmodified x86 FreeBSD, Linux, and Mac OS X systems without special permissions, privileges, or kernel modules. It also runs on x86-64 Linux systems. Ports to x86-64 FreeBSD and Mac OS X should not be difficult. A port to Windows XP should also be possible.”

So I need to wait for that “port to Win XP”. In the meantime, no one ever responded to my suggestion at the USB-in-the-street thread,

http://www.schneier.com/blog/archives/2011/06/yet_another_peo.html#comments

that using Sandboxie (or any other good sandboxing or virtualizing solution) was a good-as-it-gets stop-gap measure to keep us safe on the Web, as well as from found USB drives, until such time as HA becomes the norm. Have you looked at Sandboxie, or do you just go straight to a complete VM? The latter is a little "heavy" for low-end home machines, and a bit complex for some users. If you don't have an opinion on Sandboxie, you might take a look at it. It solved the found-flash-drive problem for me – safely look for owner info without getting infected. It keeps my browsing safe, too, even when asked to go to some strange sites to do diagnostics. And it's here-and-now, light, small, and free-nagware or low cost.

Repeating the disclaimer there: I have no personal or financial connection to Sandboxie, and my experience is not a guarantee of results nor assumption of liability for your results. Choose your own virtualizing solution after careful investigation. http://www.sandboxie.com

And Nick,

WILL YOU PLEASE GET THE SCHNEIER BLOG BETA THREAD-BASED MODEL UP AND RUNNING, ALREADY???? >grin< .. it's starting to get really hard to follow so many great discussions on so many different threads, some gone from the home page, or about to go. 🙂

Andy July 3, 2011 2:43 AM

@tommy, “Umm, I may be setting myself up for yet another facepalm, but I thought RAM was volatile, and you’d have to keep power to it constantly (or freeze-dry it – the “cold boot attack”), which would be impossible to do while plugging it into a bay? I mean, we ground ourselves, and everything… “,
The chip isn't meant to be a RAM stick; it would be more of a processor with a small bit of storage. It just uses the first bay's I/O pins to control what the CPU sees and to input data, and changes in some way where the data gets stored in RAM (or modifies OS A20 (I need to look into that area a bit more)) to be able to intercept that info.

Clive Robinson July 3, 2011 3:47 AM

@ Andy (and others ;),

With regards to the Phrack Mag article, take careful note about the "proof of concept"…

What the article did not mention or deal with is ‘meta-evidence’ (which is evidence about evidence).

For instance, it talks about setting dirty inodes back to a virgin state, but says nothing about "packing" the result.

Back in the 1990s I started looking at what is now referred to by some, half-heartedly, as "forensic geology", in that you examine the way the files map in time to the inodes etc.

Put simply, if you have a directory entry you do not expect to find "virgin" entries in the middle of the inode list, only at the end, as their life cycle is 'virgin, used, dirty, re-used'; the system does not reset them to virgin. Thus seeing virgin entries anywhere other than at the end of the inode list is meta-evidence of file deletions, and as such can be used as secondary evidence. The solution is to "pack the list" by either moving valid inodes into the position of the inodes you wish to "vanish", or filling the inodes with harmless crud that looks like valid entries made by an editor or some such.
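To make the heuristic concrete, here is a toy version in Python, assuming you can dump each inode's state in on-disk order (the state names are just labels for the life cycle above):

    def suspicious_gaps(states: list[str]) -> list[int]:
        """Flag virgin inodes sitting before any later non-virgin inode.

        Life cycle is virgin -> used -> dirty -> re-used, and the system never
        resets an entry to virgin, so virgin entries belong only at the tail.
        """
        last_used = max((i for i, s in enumerate(states) if s != "virgin"),
                        default=-1)
        return [i for i, s in enumerate(states[:last_used]) if s == "virgin"]

    # inode 2 was wiped back to virgin -- meta-evidence of a deletion
    print(suspicious_gaps(["used", "dirty", "virgin", "used", "virgin"]))  # [2]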

However, even this has its problems, because an examination of inode usage against file meta-data such as creation dates can show blocks of inodes that are missing or out of order time-wise, which should throw up the question of why.

Then on top of this there are journaling file systems, which have a habit of putting time-identifiable cruft throughout the drive as files are added, and this in turn provides meta-evidence. Likewise we now have "file system snapshots" to deal with as well, and more recently "Flash HDs" with "wear-leveling".

These systems make "on the fly" anti-forensics very difficult at best; however, few forensic examiners have the time or knowledge to pursue meta-evidence, as it is not something the legal process really deals with yet (but it will, as evidenced by cases where the possession of encryption software is used as evidence of ill intent or guilt).

It is also the reason I have looked at things like “data shadows” for storing information, but this is a whole different game at the “spook-v-spook” level.

Clive Robinson July 3, 2011 4:17 AM

@ tommy,

No need to apologise for not inviting me to the party 8)

The reason I have not been around much is that I've partially succumbed to one of my myriad illnesses. I'm not a good patient; I resent being ill, and as a result I tend to overdo things, and my body gets its own back by cutting the oxygen supply off to my brain to make me take rest in the horizontal position (without the need of alcohol). Which this time thankfully meant I fell flat on my face in private, not public, otherwise they would have carted me off to hospital yet again for a week to top me up with somebody else's blood 8(

Anyway, enough of that; I've probably already put you and other readers off their next meal 😉 Back to the topic…

With regards to hiding stuff on a hard disk through a reformat, there is one issue people tend to forget about, and it's important. Data on a disk is just data; it's not malware or anything else. To become malware, the computer has to load it into its execution space either directly or through an interpreter (i.e. malware in Java byte code does not execute natively but through the byte code interpreter; the same with the old MS Word macro viruses etc.).

Now, if the computer is unaware of the "data" it will not be executed or even found, and the chances are it won't be, unless you make the computer aware in some way.

The simplest way with PC hard drives is to create a hidden partition at the end of the drive and put it there, then modify the MBR to make that partition bootable; this then loads the malware into memory and makes it 'visible' to the OS that gets loaded next.

The old way to do this was to make it look like a "device driver" installed during boot from the device (see the PCI hardware spec for this, or the original IBM PC I/O design). Aside from "I/O shims", the new funky way is to make the malware the base OS and run the user's OS in a virtual machine…

Then it really does not matter what anti-malware software they run, because it's all running in its own private world that it can't see out of, and there is no malware in that private world to be seen or removed etc…
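You can eyeball the MBR for that kind of trick yourself. A Python sketch that parses the four primary partition entries and flags room past the last partition where a hidden bootable partition could sit; run it against a raw disk image (the image name and sector count below are placeholders):

    import struct

    def partitions(mbr: bytes):
        """Yield (bootable, start_lba, sector_count) from the table at offset 446."""
        for i in range(4):
            entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
            start, count = struct.unpack_from("<II", entry, 8)
            if count:
                yield entry[0] == 0x80, start, count

    def check(path: str, disk_sectors: int) -> None:
        with open(path, "rb") as f:
            mbr = f.read(512)
        end = 0
        for bootable, start, count in partitions(mbr):
            print(f"partition at LBA {start}, {count} sectors, bootable={bootable}")
            end = max(end, start + count)
        if end < disk_sectors:
            print(f"{disk_sectors - end} sectors past the last partition -- "
                  f"room for a hidden partition")

    check("disk.img", 2048 * 1024)   # placeholder image and sector count

The same arithmetic shows the gap before the first partition, which is the cylinder 0 hiding place mentioned earlier.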

Clive Robinson July 3, 2011 5:13 AM

@ Nick P,

Thanks for the info on TIARA, I will look into it.

However, as you are probably aware, "tagged containers" go back a long way, and the ideas behind them can still be seen in modern hardware and OSs, where containers (i.e. 4K pages) of memory are tagged as Read Only, No eXecute etc. as well as being assigned to a process via the MMU page table.

Tagged containers have a number of issues which have meant they have been unpopular in the past.

The first of which is the container size: the minimum-sized container on a digital computer is the single bit; the next logical size after that is the width of the addressable memory, be it registers or RAM, in bytes or words of multiple bytes. The next logical-sized container is contiguous regions of memory aligned at suitable address boundaries (i.e. memory pages). Obviously the tag has to be a certain minimum size, so the larger the container size, the more efficient the tagging system is in memory usage, hence the modern page being 4K in size.

What is less obvious is that the tag is also a constraint on the system, in that its size dictates certain restrictions. For instance, if the tag contains bits for Read Only, Write Only, No Execute and one or more process IDs, then it is going to be quite a few bits in width. The bit width for the process ID obviously limits the number of processes that can be used on the CPU at any one time, and the translation between the tag ID and role ID opens up vulnerabilities of re-use (much like dangling pointers in a threaded environment).
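To put numbers on that constraint, here is a toy model in Python of word-granularity tagging with the permission bits and a process ID packed into each tag (field widths invented purely for illustration):

    from dataclasses import dataclass

    R, W = 1, 2          # permission bits; a real tag would carry more (X, etc.)
    PID_BITS = 5         # a 5-bit ID field caps the system at 32 live processes

    @dataclass
    class Word:
        value: int
        tag: int         # permission bits | (process_id << 2)

    class TaggedMemory:
        def __init__(self, size: int):
            self.cells = [Word(0, 0) for _ in range(size)]

        def _check(self, pid: int, addr: int, need: int) -> None:
            t = self.cells[addr].tag
            if (t >> 2) != pid or not (t & need):
                raise PermissionError(f"pid {pid} denied at address {addr}")

        def load(self, pid: int, addr: int) -> int:
            self._check(pid, addr, R)
            return self.cells[addr].value

        def store(self, pid: int, addr: int, value: int) -> None:
            self._check(pid, addr, W)
            self.cells[addr].value = value

    mem = TaggedMemory(16)
    mem.cells[0] = Word(42, (7 << 2) | R)   # word owned by pid 7, read-only
    print(mem.load(7, 0))                   # ok -> 42
    try:
        mem.store(7, 0, 1)                  # no write bit in the tag
    except PermissionError as e:
        print(e)

Every word pays for its tag, and widening the PID field to allow more processes widens every tag in memory, which is exactly the efficiency trade-off described above.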

This also gives rise to the issue of container assignment and re-use: on a single-CPU system it is the same CPU that, in different contexts, assigns the tags in a privileged context and triggers exceptions with the tags in a non-privileged context. Context switching should be atomic but rarely is, nor is it generally foolproof. To do it properly needs considerable silicon resources, which is why much of it is done in kernel memory tables, which are effectively vulnerable to anything the kernel trusts or cannot see (such as DMA or I/O).

Now, saying "no kernel" is a bit of a misnomer in many cases, as the bottom-end function of a kernel is context switching, and the only way to avoid this on a single CPU is to run everything in the same memory space cooperatively, which we know from MS DOS with Windows on top just does not work (which is why MS gave up on it as a design with 98/ME and moved to NT entirely with Win2K/XP).

However, even having a CPU run a single task generally does not obviate the issue, due to I/O, which generally requires a "task switch" from background to foreground for interrupt handling.

So doing it on the same CPU is almost guaranteed to leave a vulnerability that can be exploited somehow to escalate privilege, irrespective of the additional container tagging.

Which is why, for my "prison" concept, I went with the idea of multiple simple CPUs that had their resources controlled externally by a hypervisor. The "user" working CPUs never task switch, and all the I/O is done through memory buffers, access to which is mediated by the hypervisor (that is, each CPU has an MMU between it and the system memory, which it cannot see and which is controlled by the hypervisor; when it writes or requests, it gets "halted", the hypervisor makes the appropriate changes and then lets the CPU run again).

Now, this tagged container structure would be a good idea to add to a state-machine-style hypervisor controlling many slave (prisoner) CPUs, but not to a general-purpose CPU which switches between contexts both for privileged/unprivileged work and for I/O work.

I'm a great believer in KISS and separation, with clear, simple, atomic interfaces where all the states can be modelled unambiguously, especially where it minimises and simplifies the design and removes hardware that otherwise complicates a boundary security issue (i.e. a context switch for privilege / I/O operations).

Clive Robinson July 3, 2011 6:47 AM

@ Andy,

"The chip isn't meant to be a RAM stick; it would be more of a processor with a small bit of storage. It just uses the first bay's I/O pins to control what the CPU sees and to input data, and changes in some way where the data gets stored in RAM"

You can do that on some architectures, but not all; it depends on how the address decoding is done and if you can "chain" it etc.

"(or modifies OS A20 (I need to look into that area a bit more)) to be able to intercept that info."

If you are referring to the Address line 20 issue with IaX86 CPUs: it was a hardware kludge by IBM on the PC AT motherboard to get around an issue that arose from a silicon decision on Intel CPUs prior to the 80286 and its 16Mbyte memory addressing. And it in turn led to a CPU kludge on the 486 and later CPUs, due to the inclusion of cache memory in the CPU, which necessitated an extra pin on the chip to resolve the issue.

The mistake arose from the use of segmented addressing in the IaX86 architecture (a poor way to get away without an MMU and still have virtual memory).

What happened with "real mode" "segmented addressing" was that two sixteen-bit values held in two CPU registers were added together, one offset four bits from the other, to provide a 20-bit address range.

However, if you think a little on it, you will realise that the addition of the two registers allows for more than a 20-bit address range (i.e. not 1Mbyte but 1Mbyte + 64Kbyte − 16bytes).

Intel decided to ignore the "out of range memory" issue above 1Mbyte by simply making the address "wrap around" to the low memory addresses rather than raising an exception.

Some nameless software droid thought he was being "oh so clever" by using this wrap-around to access I/O etc. at what was now the top of his memory address range for his "broken" segment.

Due to it appearing to be a "neat solution" to a problem (that didn't really exist), the idea caught on and spread to many DOS programs.

[Worse idiocy happened with the likes of Compaq, who actually designed "expanded memory" cards in the early 1980s to page lots of extra memory into the region just below the top of the 1Mbyte memory, in the UMA memory region above the original IBM PC 640K memory limit. This became formalised in the EMS specification cooked up by Lotus Software, Microsoft and Intel, all of whom should have known better.]

Thus, without the IBM A20 Gate (out) kludge on the IBM PC AT motherboard, many earlier DOS programs would have been broken in the new 16Mbyte memory model, as addresses above the A20 (1Mbyte) limit were now valid. The A20 Gate was simply an AND gate with one input being the A20 address line and the other "control" input coming from the keyboard controller chip (an 8048 single-chip microcontroller).

If you think this nonsense is all in the dim and distant past of the early 1990s, you'd be wrong, because Intel by default still starts IaX86 CPUs in "real mode" at boot, and the software then has to flip the CPU to "32-bit" "Protected Mode"; but those pesky segment registers still exist as a poor man's MMU, only they make the problems of debugging code many, many times more complicated than they should be, and the results can be seen in the mind-numbing ELF format and the troubles it causes loaders for 32-bit OSs like Linux and BSD. It even affected the "C standard" with addressing arrays: it is why, for array[n], the address array[n+1] is legal but invalid, while the address array[0-1] is both illegal and invalid, because the array might begin at address 0 of the segment, and thus segment −1 might address some other unknown memory 64K above it…
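The arithmetic is easy to play with. A few lines of Python showing the wrap-around described above (masking to 20 bits is what the pre-286 parts effectively did; with the A20 gate open on a 286+, the carry survives):

    def phys(seg: int, off: int, wrap_at_20_bits: bool) -> int:
        """Real-mode address: segment shifted left four bits, plus offset."""
        addr = (seg << 4) + off              # max 0xFFFF0 + 0xFFFF = 0x10FFEF
        return addr & 0xFFFFF if wrap_at_20_bits else addr

    print(hex(phys(0xFFFF, 0x0010, True)))    # 0x0      8086-style wrap-around
    print(hex(phys(0xFFFF, 0x0010, False)))   # 0x100000 286+, A20 enabled
    print(hex(phys(0xFFFF, 0xFFFF, False)))   # 0x10ffef top addressable byte

Addresses 0 through 0x10FFEF are the "1Mbyte + 64Kbyte − 16bytes" figure above, and the sliver from 0x100000 up is the High Memory Area that DOS programs fought over.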

There are a couple of pages up on wikipedia that explains it in more depth,

http://en.m.wikipedia.org/wiki/A20_line

http://en.m.wikipedia.org/wiki/High_memory_area

And also the other expanded (paged) memory nonsense,

http://en.m.wikipedia.org/wiki/Expanded_memory

http://en.m.wikipedia.org/wiki/Upper_memory_area

JJ July 3, 2011 11:15 AM

@Andy

yes, having the surrogatus box as a proxy to sniff the traffic flowing through it is one of the initial intentions. That is why it has two LAN ports (one on each end).

It is supposed to have no hard disk and run the Linux purely in RAM. Not sure how doable this is, but we thought to give it a try.

Best regards,

JJ

Clive Robinson July 3, 2011 12:22 PM

@ Nick P,

I don’t know if you have seen this,

http://mobile.nytimes.com/2011/06/30/technology/30morris.xml

But Bob Morris senior died just a few days ago.

He was responsible, amongst other things, for the C maths library and parts of the security design of the Unix operating system, and spent some time working at the NSA.

Although he trained as a mathematician, he developed an interest in cryptography, and whilst at Bell Labs he and a colleague (Dennis Ritchie) independently worked out a way to automate Jim Reeds's attack on the WWII Hagelin M-209 cipher system ( http://cm.bell-labs.com/cm/cs/who/dmr/crypt.html ).

[For those who don't know, the M-209 cipher system was based on the "coin counter mechanism" Hagelin had developed, and in a slightly different form it was at the time still in active use in various Middle Eastern countries. And the supplier of the equipment was Crypto AG of Zug, Switzerland. As an indication of how serious this was, one of Crypto AG's senior sales people was later arrested on espionage charges in the Middle East, and after negotiating his release he was subsequently prosecuted by a Swiss court.]

Although Bob and Dennis were not specifically asked not to publish the paper, the authors and contributors decided to withhold publication. As Dennis put it, they all felt they had shaken the friendly velvet glove over the hand of steel.

Because of Bob's work at Bell Labs, he later got an invite to visit the NSA, who gave him a job which led to him becoming their chief computer scientist; there he in turn invited Clifford Stoll when Stoll was "stalking the wily hacker".

Bob Morris at one time or another issued sage words on computers and security, including:

1) The three golden rules of computer security are: one, never own one; two, never turn it on; and three, never use it!

2) The first rule of cryptanalysis: always look for plaintext (specifically "known plaintext").

Oh, and he also advised never underestimating the time, effort and resources a determined adversary will employ in trying to read your traffic.

He is survived by his wife, children and grandchildren.

Nick P July 3, 2011 1:31 PM

@ Clive Robinson

Thanks for the reply. You really have to look at their technical paper, though. Plenty of your worries don't apply; they were designed out. One example is container size: it's a memory word, and every word is tagged. Apps already use word-sized containers, so no big deal. 😉

“context switching should be atomic…”

It’s a “zero kernel” design. There is no kernel. The hardware provides mechanisms that effectively replace it. There might still be atomicity issues.

“is run everything in the same memory space cooperatively which we know from MS DOS with Windows on top just does not work”

Actually, it can work. Both Microsoft's Singularity and NICTA's Mungi were single-address-space OSs as far as apps could tell. They still maintained POLA. In this case, memory is structured and every subject/object labeled in such a way as to enforce POLA without a kernel & user mode. (From what I recall…)

So, like I said, this is quite different from many tagging or kernelized approaches. I’d like you to read the technical paper & overview presentation first so that you can hone in on any real weaknesses in the design.

Btw, I compared it to your prisons approach because of functional mediation. You always said you wanted some kind of checking or access control at the function level. You also had simple, MMU-less processors to further isolate them. This design structures the whole of system operation as a series of functions being performed on objects. It also applies mediation to each function, & the word-level permissions apply isolation similar to your prisons approach. Those are my only points of comparison, as I understand your approach is quite different otherwise.

On Morris

I appreciate the link. I honestly hadn’t followed him much because most of his best work wasn’t public. That’s why I like groups like Navy Research Lab: they tell me some of what they figure out. Where would I be without all those papers about covert channel identification and suppression? Oh yeah, still trying to decipher your posts on the subject and reverse engineer the fundamentals out of them. The papers were a much easier solution. 😉

Nick P July 3, 2011 1:46 PM

@ JJ

I know your device can work because it’s similar to designs other people and I have posted in the past. OK Labs demoed a way to plug a smart phone into a keyboard & monitor to form a Citrix thin client computer. I once posted a radically different way to do computing. Everyone would have kind of a compute card with onboard processor, IOMMU, RAM, and PCI. Laptops, desktops and kiosks would be designed to just accept someone’s device. The design allows malicious or otherwise untrustworthy connections because the compute card contains the protection mechanisms. Users wouldn’t have to carry a laptop across the border: just the card and rent a laptop shell (or whatever else) when they get there. It can be used for other things as well, but this is primary usage. Onto your design.

The first thing about your design that concerns me is that it's essentially a stand-alone, general-purpose PC. It has Linux, networking, security software, USB stacks, graphics, etc. It's effectively no different from having a second, small-form-factor Linux PC, like my VIA Artigo board designs. All of the attack vectors available to people targeting Linux, which are numerous, are available to target your system.

I also don’t see you using firmware to your advantage here. The reason is that Linux is so complex and changes so often that updates to important modules happen regularly. I could see you having a “factory state” Linux kernel in the firmware to boot up the updated one, run checks, etc, but that still doesn’t stop a compromise of the kernel being updated. So, the design has less assurance than some of the others proposed, including a LiveCD [because it’s non-writeable by malware].

So, I see it being useful. It might even temporarily have a security purpose due to its obscurity (e.g. Mac OS X). But systems like that are still sandcastles. Even arrows can put holes in castles made of sand.

JJ July 3, 2011 3:11 PM

Hi Nick P.,

thanks for your feedback!

It is true that the device is like a second, small-form factor Linux PC with some specialized functionality. This was the only way we saw it being able to accomplish the tasks we wanted. Our intention has been to make it as small and inexpensive as possible.

But how is the firmware different from a LiveCD, as the firmware is not directly writable by the Linux it runs?

Besides, since the device does not have a hard disk, any attack vectors would have to deal with the fact that the code would have to be directly executable (as it could only be embedded into the RAM memory and thus could not utilize any of the 'autostart-at-boot' functionality in an OS).

Another issue is that any code that would be executed on it would still have to make it to the main PC somehow, and if the main PC is a Windows, the code would also have to take that into consideration.

Best regards,

JJ

Nick P July 3, 2011 3:59 PM

@ JJ

Thanks for the additional information. I didn't know it wouldn't have a hard drive. As for the firmware, I figured you meant it loaded from an EEPROM or flash-type memory like most BIOSes do. The reason I made this assumption is that any device using the Linux platform needs to be able to do security updates for the OS and important libraries like OpenSSL. Otherwise, it remains insecure, lacks necessary functionality to remain interoperable, etc. Without a hard drive, I'd assume it would update and somehow write the new state to memory. That makes the memory writeable. The alternative LiveCD approach just sends out updated LiveCDs every month or so and hence maintains its read-only capabilities.

“Another issue is that any code that would be executed on it would still have to make it to the main PC somehow, and if the main PC is a Windows, the code would also have to take that into consideration.”

Definitely. I was kind of curious as to how your machine would check the main OS. Malware typically escalates privileges and destroys or subverts any software used to detect or monitor malware. If your detection scheme used software, it would have to have kernel-mode privileges and be able to hide from malware that scans memory. This is tricky and brings much risk.

Then I thought you might use hardware. You mentioned USB in one design, but that relies on the OS not being subverted; otherwise the rootkit could feed your device BS data. Next is Firewire, which has direct access to memory. This could work, but requires that the other system doesn't have an IOMMU. In other words, the PC has to intentionally get rid of a very important security feature for pain-free scanning of memory. Otherwise, we have to figure out how to set up Intel VT-d to let your device have total access, while restricting others.

It all just sounds a lot harder to do than the virtual machine or sandboxing approaches. So, how were you planning to get information from the main PC while ensuring it isn’t spoofed by a rootkit?

tommy July 3, 2011 4:47 PM

@Nick P., a Quick-E.: 😉

“the firmware would do self-tests of the hardware, load/verify the software, and give it control.”

How does it "verify the software"? OSs are constantly changing as users make config choices. I could change a single bit in one Registry value from 0 to 1 or 1 to 0, and the OS will hash differently. Even the BIOS has user-configurable settings, which would change a hash output.

So, are we back to AV-style signatures and heuristics? I'm not seeing how the verified firmware would "verify" anything higher up, though I'm sure you have a solution.

Back to the parades and fireworks! Catch you tonight or tomorrow.

tommy July 3, 2011 5:01 PM

@ Clive R.:

Meant to say (memory lapse), sorry to hear of your illness. Here’s to a swift and full recovery!

“Get-well-card” disguised as song parody, written to a friend:

http://www.amiright.com/parody/70s/gordonlightfoot136.shtml

The original song, about an actual US shipping disaster, isn’t so well-known in the UK, though it’s all over YouTube, I’m sure. I’d write one for you, but don’t know the details and don’t need to know. (also, it’s a holiday weekend here.) Please consider it adapted for yourself. ;-D

Dirk Praet July 3, 2011 5:23 PM

I wonder just how difficult it would be for BIOS manufacturers to allow for an OS-independent MBR backup/recovery feature that throws an error at boot time when checksums don't match. For most computer-literate people, that would be a dead giveaway that something really fishy is going on.

JJ July 3, 2011 5:24 PM

Hi Nick P.,

about how to get information from the main PC while it is running and while ensuring that the information is not spoofed by a rootkit, that is a good question.

I do not think there is a solution to this that will work in every case. Mechanisms such as communicating directly with the miniport driver are under consideration, but because they all go through the host OS in one way or another, they run the risk of being intercepted. The exception is somehow detaching the hard disk, plugging it into the box and scanning it that way (which is not exactly user friendly).

Another thought was to use the box to scan for all open ports on the users PC. Sort of like a personal penetration testing tool.
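
The scanning part at least is straightforward. A minimal TCP connect-scan sketch in Python (the target address and port range are placeholder values):

    # Minimal sketch: TCP connect scan from the box against the user's PC.
    # 192.168.1.10 and the 1-1024 port range are placeholder values.
    import socket

    target = "192.168.1.10"
    for port in range(1, 1025):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.2)
        try:
            if s.connect_ex((target, port)) == 0:   # 0 means the port accepted
                print("open:", port)
        finally:
            s.close()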

Besides that, thank you for those points about DMA. This is one aspect that we have not had a chance to put much of any thought into yet.

By the way, that other issue from my previous posting, about the code “that would be executed on [the surrogatus box] would still have to make it to the main PC somehow”: that would mainly apply in cases where the user is browsing the web (or their own computer) through a remote terminal session running on the surrogatus box.

If the box is just being used for tracing network connections from the PC (by connecting LAN cable from PC to the box, and another LAN cable from box to inet router), the PC will be just as vulnerable as without the box.

Best regards,

JJ

Nick P July 3, 2011 5:37 PM

@ tommy

“How does it “verify the software”? OS are constantly changing, as users make config choices.”

That’s not the software: that’s the system. This one word change makes a big difference. The verification is done on the code of the software, optionally the data as well. Maybe a whole image that’s copied directly into RAM, with some careful initialization afterward. Either way, the initial software TCB is digitally signed by the producer and the high assurance boot system makes sure the software matches the signature before booting.

An on-board signing mechanism, similar to a TPM, is used to sign any changes to the system. I usually force booting into an update or maintenance mode that ensures a secure system state before validating the updates, applying them, signing the new configuration, and storing it. Note that this level of protection is applied to the TCB of the platform, which the user rarely messes with. Other things are protected by the TCB in a robust way (like the Nizza architecture signing app) or just managed within a partition/application in low assurance way (legacy software, Linux API layer, etc.).
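
To make the check itself concrete, here is a minimal sketch of the signature verification step, using Ed25519 via the Python cryptography library (the key and image file names are made up for illustration):

    # Minimal sketch of high assurance boot: verify the producer's signature
    # over the software image before giving it control. File names illustrative.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    producer_pub = Ed25519PublicKey.from_public_bytes(open("producer.pub", "rb").read())
    image = open("tcb_image.bin", "rb").read()
    signature = open("tcb_image.sig", "rb").read()

    try:
        producer_pub.verify(signature, image)       # raises InvalidSignature on mismatch
    except InvalidSignature:
        raise SystemExit("TCB image fails verification - refusing to boot")
    # only now copy the image into RAM, initialize, and transfer control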

Note that we still have to trust that the software wasn’t modified before release. The whole platform has excellent security, but if they’re running Subversion on an off-the-shelf Linux PC, then attackers might help their software live up to its name. So, here’s the trusted development path in a nutshell:

  1. Requirements must make sense.
  2. Every high-level design element must correspond to one or more requirements.
  3. The security policy must be ambiguous, meet requirements, and be embedded in the design.
  4. The Low Level implementation modules must correspond with high level design elements, at least one each.
  5. The source code must implement the low level design with few to no defects.
    (EAL6-7 requires all of this)
  6. The object code must be shown to correspond to the source code and no security-critical functionality lost during optimizations.
    (DO-178B Level A requires this & CompCert can do it.)
  7. At least one trustworthy, independent evaluator must evaluate the claims and sign the source code to detect later modifications by the developers or repository compromise.
  8. The executable should be produced using the compilation strategy above by no less than 3 mutually distrusting parties, with the resulting binary hashed, signed and signatures released. (My original scheme used government labs in US, China and France, although could substitute Japan for China if US complains.)
  9. Administrators can then download a binary from anywhere, get the hashes from each party, verify their signatures, and then use that trustworthy hash to hash/verify the binary. Like Schell said of GEMSOS, “you can buy it from your worst enemy” [and still know it wasn’t subverted]. This process is stronger.

This is essentially what it takes to make a software product trustworthy from conception to distribution. If we’re satisfied with a product and just worried about subverted binaries, then we can just do 6-9. That’s still more than almost anyone else is doing, and a significant improvement in assurance if the source code gets lots of review, the Linux kernel being the best example.
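
Step 9, by the way, is entirely mechanical. A minimal sketch of what an administrator would run (party names, key files and hash files are hypothetical; Ed25519 via the Python cryptography library again):

    # Minimal sketch of step 9: accept the binary only if all three mutually
    # distrusting parties signed the same hash of it. All names hypothetical.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    parties = ["us_lab", "cn_lab", "fr_lab"]
    binary_hash = hashlib.sha256(open("product.bin", "rb").read()).digest()

    for p in parties:
        pub = Ed25519PublicKey.from_public_bytes(open(p + ".pub", "rb").read())
        published = open(p + ".hash", "rb").read()
        pub.verify(open(p + ".sig", "rb").read(), published)   # raises if forged
        assert published == binary_hash, p + ": hash doesn't match the binary"

    print("All three parties vouch for this exact binary.")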

So, the software TCB must be made, installed, initialized, configured and operated in a trustworthy fashion to call the result a secure (trustworthy) system.

I’ve been thinking about making an SCM (e.g. repository) system my first high assurance implementation, to bootstrap trust into other projects. The system would be designed for ease of use, compatibility with existing tools, utter confidentiality/integrity protection, and defeat of all non-physical attacks. A distributed SCM would reduce the risk of corrupt administrators by increasing the number and locations of SCMs to be subverted. It would also reduce the risk of loss of availability due to failure of a node.

Nick P July 3, 2011 5:40 PM

@ JJ

“Besides that, thank you for those points about DMA. This is one aspect that we have not had a chance to put much of any thought into yet.”

The tools already exist. They were used for hacking, but you can repurpose them. Have at it…

Firewire Toolkit
http://www.storm.net.nz/projects/16
(Site is currently down. Maybe tools are available elsewhere. Wayback machine should have the descriptions.)

RobertT July 3, 2011 6:56 PM

@NickP
“3. The security policy must be ambiguous, meet requirements, and be embedded in the design.”

I’ve got a feeling there are some letters missing, such as un- in front of ambiguous!

Nick P July 3, 2011 7:00 PM

@ RobertT

LOL! Thanks for spotting the typo. Yes, “unambiguous” requirements. Typos are one easy way to tell when I’m writing in a hurry.

What are your thoughts on my breakdown? I left some aspects out because I was trying to describe the fundamentals. Care to write one up for an SOC trusted from design to deployment? Is it even possible with current attack and defense capabilities?

RobertT July 4, 2011 1:19 AM

@NickP
“Care to write one up for an SOC trusted from design to deployment? Is it even possible with current attack and defense capabilities?”

Interesting question. The completely honest answer is that it is impossible to guarantee SOC integrity using current methodologies and typical commercial fabs. But this analysis is theoretical rather than practical, because to date nobody has ever reported any evidence of an SOC hardware exploit, or exploit assist.

Having said the above, nobody is actively looking for signs of SOC corruption. This makes an SOC exploit like the perfect rootkit: if it is completely unobservable and never screws up, then nobody will ever have reason to check the correctness of the exploited hardware.

To be honest, the best chance of discovering a hardware SOC exploit is when someone fixing another, unrelated problem stumbles across metal traces that are wrong according to the reference “tape-out” database. But even this assumes that the error was introduced after tape-out; in truth it is more likely to be inserted ahead of this stage.

Consider the organizational position of anyone trusted by a company to develop specific anti-exploit hardware. You’ll hear a lot of statements like: “We added that exactly this way because RobertT said ‘Add it.’ He supervised it and this is exactly what he wanted.”

One last conundrum: almost everyone with an in-depth knowledge of EAL5-EAL7 hardware requirements is indirectly affirming their affiliation with a certain TLA, or its organizational equivalents within other countries. In other words, they’re all sus!

BTW: if that’s not enough to depress you, think about who really issues EAL5+ certifications. Now consider the problems:
1) Do they really want perfect security?
– Even if you are capable of it.
– Especially if you are capable of delivering it.
2) What actual recourse exists to contest an EAL7 certification fail?
3) What metadata about your organization is implied by a successful EAL6+ certification?

Nick P July 4, 2011 1:40 AM

@ RobertT

Thanks for the reply. Well, all of that kind of sucks. 🙁 I guess subversion-free hardware development is the next frontier for those wanting maximum assurance or trust in their systems. Cheap, DIY fabs anyone? (I know at least one person working on something like that.)

You pose interesting questions about EAL5-7. I’d actually say that, at the EAL6/7 level, they really do want [near-]perfect security. The NSA spends tens of millions of dollars of its own money on an A1/EAL7 certification. The resulting software product is legally classified as a munition and subject to export restrictions, possibly domestic sale restrictions. The NSA also uses them internally for highly classified information. All of this together makes me think they want the product to be truly secure. (Note: EAL5 isn’t considered “secure” but offers a “significant increase in assurance with minimal application of specialist security engineering techniques,” as one document described it. “A step up from the norm,” I’d say.)

That said, we do know they want backdoors and control. So, how can they achieve both? Well, hardware is the first issue. Running a secure OS on top of high quality firmware that the NSA vetted and has no source might lead to an “update” or “remote troubleshooting” feature that does an end run around the software TCB. They might also expect to get in using application-level attacks, TEMPEST, social engineering, physical attacks, or users not using the evaluated configuration.

I mentioned the configuration issue last because it’s the best: an evaluation’s results apply only to the evaluated configuration, and people rarely use it. INTEGRITY-178B was EAL6+ on a PowerPC board with certain firmware. Its security claims may not hold when virtualizing Windows on a COTS x86 PC with vPro management enabled. I see this so often it’s barely a punch line anymore.

As for metadata and stuff, a company wanting to hide its procedures should certainly worry about that. A private evaluation by a lab accredited for EAL6-7 evaluations would be better for confidentiality and probably cost (red tape eliminated). If they want to sell to the government, they can just do a cheap EAL4+ evaluation in addition to the private one. I’m also concerned about any restrictions that the developer might have to agree to in order to get a high assurance evaluation going. Again, a private evaluation is better here too. The context of my efforts wouldn’t be like this: our organization would be more open and work closely with academia & other companies. So, it wasn’t a concern for me, but I see others being bothered by the issues you bring up.

tommy July 4, 2011 2:56 AM

@ Nick P.:

Sorry for my one-word confusion between TCB and OS. I think perhaps I shouldn’t post on holiday weekends. 😉

OTOH, I also noticed “ambiguous”, though I was later in replying. Perhaps we all need to watch ourselves when cheering the holidays. 🙂

The silver lining to the cloud was the nice enumeration of requirements, even though shortened. I especially liked the “three mutually DIStrusting parties” and “you can buy it from your worst enemy” — hadn’t heard those anywhere else.

Very unsure how US-China would play out. Big easing of tensions if it worked. But with all the talk of Chinese attacks on US gov and mil, would the Chinese even want to do this mutual-security thing with us? They might be the sticking point rather than the US. Idle thoughts.

RobertT July 4, 2011 3:40 AM

@NickP
As a business proposition, do I really want one customer, who may or may not buy my product, telling me who I can and cannot sell it to? It seems to me that INTEGRITY has asked themselves the same question. They seem to imply: buy our EAL4+ product with this specific PowerPC plug-in board and you’ll get EAL6+ performance (wink wink).

@tommy
“Very unsure how US-China would play out….”
In politics there is an age-old adage: “keep your friends close and your enemies closer.”
These wise words can easily be parlayed into profitable enterprises by anyone willing to live a little on the edge. But even such a cynic would be wise to consider who’s really being fooled by whom in the whole charade.

Richard Steven Hack July 4, 2011 3:45 AM

Clive: “however few forensic examiners have the time or knowledge to pursue meta-evidence”

Exactly. As long as your hacker can hide enough forensic evidence to make the examiner exceed his time limit and budget, he’s home free. I read somewhere that British police have something like 16 hours to do their forensics search, after which time they have to move on to the next case because they are so backlogged.

Presumably every other police force (local or federal) is in the same boat.

“Some namless software droid thought he was being ‘oh so clever'”…

Precisely the industry’s problem – too many “clever” blokes by half and not enough with common sense…or any concept of usability, reliability, security, or any other concept except “can we do this and get it out the door”?

And THAT will never change because THAT is human (corporate AND open source) nature.

“1) The thre golden rules of computer security are one; never own one, two; never turn it on, and three never use it!”

I just spent the weekend rebuilding a box for a client who is one of those people who should follow all three rules! I guarantee you he will find a way to hose the box within the next 48 hours…And he can’t afford to pay me until next month…

So this time I installed openSUSE 11.4 in a dual-boot configuration. As long as he only hoses Windows and not the boot sector, he has a chance of having a functioning OS to work with until he can afford to have me fix Windows…again…

As you know, Bob Morris’ son is also famous for having released the “Morris Worm” – although there are people (like me) who suspect that really was an NSA “pulse the system” test…

Bottom line of all the “secure hardware and software” stuff above:

1) Not going to happen because governments are afraid the secure stuff will end up in “unfriendly” (read: anyone else’s) hands;

2) Even if it is produced and deployed by one state, the specs will be stolen (by kidnapping the designers if necessary) and the systems produced by “unfriendlies” (read: everyone else), ergo you end up with the state’s worst nightmare: being unable to spy on everyone else.

3) and finally: someone will figure out how to defeat it – probably within 24-48 hours of first implementation – even if they have to develop a nanotech analysis device that crawls inside the hardware and specs it out, a capability that should exist within the next decade or two. Or as I said above, just kidnapping someone who knows…

Sooooo…we’re back to my meme.

It’s a fun challenge to try to develop something no one else can defeat – but it’s ultimately a waste of time, except to the degree that it can be implemented well enough to at least “keep out the riffraff”.

The question is how much do you want to spend in developing an entirely new form of computer and software technology just to keep out your enemies?

Once again, there is only one way to deal with enemies: don’t make any. Failing that, become their friends – then poison them.

Tommy: “Big easing of tensions if it worked.”

Actually, no. Those tensions exist for a reason and it’s not because either side is worried about being penetrated. They exist because there’s money (and power) involved in making them exist.

It’s like the anti-nuclear people never understood that all those nuclear weapons were never intended to be USED – just PAID FOR.

Richard Steven Hack July 4, 2011 4:30 AM

From another article on the rootkit: “Yaneza explained that by using repair tools found on the Windows system restore disk, users can repair the boot sectors targeted by the attack.”

Right, sure – not one of the clients I’ve ever had could do that. They don’t even know what a boot sector is. Nor do they have bootable recovery disks for Windows 7 or bootable XP boot CDs.

Which brings me to another point: Anyone tried to fix a Windows 7 boot problem lately? Supposedly all you have to do is reboot, wait for Windows 7 to detect a problem, select the OS and then select “Repair System”. If it can’t succeed in repairing it, you can drop to a command line and run bootrec or other tools.

Except what happens when the corruption is bad enough that Windows can’t detect the OS? I’ll tell you. You can’t break out of the “Repair System” process to get to the command line tools! Catch-22!

You have to boot from a WINDOWS 7 PE environment which is sufficiently complete that you have a DOT NET environment and whatever else is necessary to enable those tools to properly initialize themselves.

Microsoft won’t even let you run a “Repair Install” any more from the command line like you used to do in XP. You must run it from WITHIN a RUNNING Windows 7! How smart is that?

They claim the reason is that doing repair installs on XP made XP “less stable” (despite the fact that this was a recommended procedure for years when all else fails). So how is doing a repair install from within a running 7 going to make 7 any more stable? How often would anyone have to do a repair install for a system that is running fine enough to DO a repair install?

It’s a ridiculous statement. It’s lies, plain and simple.

It’s almost as if Microsoft wanted to make sure that no one could repair their systems if they were infected with something like this.

If this new rootkit gets spread around, I’m going to make a fortune cleaning or reinstalling Windows PCs. I better raise my rates…

Actually most people IF they even realize they have such a rootkit will probably just reinstall Windows and not call me at all (assuming they have an install CD or partition and know how, which is maybe one percent of the clients.)

Timothy: “Your best and safest bet is probably to backup your files with a known safe bootable OS and do a complete reinstall.”

If you know a home user who knows how to do this and actually does it, I’d like to know who. Most home users don’t even have their original install CDs, let alone a bootable recovery or live CD.

And what good is a reinstall when the user will immediately do the same thing that got him infected in the first place?

I just spent the weekend cleaning up a box for a guy like that (although I was really just installing a new hard drive since the old one had bad sectors). I had to do a clean install of XP however because earlier, despite what I told him previously not to do, he ran IE against a porn site. He managed to clean up the system (supposedly) with various anti-spyware tools but his XP was now riddled with problems.

He’s got a PC death wish.

RobertT July 4, 2011 4:55 AM

@RSH
OT: I just discovered my father-in-law has a TDL-4 infection on his Toshiba laptop. It got really flaky at startup and powerdown, so I recommended he scan for this with a LiveCD… bingo!

What’s the recommendation:

Full re-format / reinstall
or
New HD + clean install

He is somewhat mobility constrained, so I don’t want to force him to run all over town finding a laptop HD unless it is highly probable that the HD is beyond normal repair.

BTW: he has no user personal data on the drive and basically just uses a standard “out-of-the-box” MS load.

Richard Steven Hack July 4, 2011 6:27 AM

Run a Live CD like UBCD4Win or, even better, a Linux Live CD, and use the available utilities on the boot CD or the Unix dd command to fill the entire drive, including the boot sector, with zeros. Then reinstall.
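
If you’d rather script it than remember dd’s syntax, here is a minimal Python sketch of the same zero-fill (/dev/sdX is a placeholder; this irreversibly destroys everything on that device, so triple-check the name):

    # Minimal sketch: overwrite an entire disk, boot sector included, with zeros.
    # /dev/sdX is a placeholder. Irreversibly destructive - check the name twice.
    chunk = b"\x00" * (1024 * 1024)

    with open("/dev/sdX", "wb", buffering=0) as disk:
        while True:
            try:
                if disk.write(chunk) == 0:          # end of device on some kernels
                    break
            except OSError:                         # ENOSPC once the device is full
                break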

I’d say try to clean it but if he has no personal data on it, a reinstall is faster (although zeroing the drive could take a while depending on the disk size). Also if it’s Windows XP, it could take a couple hours to re-apply all the patches. I just did that the other day for a client and you’ll be applying around a hundred patches on top of an original CD install, plus Service Pack 3. If Windows Vista or 7, less so.

Keep in mind everyone has SOME personal data on a machine if nothing more than some browser bookmarks.

The important thing is to find out how he got it in the first place so he doesn’t do it again five minutes after he re-installs. Put Avast AV, Superantispyware, MalwareBytes Anti-malware, and ThreatFire 3 – all free to home users – on the machine. Only let him use the current Firefox 5.0 with NoScript and AdBlock installed, and never run IE for any reason. Beyond that, there’s not much you can do.

Richard Steven Hack July 4, 2011 7:45 AM

Yeah, I have several different boot CDs as well as several different Linux Live CDs. Hirens used to be mostly warez, but they cleaned it up in the last couple of years and it now mostly carries freeware. There are several such repair CDs based on the Windows XP PE environment with good collections of tools.

The thing now for me is to get a Windows 7 PE environment similar to UBCD4Win. The reboot.pro Web site is the center for that sort of thing. They have a couple Win7-based PE projects. I use their LiveXP project for WinBuilder when UBCD4Win has a problem with a particular machine’s hardware.

I need to build a Win7PE live CD that can handle .NET so I can use some of the freeware and shareware Windows 7 BCD fix tools like EasyBCD. Windows 7 seems to corrupt its boot system very easily – I had a brand new install of Win7 do that within 8 hours of its being put in production. Google for tons of reports of it doing that. It’s as unreliable at handling its boot process as XP was. And it’s hard to fix if the corruption screws up Windows System Repair function. And with this new bootkit malware, that’s going to be a problem.

RobertT July 4, 2011 8:40 AM

Thanks Richard, sounds like good practical advice!
I’ve actually got him using Ubuntu 11 with FF as an interim fix from LiveCD. I’ll see if he complains; if not, he’ll get an Ubuntu load on the HD. I find it much easier to load than XP or Vista. I’m not sure it has any better security, but it may be different enough to just make him a less attractive target.

Nick P July 4, 2011 4:13 PM

@ Richard Steven Hack

“1) Not going to happen because governments are afraid the secure stuff will end up in “unfriendly” (read: anyone else’s) hands”

It’s already happening and been happening: INTEGRITY-178B, VXWORKS MILS, GEMSOS, Boeing SNS, MULTOS, Caernarvon, etc. They might restrict some of the evaluated products stateside, but it wouldn’t happen in a few other countries. Could just keep the I.P. there and use ITSEC instead of Common Criteria for the high assurance evaluation. That’s how MULTOS was certified. Got ITSEC E6 to prove security, then a basic EAL4+ Common Criteria certificate just to sell it over here. Cheaper & dodges all that legal B.S.

“2) Even if it is produced and deployed by one state, the specs will be stolen (by kidnapping the designers if necessary) and the systems produced by “unfriendlies” (read: everyone else), ergo you end up with the state’s worst nightmare: being unable to spy on everyone else.”

My scheme was very open. It’s necessary to ensure security and subversion resistance. It doesn’t matter though. They would just worry about it, but it wouldn’t have those results. OpenBSD is a high quality UNIX that defeats many attacks, but the state can still spy on most people. Only a small segment would use the secure platform and only a small percentage would use it correctly. 😉

“3) and finally: someone will figure out how to defeat it – probably within 24-48 hours of first implementation – even if they have to develop a nanotech analysis device that crawls inside the hardware and specs it out, a capability that should exist within the next decade or two. Or as I said above, just kidnapping someone who knows…”

Dude, are you drunk? A ranting pessimist venting on the blog perhaps? Since when does knowing the source or design guarantee an exploit? I can give you a link to the source code for OpenBSD, AES-256 or good old PGP. And now you can automagically defeat them, right? Not a chance, and less so for a high assurance product. Maybe for hardware, since its protection is usually obfuscation, but high assurance security works regardless of whether enemies have the source code or not. The thing that’s hidden is always small, like a private key. See Kerckhoffs’ principle.

“Once again, there is only one way to deal with enemies: don’t make any. Failing that, become their friends – then poison them.”

Ok, definitely alcohol. Probably something 50% if you think that statement even partly applies to remote, unknown or nameless enemies. Bad analogy. I’d say make their exploits as hard as possible, force them to come in with physical attacks, and then unload on them with hydroshock. Or you can try your approach, poisoning them via TCP/IP… somehow… Just down another shot, RSH.

“It’s like the anti-nuclear people never understood that all those nuclear weapons were never intended to be USED – just PAID FOR.”

Tell that to the previous residents of Hiroshima and Nagasaki, along with the many Cold War folks who nearly shot them off a few times. They were definitely made to be used. Politicians just wised up, realized it was a M.A.D. idea, and decided they’re better for posturing.

Richard Steven Hack July 4, 2011 5:42 PM

Nick: All wrong.

First, Hiroshima and Nagasaki were done with the first couple of A-bombs. A few thousand more were built to allow the US to bomb most of Russia, first on bombers, then on nuclear subs.

ALL the rest were built just to be paid for because everyone knows in a full-scale nuclear war all the US airfields, arsenals, and missile sites would be destroyed in the first 30 minutes (and the same on the Russian side.) So all those other thousands of stockpiled nukes were a complete waste of money (even allowing for rotation, degradation, etc.)

“Since when does knowing the source or design guarantee an exploit?”

You have great confidence in your ability to design a completely secure system, right?

Well, you’re wrong. I stand by my statement. Anything you can come up with will be defeated by someone else, no matter how much effort that takes or what kind of end-run around your technology has to be done. History bears this out and doesn’t even come close to bearing you out.

“remote, unknown or nameless enemies.”

While these are possible, they rarely exist in reality. Almost always, just as with someone who murders you, you know who your enemies are. Very rarely does someone come out of nowhere. You may not know precisely WHO they are until they reveal themselves, but you know they exist and you know why they exist.

“I’d say make their exploits as hard as possible, force them to come in with physical attacks, and then unload on them with hydroshock.”

Yes, generally that is a good approach – unless they come in with their own hydroshock. Then you’d better be prepared to flee (or make sure they can’t find you in the first place) and retaliate when your tactical situation is better.

In other words, you’re not always the one with the superior firepower. This is especially true if you’re going up against a government.

Keep in mind that a given computer system with your “secure” technology isn’t being protected by “the government”. It’s one device being protected by a limited number of people, a limited amount of technology and whatever benefits your technology brings to the situation. Its enemies may include thousands of civilian hackers of varying skills, an unknown but large number of hackers from other states, not to mention other forms of compromise.

The odds are NEVER in your favor in that contest.

“OpenBSD is a high quality UNIX that defeats many attacks, but the state can still spy on most people. Only a small segment would use the secure platform and only a small percentage would use it correctly. ;)”

And the same is true of any system you can devise. Which means the systems on which it is deployed are not secure – there are still ways to end-run the security, usually by tricking the end user. Unless your technology can prevent a human from screwing up (good luck with that), it doesn’t matter how secure your technology is.

You’re myopic. You’re too fixated on the hardware and software solutions to realize that that’s a very small part of security. And that’s not even including the fact that whatever you can build, someone can defeat – even if they have to build ever newer technology to do so.

It’s an arms race. It will always be an arms race. It’s that simple. At best, you can only be “secure” to a certain degree against certain enemies for a certain time. Nothing more.

In most people’s cases, that is adequate, because the odds are that most people won’t be attacked: they have no specific enemies who want to attack them, because they’re nobodies.

That doesn’t apply for people who have something worth attacking them for or who have made enemies the way the US (or Al Qaeda or Iran or whoever) has made enemies.

For those people, nothing you come up with will be sufficient forever. The best that can be done, again, is to “keep out the riffraff” and make it harder to penetrate your security.

I’ll repeat my meme: You can haz better security, You can haz worse security. But you cannot haz security. There is no security. Suck it up.

Richard Steven Hack July 4, 2011 5:53 PM

RobertT: “I’m not sure it has any better security, but it may be different enough to just make him a less attractive target.”

Trust me, it has better security. Enough to “keep out the riffraff” (i.e., Windows script kiddies) at least, which is the best you can do with any system. If a pro hacker wants to get in, he’ll get in. But malware isn’t as smart as a pro (until someone invents real AI), so it’s easier to keep out.

As long as he runs NoScript in Firefox he can prevent most browser hijacks while he’s running, and as long as he’s running from read-only media he can prevent system compromise.

Even installed on his hard drive, as long as he’s patched he would only be vulnerable to a zero-day, and very few hackers are searching for and finding zero days on Linux. That will only change if Linux gets more desktop share. Although you’d think since Linux has such a large server share that hackers would be more proactive about trying to learn how to breach it – especially since most computer security tools are developed and run on Linux rather than Windows.

It’s just a lot of work to become expert enough in an OS’ internals to develop zero day exploits and there aren’t enough Linux boxes to make that worthwhile when Windows is so much easier.

That’s why most hackers are coming in through Web sites these days – Web sites are programmed by people who aren’t competent programmers, still less security experts, and they’re full of security vulnerabilities that are much easier to spot and utilize.

No OS is invulnerable and no OS can be made invulnerable. But Linux is significantly more secure than Windows will ever be – as long as the end user doesn’t run as root. Almost every Windows home user runs as administrator on his box, violating PC security rule number one.

tommy July 4, 2011 6:08 PM

@ Robert T.:

“keep enemies closer” … also said by The Godfather in Mario Puzo’s novel, and, IIRC, the movie. The Mob knows a “little bit” about security, though they don’t have even the small restraints placed on LE (yes, oft ignored, I know.) I agree with your using that analogy, and share your cynicism about who’s fooling whom.

@ Richard Steven Hack:

I’ve previously voiced the same concern on other threads, about Gov not wanting to let high-assurance systems get into private or foreign hands. I share your cynicism in general, but will let Nick P. and you argue – uh, discuss the practicalities.

Re: Win DVD/recovery DVDs: Like so many OEM pre-loaded machines, mine doesn’t even include a retail Win DVD, only the OEM’s “recovery disk”, complete with their pre-loaded, paid crapware, and from which you can’t really extract one file or do a repair-only install, except for certain individual apps or modules on the pre-load system and DVD. Hence the need for true boot-and-recover CDs and backups. Acronis has been mentioned on this site — has worked for me — but there are several others out there. I don’t know how well the others work.

Most home users wouldn’t do the frequent backups — keeps you in business with your friend, and I say, don’t give him back the machine until he pays. 😉 They rely on System Restore, but if the HD dies, or “unmountable boot volume” (0x… ED) – I’ve had both — how do you get to SysRestore? Hence, independent boot/recovery is a necessity.

More advice for helping home users: Virtualization is beyond most, but they can be taught to use Sandboxie in only a brief session, especially if you configure it for them. (Just yesterday, helped a friend drill the required holes to allow bookmarks, NoScript permissions, etc. to be saved to HD.) I’ve mentioned it at the USB-found-in-street thread,

http://www.schneier.com/blog/archives/2011/06/yet_another_peo.html

No response as of last night.

@ All regarding USB Boot sticks:

Tried to make one a year or so ago. After hours of torture, I finally called OEM tech support, and was assured that none of this OEM’s laptops were USB-bootable, except for their top-end line. The mobos just aren’t made to boot from USB flash drives. Guess I should have called before trying… But it boots fine from USB-connected non-flash memory, such as an external USB-connected CD or DVD drive with bootable media. Go figure…

@ SM Builders and Moderator:

Yep, it’s true. Non-savvy users are lost.

And we hope you’re an owner or otherwise connected with the construction company in your sig, because the Moderator doesn’t take too kindly to spam (nor do the rest of us). Please let us know your connection to the company, or the post will probably be deleted. Cheers.

Richard Steven Hack July 4, 2011 11:19 PM

Tommy: Haven’t messed with Sandboxie yet. Probably should look into it.

I have several bootable USB drives – one with UBCD4Win, one with BackTrack 4 (soon to be upgraded to 5 on that stick), and one with openSUSE 11.4 – or is it still 11.1, can’t remember…:-)

I need to set up Windows XP all-in-one installs and Windows 7 all-in-one installs on USB keys as well. Because one problem I find in relying on CD and DVD drives is how often those drives are either defective on client machines or simply don’t like the CD/DVD I burned on mine.

Netbooks don’t even have CD/DVD drives. If your hard drive conks out on those, you’re really in trouble, at least in trying to install Windows XP. I think Windows 7 can be booted from a USB drive if I’m not mistaken (I can’t remember), but XP can’t be. Only a USB flash drive can recover you then.

As for some laptops not booting USB devices, frankly laptops are a crippled form of PC. They’re expensive, fragile, not expandable, missing useful features frequently, and generally otherwise crap. Nice to have, as are netbooks, for specific purposes, but should never be used as one’s primary machine. Despite all the tablet and smartphone talk, a desktop remains the best way to use a computer.

But even many desktops, older ones I constantly run into with my (cheap) clients certainly, frequently won’t boot a USB device.

At least Windows Vista and 7 allow you to install device drivers during install from a CD these days instead of a floppy disk. Otherwise with XP, frequently one has to resort to “slip-streaming” the drivers. Not difficult but another time-wasting step due to Microsoft myopia.

The latest trick is multiboot USB drives that allow you to boot any of a dozen or more Linux distros or antivirus rescue disks from one flash drive (given sufficient size). Not sure how often that’s really needed, but the capability is there.

USBs do have their problems. This weekend I put my 16GB in my client’s machine and Windows corrupted it so my Linux box couldn’t see it. Had to reformat it and lose everything on it. Almost all of it is replaceable – but I have to either remember what I had on it or decide to just reload new stuff anyway.

I have around nine or ten USB drives with various utilities, OS’s and other things on them. I could use a couple more. I carry them around in two Case Logic USB drive cases that hold six drives each – very convenient and it ensures I don’t lose them. When I take one out, I pull off the cap and leave it in the open case, so I remember to find it and replace it before I leave. Haven’t lost one yet! 🙂

tommy July 5, 2011 3:03 AM

@ Richard Steven Hack:

I would be interested in your analysis of Sandboxie (and of everyone else’s here), in context — not as the “perfect solution”, but as a here-and-now method to increase greatly the safety of non-tech users (or techies, for that matter) who are willing to spend 20-30 minutes to learn how to keep stuff in the sandbox, and how to get it out when desired. Once that’s done, and it’s configged to save bookmarks, whitelists, etc., it’s a no-brainer from there. Looking forward to all, whenever.

“one problem I find in relying on CD and DVD drives is how often those drives are either defective on client machines”

No kidding. The older machine’s optical has been highly intermittent for years (shop repairs/replaces it; fails next day), and the newer machine’s is pretty reliable on the read side, but if I try to burn my FDI-backups, it might stop in the middle, thus wasting a DVD blank.

So both are equipped to recognize an external CD/DVD read/writer that has been unfailingly reliable for years, small, lightweight, and not very expensive. If I need my Acronis restore, and I cannot boot from USB flash as noted, and the native optical drive fails …. hosed, except that the external writer will be recognized by BIOS and will boot/load/recover.

You are looking at the world’s greatest supporter of redundancy. 🙂 Note having two machines, in case one is in the shop awaiting parts. It’s happened. Could borrow friends’ machines, but can’t trust them – the machines, not the friends, lol.

“frankly laptops are a crippled form of PC. They’re expensive, fragile, not expandable, missing useful features frequently, and generally otherwise crap. ”

Yup. But not that expensive for low-end, which meets my needs, and not that fragile. This one’s been dropped twice. And don’t know what “features” are missing…

“Nice to have, as are netbooks, for specific purposes, but should never be used as one’s primary machine.”

Except I do. (Mine were expanded in RAM, up to 3-4x OOB, and the dead 80 GB HD was replaced with a 250, not because I needed it – I use less than 1 GB — but the 250 was in stock and cost not much more than waiting a couple of days for the 80.)

For a good while, I was seasonally employed. Roughly half a year here, then half a year a long distance away. Needed car and personal stuff — too long a time away to fly with enough luggage, and cost too much to rent a car at destination for six months. So everything had to fit in the trunk or passenger seats. Which means, “laptop”.

Not so anymore, but still, I wouldn’t spend so much time here, or anywhere else on the Net, if it meant being locked in front of a monitor in the den or office or whatever. Wireless — sit on the living room sofa, or I have an outdoor deck with a very pleasant view and fresh air, which is where I’m writing this now. And where I can write business documents and e-mails as well. (Which a smartphone won’t do.)

Wirelessly-accessed printer/scanner… ahhh! 🙂

“I put my 16GB in my client’s machine and Windows corrupted it so my Linux box couldn’t see it.”

Well, that bites. (no pun intended). Which is why everything essential on my USB sticks is regularly burned to CD/DVD “just in case”.

I switched to the capless variety, with the slide, but admit they’re more fragile. And I “lose” the darn things around the house so much (the sofa eats them), that I tied brightly-colored ribbons through the loops. They show up in My Computer as “Cruzer Blue (drive:) and “Cruzer Gold” (drive:) to match the ribbons. It all helps.

Clive Robinson July 5, 2011 3:14 AM

@ RSH,

“… only be vulnerable to a zero day, and very few hackers are searching for and finding zero days on Linux.”

That I’m not sure on for the simple reason most admins only know their machines are owned when abnormal things happen.

If the more intelligent black hats have woken up to the ROI on APT as opposed to losses on DDoS / SPAM bots, then well, I’ll let you join the dots (the Chinese and Russians already appear to have, from what I can see 😉

“That will only change if Linux gets more desktop share.”

That applies more to low hanging fruit criminals and script kiddies, after a small return at best. If and when the Banks get woken up by the legislators then I expect the position will change as the nice ripe peach low hanging fruits of EBanking in small businesses will (hopefully) disappear.

“Although you’d think since Linux has such a large server share that hackers would be more proactive about trying to learn how to breach it”

Yes and no, if you think about APT types the server targets they are after are mainly intra not extra net and here MS still rules the roost in high value targets like lawyers etc (infrastructure high end engineering and high end finance still use *nix for “uptime” reasons).

Also, as you note further down, why go after the OS when the Internet-facing apps are so badly designed and implemented…

My personal view is that many *nix users and admins are much more switched on than their MS equivalents at any hierarchical spine point.

They generally show an independent train of thought and seek out answers themselves rather than waiting for those large organisations in Washington State etc to spoon-feed them solutions. This is especially true of those who did not major in CS but in other science/engineering/maths subjects.

heat death of the universe July 5, 2011 5:34 AM

“If and when the Banks get woken up by the legislators”

In which century will the average legislator have the understanding of the average reader of this blog?

Clive Robinson July 5, 2011 6:36 AM

@ Heat death,

“In which century will the average legislator have the understanding of the average reader of this blog?”

The answer to that question is down to educating “joe public” as much as the legislators. And this will of necessity involve the press.

And thereby hangs the problem: the press are in general ill-educated on matters technological and prone to look for any “angles” to push the story up the stack to the front page. In this respect they are often like the little boy who cried wolf.

Thus educating the general press is going to be the first uphill struggle, and the second will be stopping them sensationalising what is often trivia.

The legislators do react to press sensationalism, however only once or twice, and then usually badly. That is often because they don’t talk to respected experts in the field of endeavour, only to those who look and behave as the politicos do, or who come from Public Service Agencies with considerable vested interests. Further, when they do get beyond the three letter agency fiefdoms, they usually only get as far as the lobbyists of the major vested interests, who in turn only promote what protects their bailiwick.

It is only when the politicos’ position is threatened in some way, or there is a clear public outcry, that the legislators actually act for the voters, and then usually incorrectly and badly, for the aforementioned reasons.

So the answer to your question might be best answered “come the revolution”.

Nick P July 5, 2011 7:39 AM

@ Richard Steven Hack

“Nick: All wrong. First, Hiroshima and Nagasaki were done with the first couple of A-bombs. A few thousand more were built to allow the US to bomb most of Russia, first on bombers, then on nuclear subs. ALL the rest were built just to be paid for because everyone knows…”

Blah blah blah. Straw man argument because you changed your original claim, which was bullshit. Let me remind you what I replied to:

“It’s like the anti-nuclear people never understood that all those nuclear weapons were never intended to be USED – just PAID FOR.” (RSH)

Your claim, as stated, applied to all nuclear weapons or nukes in general. So, let’s look at history. The Manhattan project was started with the intention of producing a powerful WEAPON (not a financial instrument). The weapon was tested to ensure it would do the job. The weapon was used on two cities, killing hundreds of thousands of people. They were also preprogrammed to hit a bunch of cities if we detected a barrage coming at us during the Cold War, and a few incidents nearly had that happen. So far, the facts are against “never intended to be used.” Your argument might have been defensible if you said “most of the current arsenal was never intended to be used.” However, recent plans to use low-yield nukes to hit bunkers undermine even that. So, your first point was BS and your recent one is quite debatable.

“Anything you can come up with will be defeated by someone else, no matter how much effort that takes or what kind of end-run around your technology has to be done. History bears this out and doesn’t even come close to bearing you out.”

Bears what out? That low assurance methods will produce systems that continue to be broken? Or that my systems won’t be absolutely secure? I think this is another straw man because I usually talk about “high assurance/robustness” systems, not perfectly secure systems. I’ve also given definitions of the term several times on this blog: “Defeats attacks by sophisticated, well-funded, attackers with high confidence.” This means that they would have quite a hard time trying to compromise it and such a machine might go for years without a flaw being found. That’s a hell of a lot better than our current situation, ya think?

“You’re myopic. You’re too fixated on the hardware and software solutions to realize that that’s a very small part of security.”

You’re… missing the whole point of what I’m doing. OF COURSE there’s more to it than the hardware and software. The issue is called economics. This force makes specialization superior to generalization. My specialty is designing secure systems, especially the software side. Someone ELSE has the specialty of putting together the physical protection mechanisms, training the users, etc. I don’t know about the best alarm system, TEMPEST shielding, etc. If we’re talking about high assurance software in general, why would I specify all of that without a set of requirements or the necessary expertise? Makes no sense.

Besides, what good is all that shit if a 13 year old can remotely own your computer from 10,000 miles away using a kit someone else made? My designs focus on eliminating remote or software-based attacks, which are the majority today. Someone else with the necessary expertise can or will do the rest. This certainly happens when I implement one for a client. All things are considered.

“At best, you can only be “secure” to a certain degree against certain enemies for a certain time. ”

No shit lol…

“In other words, you’re not always the one with the superior firepower. This is especially true if you’re going up against a government.”

Red herring. Again, who’s even talking about that? The level of physical and personnel security is determined by the opponents’ resources, which vary case by case. Most of my designs are made to defeat online attacks. EAL7 defeats the very best in this regard. EAL7 also covers trusted distribution, installation, configuration and maintenance. History proves me out on this one: not one high assurance system has ever been compromised on record or failed to do its job during serious natural faults. And you can bet some customer would have complained if it did. An example in security is the Boeing SNS Server (also called “MLS LAN”), and in availability, an IBM System/390 mainframe going 20-30 years before a reboot is required. Wow…

“And the same is true of any system you can devise. Which means the systems on which it is deployed are not secure – there are still ways to end-run the security, usually by tricking the end user. ”

It’s not my problem. I don’t care about them. Like I told tommy, I design these things for people who do care about security and are willing to make the necessary sacrifices to immunize themselves against certain classes of attacks. That’s also the stance of the OpenBSD team, who inspired me to take that approach. If the users screw it up, it’s not my fault and it doesn’t make my designs any less secure. If anything, the number of times dumb users’ PCs are compromised and used to launch attacks gives me more reason to do my part in this.

“For those people, nothing you come up with will be sufficient forever. The best that can be done, again, is to “keep out the riffraff” and make it harder to penetrate your security. ”

Exactly. That’s what high assurance development claims to produce. The difference is that it actually produces it most of the time. 😉

Richard Steven Hack July 5, 2011 8:32 PM

@ Nick P

“Your claim, as stated, applied to all nuclear weapons or nukes in general.”

The two – “all nukes” and “nukes in general” – are not identical. My claim applied to the second. Q.E.D.

“The Manhattan project was started with the intention of producing a powerful WEAPON (not a financial instrument).”

History lesson no longer relevant since 1) my claim is established to be about nukes in general, and 2) I already discussed this part.

“So far, the facts are against “never intended to be used.” Your argument might have been defensible if you said “most of the current arsenal was never intended to be used.” ”

That’s what I said in my response.

“However, recent plans to use low-yield nukes to hit bunkers undermine even that.”

I was referring to strategic weapons, not tactical weapons, obviously. And it’s not even clear that bunker busters are intended to be used, since the effects of such weapons if used specifically against Iran or North Korea would likely include serious collateral damage and contamination as far east as India.

Once again, I submit these devices are mostly intended to be paid for, not actually used.

And certainly not all of them are intended to be used since the manufacture order is usually orders of magnitude higher than the likely number of targets. How many nukes did the US stockpile – and did Russia (and China combined) really have that many targets worthy of a nuke strike? I don’t think so.

“Anything you can come up with will be defeated by someone else, no matter how much effort that takes or what kind of end-run around your technology has to be done. History bears this out and doesn’t even come close to bearing you out.”

“Bears what out?”

What I said explicitly.

“I usually talk about “high assurance/robustness” systems, not perfectly secure systems.”

“High assurance” is one of those industry PR terms which are completely meaningless if it’s not measured in specific metrics – which in security terms means long and competent penetration testing by people other than the developers.

“I’ve also given definitions of the term several times on this blog: ‘Defeats attacks by sophisticated, well-funded, attackers with high confidence.'”

And this was tested by who? Those sophisticated, well-funded attackers were identified as who?

This is the idea that if it hasn’t been penetrated YET it works. That’s nonsense.

“This means that they would have quite a hard time trying to compromise it and such a machine might go for years without a flaw being found.”

And it might go two weeks.

“That’s a hell of a lot better than our current situation, ya think?”

Obviously – if it can actually be done – which is the point under discussion.

“You’re… missing the whole point of what I’m doing.”

No, YOU’RE missing my point completely.

“My specialty is designing secure systems, especially the software side. Someone ELSE has the specialty of putting together the physical protection mechanisms, training the users, etc. I don’t know about the best alarm system, TEMPEST shielding, etc. If we’re talking about high assurance software in general, why would I specify all of that without a set of requirements or the necessary expertise? Makes no sense.”

I’m not complaining about any of that. I’m complaining about your talking about your specific technology as being “high assurance” when in fact there is zero evidence to prove that it is, absent long and sustained success in the field actually repulsing competent attackers.

And I see zero evidence of that. Can you provide it? Just because some of the systems you cite have been fielded in a low number of government systems (which ones? Protecting what? From whom? And when were they attacked, by whom, and what was the outcome – certified?) does not prove those systems are either “high assurance” or anything stronger.

“In other words, you’re not always the one with the superior firepower. This is especially true if you’re going up against a government.”

“Red herring.” I was referring to your reference about bringing attackers into physical range. You initiated that part. Now you’re saying I’m conflating that with your emphasis on remote software systems. I didn’t – you did.

“Again, whose even talking about that?”

Your reference to hydroshock, I believe.

But beyond that, my point stands. My point being: no matter how well your “high-assurance” solution works against certain types of remote attacks, there are other ways to compromise a system. And I’m not even talking about something that requires a physical penetration. I’m talking about things like social engineering and even more extreme methods like kidnapping the guy (or the kids of the guy) who knows the info rather than digging it out of the computer itself (better, having him dig it out because he’s being blackmailed.)

So in response to what good is physical security if some kid can remote own you, what good is a “high assurance OS” when the guy running it is compromised?

It’s six of one and a half dozen of the other. If it’s not ALL covered adequately, there is no security. A security system is only as good as its weakest link – rule number one.

“Most of my designs are made to defeat online attacks. EAL7 defeats the very best in this regard. EAL7 also covers trusted distribution, installation, configuration and maintenance. History proves me out on this one: not one high assurance system has ever been compromised on record or failed to do its job during serious natural faults. And you can bet some customer would have complained if it did.”

Now we get to the meat of it. How many such systems have been deployed, to whom, to protect what – and most importantly, how do we KNOW they have never been compromised?

What you just said is that to date, historically, these systems HAVE BEEN invulnerable. That’s not the same as saying they will ALWAYS be or that they CAN be.

In other words, this historical evidence is essentially anecdotal (read: worthless).

Again, without real evidence of sustained attack over significant time by competent attackers against these systems, no security system can be regarded as “successful”. Success in the real world is all that counts.

“An example in security is the Boeing SNS Server (also called “MLS LAN”) and in availability would be an IBM’s System/390 mainframe going 20-30 years before a reboot is required. Wow…”

Uh, “availability” is not a security issue vis-a-vis penetration, so that isn’t even remotely relevant.

“It’s not my problem. I don’t care about them. ”

LOL.

“If the users screw it up, it’s not my fault & doesn’t make my designs any less secure.”

And we’re back to your being fixated only on the hardware and software, which was my point.

My other point was that no matter how good you are at devising such a system, someone else can defeat it. Which you still can’t refute, BY DEFINITION, other than to say, “Well, they haven’t beat it yet” – and that without providing any evidence to establish that anyone competent and well funded has actually tried.

“Exactly. That’s what high assurance development claims to produce. The difference is that it actually produces it most of the time. ;)”

Except when I talk about security not being possible, I’m not talking about “riff-raff”. I’m talking about competent motivated attackers.

It’s like terrorists. The only reason the US is still standing is because ninety nine percent of terrorists are incompetent, ill-equipped, and inadequately motivated, not to mention stupid.

I suspect the only reason your systems have not been compromised is that no competent attacker with a mandated objective has ever really tried. Or at least we don’t know for sure whether that is the case. If you do, cite the case if you can do so without breaching someone’s security. And even one case proves nothing – only a significant length of time (i.e., several years, longer probably isn’t feasible given technology changes) during which a number of competent attacks by more or less identified known competent and motivated attackers were identified and repulsed in the real world would establish actual “success”.

I submit there is no such proof.

Richard Steven Hack July 5, 2011 8:48 PM

Clive: ME: "… only be vulnerable to a zero day, and very few hackers are searching for and finding zero days on Linux."

YOU: “That I’m not sure on for the simple reason most admins only know their machines are owned when abnormal things happen.”

I’m basing my opinion on what I’ve read about hacker emphasis. It just doesn’t seem like most hackers are really trying to penetrate Linux given the preponderance of Windows and the relative ease of penetrating Windows.

And as you say, I think UNIX sysadmins are more likely to be aware of a compromise than a Windows Server “mouse monkey”. You just don’t hear about a lot of Linux server compromises – outside of the Web facing side at least.

“If the more intelligent black hats have woken up to the ROI on APT as oposed to losses on DDoS / SPAM bots then well I’ll let you join the dots (the Chinese and Russians already appear to have from what I can see ;)”

I’m sure that’s true. But again, the profit is in Windows systems since Linux personal systems are only 1-3 percent of the market, and Linux server systems are last I heard maybe 30% (with UNIX taking up another 20% or so).

“That will only change if Linux gets more desktop share.”

“That applies more to low hanging fruit criminals and script kiddies, after a small return at best.”

True. But if Linux had a 30% desktop share instead of a 3% share, there would be more exploits devoted to penetrating Linux in order to create the same botnets as are based on Windows. And that would drive more Linux penetration research on the part of hackers.

“If and when the Banks get woken up by the legislators then I expect the position will change as the nice ripe peach low hanging fruits of EBanking in small businesses will (hopefully) disapear.”

I also doubt that's going to happen, or at least not any time soon. Even if the Feds mandate more secure e-banking, it will take time for small businesses to convert over, and a lot might never. If the Feds mandated something that cost SMBs a lot of money (rather than some $30-100 device like we were discussing in an earlier thread) and required modifying their normal business practices, you can bet it would be resisted mightily even if it would reduce such losses.

“Yes and no, if you think about APT types the server targets they are after are mainly intra not extra net and here MS still rules the roost in high value targets like lawyers etc (infrastructure high end engineering and high end finance still use *nix for “uptime” reasons).”

True. But APT is designed to bypass the external OS defenses entirely. So it almost doesn’t matter what the OS defenses are. If you can get someone to drop something in their Linux personal account that allows you to capture their domain resource access passwords, you can access any data they have access to. No need for root or even logon access at all. Same with Windows. All you need to do is identify the OS to determine what applications they’re using and look for an unpatched application flaw. The OS itself doesn’t matter except to indicate which application version to exploit.

Nick P July 6, 2011 4:07 AM

“The two – “all nukes” and “nukes in general” – are not identical. My claim applied to the second. Q.E.D.”

Yeah, afterwards. At first, it was an overly broad claim without a context. Anyone can say… after the fact… that they implied or meant to say something that made a then-poor argument make sense. Like you did. Yet, your first argument was still wrong and your recent one not nearly as open-and-shut as you believe it to be.

“”High assurance” is one of those industry PR terms which are completely meaningless if it’s not measured in specific metrics – which in security terms means long and competent penetration testing by people other than the developers.”

Not quite. I see a problem here with this and many other parts of your posts. You don't seem to know much about why these high-assurance designs, such as EAL7 or DO-178B ones, inspire so much confidence. Testing? PLEASE! EAL7 uses formal (mathematical) verification to prove correspondence because "testing can only prove the presence, not the absence, of bugs" (Dijkstra). And they use requirements-to-design-to-code correspondence proofs, source testing, reviews, code analysis, covert channel identification, pen testing by top notch attackers, an independent lab doing the same thing, and years of effort for the evaluation. The result is usually that, whatever its security target said it would do, it WILL do almost EVERY time. By "almost," the industry norm is to say that there will not be a critical failure in the field for a long time, if ever. History proves this out.
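
A minimal sketch to make the testing-versus-proof contrast concrete (my illustration, not how any EAL7 toolchain actually works): random testing samples the input space, while verification has to cover all of it. For a toy function the domain is small enough to enumerate outright:

    import random

    def sat_add(a, b):
        """Toy 8-bit saturating add: clamps at 255 instead of wrapping."""
        return min(a + b, 255)

    def spec_holds(a, b):
        """The toy's 'security target': result stays in range and never
        drops below either operand."""
        r = sat_add(a, b)
        return 0 <= r <= 255 and r >= a and r >= b

    # Testing: shows the presence of bugs at the sampled points only.
    assert all(spec_holds(random.randrange(256), random.randrange(256))
               for _ in range(10000))

    # Coverage of the entire (finite) domain: no input can violate the spec.
    # Real systems have domains far too large to enumerate, which is why
    # EAL7-class efforts use mathematical proof over a model instead.
    assert all(spec_holds(a, b) for a in range(256) for b in range(256))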

The bigger question in my mind is why you, a smart guy, don't seem to know much about this and just talk about testing & reviews that, as you describe them, sound like basic code reviews, not strong mathematical techniques. Why do you see no "historical" evidence or "metrics?" There are TONS of papers showing metrics, cost analyses, effectiveness of verification, and even defect counts of certain processes. How could you miss all of this?

The answer: Richard Steven Hack doesn’t believe in security and for this reason the only thing he looks for is evidence that there is no security. You have no shortage of stories about how this failed, how these lawmakers schemed, etc. While I cite both security successes and failures, you only seem to post about failures of security claims. Anyone looking for the opposite would have found the likes of Cleanroom, Fagan’s Software Inspection Process and Praxis’ Correct by Construction. These pop up in lots of places and show very low defect counts measured with mainstream statistical techniques.

You might have also found things like the LOCK program that broke down specifically what the extra high assurance techniques accomplished, among other examples. You apparently found none of this. It's there. I've mentioned the names of these methodologies plenty (and most are at the "medium" end in spite of these great metrics). So, why do you still say "PR terms which are completely meaningless if it's not measured in specific metrics" and "And this was tested by who? Those sophisticated, well-funded attackers were identified as who?"

Try 2 to 5 years of NSA, SAIC and Cygnacom's best people. Try products that withstood attacks for years with no signs of compromise, after source & design level validation couldn't find anything useful either. Try products, like Praxis', where the defect rate was 2 1/2 times better than the Space Shuttle control code and no serious defects were found. Cleanroom empirical studies, which you'd have seen had you Googled for this stuff, consistently show low defect density and many fielded systems that had no significant problems far as the users were concerned. (User-centric reliability & meeting requirements is usually the goal of Cleanroom, hence that focus.)

Why do I mention the mainframe? Availability is part of security, especially where fuzz testing and online attacks are concerned. 20 years of both accidental and malicious problems without a reboot is significant evidence for my claims that certain engineering processes can produce systems that meet their goals with high confidence for long periods of time w/out failure. The old Orange Book A1 class systems and current EAL7 class systems just apply an even more extreme version of these principles to achieving confidentiality and integrity (read: security in their context). Strange as it seems, these are easier to verify with high assurance because they can be easily modeled in mathematics (e.g. Bell-LaPadula, Biba, lattices & type enforcement).

“I’m complaining about your talking about your specific technology as being “high assurance” when in fact there is zero evidence to prove that it is absent long and sustained actual success in the field actually repulsing competent attackers.”

I promote approaches to high assurance: principles, methodologies, design approaches, etc. I also call certain products, like GEMSOS or INTEGRITY-178B, high assurance with regard to what their security targets claimed to accomplish, the evaluations they survived, and what they accomplished in the field. My designs are made to be consistent with high assurance design principles and certifiable to those levels if a high assurance development process creates a concrete design, implementation, etc.

So, you could say the designs have the potential to be developed and certified to high assurance, but they aren't high assurance by themselves. However, most designs people present have problems that prevent them from being certified to high assurance, esp. complexity. Also, my designs' use of secure design principles (see Saltzer & Schroeder, Parnas, Schell, etc.) certainly eliminates many classes of attacks. Like I often say, eliminate the low-hanging fruit first and raise the rest much higher.

“My point being: no matter how well your “high-assurance” solution works against certain types of remote attacks, there are other ways to compromise a system. ”

Again, WHO CARES. If we can't get the system itself right, then those methods never have to be used. Attackers can always just do a few keystrokes and be on their merry way. Of course, extreme physical attacks would defeat such systems. Guess what? Systems designed to deal with that are different and take that into account. But, the vast majority of attackers WILL NOT "kidnap," get face-to-face with, have a shootout with, or try any other physical attack on their victim. The majority of attackers do what they do because the risk is low, the battle is easily won, and the expenses are minimal. (Carders, drive-by downloads, ACH fraud, and others come to mind.)

Change these things and you defeat the majority of attackers. If they get physical, newsworthy exploits just give me all the market demand I need to come up with something that beats that too for most physical attackers. Most won’t get physical though: they just drop out of the race and the damage done drops that much. The vast majority of attacks right now take place due to poorly configured discretionary controls that allow simple social engineering attacks and software that sucks. High assurance designs with mandatory controls raise the bar so high that few would even attempt to attack them. That translates to significantly lower risk for businesses and governments.

“And we’re back to you’re only fixated on the hardware and software which is what my point was.”

You’re point was that that was a bad thing. That’s like saying a lock designer is “fixated” on designing locks and this is bad because people might loan bad people keys. Tell that to one and he’d laugh at you. He’d tell you, as I am now, that the purpose of this focus is to give users the tools they need for a specific part of their security and guidance on proper usage. Users choose what to do with them. I’m not trying to give security to people who don’t care about it: they’ll always subvert themselves for the most part. I’m creating the tools and techniques that responsible people can use to improve their security. Others create different tools and approaches. They are combined for overall security. There’s no “fixated:” there’s just “We each do what part we’re good at and try to make our parts work together to achieve security as a whole.”

“I suspect the only reason your systems have not been compromised is that no competent attacker with a mandated objective has ever really tried.”

I keep thinking you only say this stuff because you failed. If I recall correctly, you got caught. So many of us were smart enough to dodge those that chased us. The statute of limitations has passed… no guarantee, there. Back when I was younger, I humiliated plenty of admins, property managers and occasionally political bigwigs. I caused no actual damage to decent people. The high end opponents, who I think you’ll know, apparently didn’t pursue me en masse. (Luck or proper choice of targets?) Those that tried (and we certainly detected them), couldn’t find jack because we were good at dealing with that. Paranoia pays off sometimes I guess.

Academics have replicated some of my methods in theory and prototype, but they still work in practice due to the lag between academic theory and widespread adoption. Anyway, just because you weren’t good enough doesn’t mean all others will fail. “There [truly] is no security” for Richard Steven Hack. I can believe that. But, many of us have accomplished far more in practice, maybe not forever. We instead say “practical high security is hard and painful to accomplish, but possible.” You just seem to be projecting your own failures on everyone and everything else. That’s what I keep feeling.

jacob July 6, 2011 9:53 AM

@nickP, man you make my head hurt. This thread is going to make me research for days. My understanding, condensed, would be the basic premise that operators and admins screw up in the execution and operation of systems. There is no perfect security. You weigh the threats, act accordingly and have lots of backup/recovery plans. Oh, and software is written poorly. If a professional wants in your system they can. Anybody can read this stuff from Clive, you, or Bruce and many others. The problem is it is not applied. There should be licenses to own a computer.
That goes out to all the "family" IT people out there. "What did you do?"

The ugly stepchild of access control is physical security. I have seen spaces that a two-year-old could kick in through the wall (card readers, but no latch guard). It's enough to make you scream, as are "professionals" who don't think twice about using unsecured wifi at Starbucks.

For me, the more I learn, the more I have to admit how little I know. 😉 Bowing to the greats in this thread.

Miriam R July 6, 2011 10:21 AM

[warning – n00b post]
I’m a little surprised that no one here has mentioned the Qubes Project. Given the discussion of trusted computing and sandboxing, evil USB and access to MBR from within the OS, it seems that Qubes would be part of the solution.

Yes, the trust chain starts with Intel TPM, but it’s a start at creating a functional desktop for end users that has isolation built in.

jacob July 6, 2011 1:16 PM

@clive my guess that would depend on whether you wanted to keep people in vs. out?? Just stirring it up. 😉

Richard Steven Hack July 6, 2011 2:36 PM

Nick: “The two – “all nukes” and “nukes in general” – are not identical. My claim applied to the second. Q.E.D.”

“Yeah, afterwards. At first, it was an overly broad claim without a context. Anyone can say.. after the fact… that they implied or meant to say something that made a then-poor argument make sense. Like you did.”

And anyone can misunderstand the first statement. As you did.

OBVIOUSLY I have never believed that ALL – i.e., every single one – of existing nukes were never meant to be used. Since two WERE used, that is a historical fact. It’s irrelevant to my argument.

You’re just complaining now because your response was overly broad and irrelevant.

“Testing? PLEASE! EAL7 uses formal (mathematical) verification to prove correspondence because “testing can only prove the presence, not the absence, of bugs” (Dijkstra). And they use requirements to design to code correspondence proofs, source testing, reviews, code analysis, covert channel identification, pen testing by top notch attackers, an independent lab doing the same thing, and years of effort for the evaluation.”

Nonsense. Yes, I said nonsense. There is no way a sufficiently complicated system of that type can be formally proven. It’s too complicated.

The only relevant part is your "pen testing by top notch attackers". The problem with that is those attackers, while being reasonably well motivated, are by definition NOT as well motivated as the people who REALLY want to get into that system, not because they are paid a salary but because they are either criminally or ideologically motivated.

I repeat – only with a real-world history of resisting ACTUAL attacks by ACTUAL attackers – not people pretending to be attackers – can one assume one's security is – for the moment – adequate. The real world is the ONLY validator.

“Why do you see no “historical” evidence or “metrics?” There are TONS of papers showing metrics, cost analyses, effectiveness of verification, and even defect counts of certain processes. How could you miss all of this?”

Because I have other things to do than read them. I know they exist, that is sufficient.

What I do NOT know exists is formal proof of a complex operating system. It is my understanding that formal proofs have only been applied to relatively small systems, single programs, and the like. I do not believe – correct me if I’m wrong – that anyone has applied a formal proof to a multi-million lines of code operating system. If you can cite where that has been done, I’ll take your word for it.

“The answer: Richard Steven Hack doesn’t believe in security and for this reason the only thing he looks for is evidence that there is no security.”

Because that is what you do with security – look for the failures and potential failures. Because if you have failures, you have no security. Again, Q.E.D.

You have the mindset of a designer. I have the mindset of a criminal penetrator. And that is precisely the guy you have to worry about when designing your stuff. That is the guy you need to have testing your stuff – not some mathematician doing a proof.

“show very low defect counts measured with mainstream statistical techniques.”

And they’ve never formally proven a multi-million line OS to be mathematically correct. Am I wrong?

“Try 2 to 5 years of NSA, SAIC and Cygnacom’s best people. Try products that withstood attacks for years with no signs of compromise, after source & design level validation couldn’t find anything useful either.”

Once again – deployed where, protecting what, and attacked by whom – external attackers, not government guys – and when?

“many fielded systems that had no significant problems far as the users were concerned. (User-centric reliability & meeting requirements is usually the goal of Cleanroom, hence that focus.)”

And once again, you’re blowing smoke up my ass with generalities about “reliable systems” which is irrelevant to my question. You still haven’t cited one single system deployed to protect something of importance that actual attackers have repeatedly attempted to access which has withstood such attacks for a minimum of say, three years without a single compromise – and that success has been verified, not just guessed at because no one detected it.

“Availability is part of security”

No, it is not.

“20 years of both accidental and malicious problems without a reboot is significant evidence”

No, it is not. Not without direct evidence that malicious attacks by competent attackers attempting to obtain something of value were verified not to have been successful. Again, this is the ONLY criteria which can be applied to prove success.

“I also call certain products, like GEMSOS or INTEGRITY-178B, high assurance with regard to what their security targets claimed to accomplish, the evaluations they survived, and what they accomplished in the field.”

And the only relevant part is “what they accomplished in the field”. Which means you list the attacks that were made against them, the nature and competence of the attackers involved – again, real world attackers, not pen testers – and verify that not one of those attackers ever made it into the system over a period of at least three years, if not longer. Cite that and I will believe that your systems are “high assurance” – for now.

"My designs are made to be consistent with high assurance design principles and certifiable to those levels if a high assurance development process creates a concrete design, implementation, etc."

Yeah, yeah, blah, blah. Answer the question: where was it deployed, what was it protecting, who attacked it, and is it certified that it was never ever penetrated for at least three years?

“However, most designs people present have problems that prevent them from being certified to high assurance, esp. complexity.”

Exactly what I said above. No one has proven a multi-million-line operating system to be provably correct – and it's unlikely, outside of an AI approach, that anyone ever will.

“If we can’t get the system itself right, then those methods never have to be used.”

Never said one shouldn’t try. Although there are limits to how much money and effort should be expended to try, depending on the value of what one is trying to protect. Just don’t believe you succeeded until you have years of actual real world success against real world opponents. If a system remains un-compromised until it is retired for a new and better system, then you can claim it was a complete success (IF in fact it was attacked repeatedly – a system that was never attacked or only attacked by “riff-raff” is not a success on the level we’re talking about.)

“High assurance designs with mandatory controls raise the bar so high that few would even attempt to attack them. That translates to significantly lower risk for businesses and governments.”

Once again, this is irrelevant to my point. My point is about committed, high competence attackers, not riff-raff. You don’t spend millions or billions on high-assurance systems to stop script kiddies.

“I’m creating the tools and techniques that responsible people can use to improve their security.”

I’m not talking about that. I’m talking about your assumption that because you design such systems that you believe they cannot be compromised by committed, high competence attackers. I’m telling you that isn’t true.

And there are a hell of a lot of supposedly "unbreakable, un-pickable" locks being defeated daily – or bypassed altogether.

“There’s no ‘fixated:'” The fixation comes when you believe that any given part of security is BY ITSELF unbeatable. Which is precisely what you are claiming when you say these “high assurance” operating systems have done twenty years without compromise.

“I keep thinking you only say this stuff because you failed. If I recall correctly, you got caught.”

I know precisely how I got caught doing a physical crime. I also know that if I had done a little more research – in fact, if I had come out of a criminal milieu where other criminals could have informed me of the proper methods for dealing with radio transmitters – I would have succeeded. Which means I would have defeated a technological means which the bank and law enforcement believed made them more secure but which in fact could be easily evaded and in fact is frequently so evaded.

“Paranoia pays off sometimes I guess.” It does, indeed.

“Anyway, just because you weren’t good enough doesn’t mean all others will fail.”

It’s funny that you’re citing my point back to me. Just because your systems may have worked for some people against some attackers doesn’t mean all other attackers will fail. That is the whole point: you can NEVER know that you have security until the day you drop dead – from natural causes 🙂 – without having been defeated.

Not even Miyamoto Musashi achieved that. Legend has it a ninja with a fan defeated the greatest Japanese swordsman who ever lived. Which is perfectly relevant to what we're talking about – someone with the technological equivalent of a fan might defeat your "high assurance" sword if they're clever enough.

“We instead say ‘practical high security is hard and painful to accomplish, but possible.'”

And I don’t say it’s totally impossible to have “high” security. My meme says this explicitly: You can haz worse security. You can haz better security. But you can’t haz security.” Because there is no absolute known as “security”.

And the reason for that is what could be called the "uncertainty principle." Or simply, "shit happens". As I said elsewhere, stop trying to control the universe. Because you can't. Instead, concentrate on dealing with what happens.

This is a principle long known in martial arts. You don’t master a few techniques and then use them to deal with everything. You learn to merge with what’s happening and do what is necessary to turn it to your advantage.

“You just seem to be projecting your own failures on everyone and everything else. That’s what I keep feeling.”

Unfortunately, your feelings aren’t relevant here. I make an argument which is based on logic and the real world of security and the history of actual attempts to achieve “security”, not theoretical proofs.

“Bowing to the greats in this thread.”

That would be me. 🙂

Nick P July 6, 2011 4:53 PM

@ Richard Steven Hack

It’s starting to become clear this is a religion for you, not a science. Less worth a debate. I’ll address a few points.

“Nonsense. Yes, I said nonsense. There is no way a sufficiently complicated system of that type can be formally proven. It’s too complicated.”

You have proof, right? As opposed to a statement of faith that there is no possibility?

“The only relevant part is your “pen testing by top notch attackers”. ”

Utter nonsense. Design & code reviews by pros catch most of the problems in a rigorous design process, as does internal testing. People like Smith, of the LOCK program, also reported that merely using a formal specification eliminated many problems just because it forced them to be unambiguous & made the design more straightforward. Covert channel analysis also found tons of leaks & choked attackers further. The "new" Jitterbug & DNS attacks would have been caught that way. Finally, numerous project leaders, esp. on SCOMP & LOCK, noted that using formal proofs found flaws in the security model and implementation.

Then, they started the pen testing. Even an amateur developer would laugh at your claim that only pentesting is important. The previous methods force a system to be correct, with high likelihood, right out the door & prevent defects all throughout. It’s no surprise that subsequent evaluations & pen testing never found any significant problems.

“Because I have other things to do than read them. I know they exist, that is sufficient. ”

Paraphrased: I know there exist papers that contain empirical evidence about these claims, but I'm not going to read them & will instead find evidence supporting my own viewpoint. Reminds me of creationists debating style…

“What I do NOT know exists is formal proof of a complex operating system. It is my understanding that formal proofs have only been applied to relatively small systems, single programs, and the like.”

The first sensible thing you've said! Absolutely. There are efforts right now aimed at systems with a million lines of code. However, the traditional approach is to use these techniques to make a big system verifiable: layering; decomposition into small modules; precisely specified interfaces and interactions; then modeling of each module to prove its correctness, then using those results & a model of component interactions to prove system correctness as a whole. Real-world examples: GEMSOS's kernel was a bunch of non-looping layers & interactions were verified; BLACKER VPN modeled each component as a system and then the overall VPN combined those results into a network-level model.

So, like with any complex system engineering, it's all about turning a big problem into a bunch of little ones, solving the little ones, and tying it all together. It's been done plenty of times and the reports often give very specific metrics. More recent reports, like L4.Verified, are careful to say when an assumption is made & exactly what the proofs apply to. Again, it's why designers try to keep the TCB to a minimum so the formal processes are focused on that.
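
As a toy illustration of that decomposition strategy (mine, not drawn from GEMSOS or BLACKER): discharge a local obligation for each small module over its own finite interface, then compose the local results into the system-level property:

    def encode(n):
        """Module 1: 16-bit value to big-endian wire format."""
        return bytes([n >> 8, n & 0xFF])

    def decode(b):
        """Module 2: wire format back to a 16-bit value."""
        return (b[0] << 8) | b[1]

    # Local obligation for module 1: output is always exactly two bytes.
    assert all(len(encode(n)) == 2 for n in range(65536))

    # Composed, system-level obligation: decode inverts encode over the
    # whole interface domain. A formal effort proves this rather than
    # enumerating it, but the shape of the argument is identical.
    assert all(decode(encode(n)) == n for n in range(65536))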

“Because that is what you do with security – look for the failures and potential failures. Because if you have failures, you have no security”

No, that’s one aspect of security. Another is thinking in terms of information flow and transformation, which is the original model of security (i.e. Shannon). Every effective access control mechanism tries to control how information flows, the process management mechanism tries to isolate information into compartments, & control of write privileges ensures integrity of information. So, it’s all about controlling information. The failures just taught us about approaches that didn’t work. You also have to look at successes & there have been many.

Reusing a successful approach against another instance indicates likely success. The Bell-LaPadula model, for instance, has been ensuring multi-level security for two decades now. It will probably work in the next system. A type-safe, memory-managed language defeats buffer overflows at the application level by design because they are logically impossible. (We focus on the VM & libraries instead.) So do I have to pen test my next Java application to ensure each of its variables won't overflow? Or can I trust its design and previous successes to tell me that? So, there's definitely more ways to prove & produce correctness without just thinking of failure. That should be just one element in the process.
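
For anyone who hasn't met Bell-LaPadula, here is a minimal sketch of the two rules it enforces, "no read up, no write down". Levels are bare integers here; real implementations use a full lattice with compartments:

    UNCLASSIFIED, SECRET, TOP_SECRET = 0, 1, 2

    def can_read(subject_level, object_level):
        # Simple security property: a subject may only read objects at or
        # below its own level (no read up).
        return subject_level >= object_level

    def can_write(subject_level, object_level):
        # *-property: a subject may only write objects at or above its own
        # level (no write down), so secrets can't leak into lower levels.
        return subject_level <= object_level

    assert can_read(TOP_SECRET, SECRET) and not can_read(SECRET, TOP_SECRET)
    assert can_write(SECRET, TOP_SECRET) and not can_write(TOP_SECRET, SECRET)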

“No, it is not. Not without direct evidence that malicious attacks by competent attackers attempting to obtain something of value were verified not to have been successful. Again, this is the ONLY criteria which can be applied to prove success.”

“And the only relevant part is “what they accomplished in the field”. Which means you list the attacks that were made against them, the nature and competence of the attackers involved”

The first problem with these repetitions is that you deny every measurement I give and yet you fail to give any practical alternative. How exactly does one look at a system, know it's had an advanced compromise, know what type of attacker did it, and know how often it's happened over three years? You've simply set the criteria such that it's impossible to provide evidence you would accept.

My alternative is easier. You talk about failure modes. Let's apply them to the mainframe. How would an attack happen? An application-level attack that gets a new process started, possibly crashing the old app? A ton of extra network traffic leaving the mainframe for an unusual IP? A bunch of login attempts? There are only so many general and specific avenues of attack, and most get noticed, as the attacker probably doesn't know the system and networking configuration ahead of time. That no admins report a compromise and the system stayed stable for 20 years indicates it resisted whatever was thrown at it.
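
A toy version of the detection logic being appealed to here, flagging bursts of failed logins from one source. The log path, marker phrase and line format are assumptions for illustration; real mainframe auditing is far richer:

    from collections import Counter

    FAILED_MARKER = "authentication failure"   # hypothetical log phrase

    def failed_login_sources(log_path, threshold=10):
        """Return sources with at least `threshold` failed logins."""
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                if FAILED_MARKER in line:
                    # Assume the source host/IP is the last field on the line.
                    src = line.rsplit(" ", 1)[-1].strip()
                    counts[src] += 1
        return {src: n for src, n in counts.items() if n >= threshold}

    print(failed_login_sources("/var/log/auth.log"))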

As for GEMSOS & INTEGRITY-178B, they were deployed in the field protecting highly classified information with no evidence of a compromise. GEMSOS was an MLS system, so any residual covert channel only let 10 bits per second through & was audited. Three years' field use? Try ten for INTEGRITY-178B. It got its EAL6+ certification in 2008, after 2 years of pen testing. It's 2011 and still no failure. Boeing's SNS was certified to high assurance by NSA in 1994, with every improvement making the grade again. It's been deployed to protect high value assets from hackers for 17 years now without a compromise & they're doing an EAL7 evaluation just in case.

These systems were built with so-called Correct by Construction approaches and, like you said, what happens in the field says a lot about the approaches' claims. Medium assurance approaches have achieved similar success. MULTOS was targeted at a lower level of assurance, is deployed in tons of smart cards, and nobody has compromised the OS. OKL4 4.0, medium assurance, has been deployed in hundreds of millions of phones without a reported compromise or OS-related failure. BAE's XTS/STOP line reused the SCOMP OS & built a medium to high assurance platform on it. Originally certified to EAL6/B3 in 1992, it's got no public compromises and currently protects at least 200 sites according to their web site.

So, we have several systems using the approaches I advocate that have survived over ten years in the field. So, yes, these approaches obviously work, it has been done in the 20th century, it can be done now with superior technology, and the systems are usually trustworthy for years afterward. It's not absolute, it's not perfect, and it's not the only part of security, but it's there and does its part.

Repeating “there is no security” repeatedly while ignoring evidence of high security in certain contexts doesn’t change the truth: in all likelihood, you nor anyone else will ever hack something designed like this & if you do it will take years. That’s what I mean when I usually say “secure,” because I honestly get tired of typing “high assurance.” History proves me out, as I’ve illustrated. We can’t take this approach for all systems. We can do it for the core functionality of critical systems, networks, etc. Pessimism with religious zeal is a poor excuse not to apply methods proven to work to do what we can.

tommy July 6, 2011 8:48 PM

@ Richard Steven Hack and Nick P.:

Wow, this blog has always had high information value. Who knew it had high entertainment value as well? ;-D

Seriously, it’s good to see these things hashed (no pun intended) out, and the more links and potential search terms of what’s out there, the better.

@ Nick P.:

“Reminds me of creationists debating style…”

Did you mean, a group of creationists sitting around, watching Bravo, and talking about next season’s fashions? …
hmm, no, I think you meant,

“Reminds me of creationists’ debating style…” i. e., the debating style used by creationists. 🙂

Of course we all knew what you meant, and it’s a blog, not a legal contract. But it’s a cool example — not at your expense; everyone does it, self included, at blogs and such — of why I’m a rigid “prescriptivist” in grammar, derided by modern “descriptivists” as being old-fashioned or OCD or anal or whatever. One single apostrophe, to show possessiveness, or the lack of same, completely changes the literal meaning.

People who write code for a living understand that, then turn around and write their English with no such care or respect for syntax. I’m not talking about you or me or this blog, but about professional, academic, scientific, didactic, or other “serious” works. (Which is why I’ve offered proofreading service several times.) OK, you guys have had your rants; now I’ve had mine. 🙂

Speaking of the creation/evolution debate, here’s my satiric take on the whole thing:

“Where Do I Begin? (Theme from “Love Story”)” by Andy Williams =

“Where Did Man Begin? (Creation / Evolution)” by Your Humble Servant, Tommy Turtle:

http://www.amiright.com/parody/70s/andywilliams5.shtml

jacob July 6, 2011 9:15 PM

@NickP Hopefully not stirring up a hornet’s nest.
1. People who are pessimistic may be guilty of assuming that because something "can" be defeated, a security system has less than perfect value.
2.They don’t understand the concept that security is not perfect and never will be. You evaluate the risks, value of what you are protecting, and how much loss you are willing to swallow.
3. Security is more than computer code. It involves physical security, access control, user rights, audits, pen testing (yes i said it), and many other things.
4. If a server space has physical security with readers, electric strikes, a heavy-duty door, etc., but no extended wall above the drop ceiling, it's a security risk. And if users barely have the ability to operate the mouse, chances are they can send a confidential document by mistake.
5. It’s like putting a camera in place, the thief knows it and wears a mask or that snatch and grab. It’s pretty hard to defend against in some cases.
6. The security industry is pessimistic by nature, but we risk blaming customers or each other for the lack of a "perfect" solution that does not exist and never will.
7. I am a noob so to speak, but always trying to learn, and I realize that there are many bright and more experienced people than me out there. I try to talk and listen to them. I know how Bruce feels about certs, but I would certainly give someone with a GSE and a proven track record the job of evaluation, or hopefully he/she could help with the project for an enterprise deployment.
8. Sometimes I have seriously considered taking cd drives out, disabling the USBs and putting resin in the holes. Less chance for mischief. I have VERY strongly advocated encryption for practically anything. esp. notebooks and usb devices. walling off info, and procedures.
9. Most of the time it is not how much I know or even remember that is important. It is a matter of who I bring in to project and who can explain to the customer.
10. I have thought for years that say 10 small businesses could pool their resources and hire an IT security team or contract one for the group. They may not be able to afford a crack team (pun intended) alone but together could do it. Just my thoughts.

tommy July 6, 2011 9:33 PM

@ Jacob:

“8. Sometimes I have seriously considered taking cd drives out, disabling the USBs and putting resin in the holes. Less chance for mischief. I have VERY strongly advocated encryption for practically anything. esp. notebooks and usb devices. walling off info, and procedures.”

Don’t take out the CD drives. We had an extensive discussion about raising the security of online banking,

http://www.schneier.com/blog/archives/2011/06/court_ruling_on.html

and the idea of booting from a live CD, which is non-writeable and hence non-infectible, came up a good bit. Also, tools like Acronis create full-disk-image backups and emergency boot CDs that can reboot a totally hosed, non-bootable computer and restore it to a pre-infection state, if that date is known. Other CD tools can remove malware that Windows or whatever can’t even see while it’s running.

I’ve recommended Sandboxie for unknown USB sticks, optical discs, and all browsing (as I am doing right now), with appropriate disclamers (not connected with the company; I can’t be liable for your results). Low-cost, low-tech, available now. Not perfect, but a quantum (no pun) leap in average-user security.

Nick P July 6, 2011 10:33 PM

@ jacob

“hopefully not stirring up a hornets nest”

Hardly. Richard and I often focus on the opposite extremes of the situation. I started firing posts at him because he was basically ignoring, not reading, etc. any evidence against his claims. People rarely do that on this blog, so the discussions are more civil. Another reason for the content in my posts was to provide useful, verifiable anecdotes for anyone else reading our posts who wanted to know what has been achieved and what’s achievable.

“3. Security is more than computer code. It involves physical security, access control, user rights, audits, pen testing (yes i said it), and many other things.”

Oh, absolutely. Like I told Richard, we all have our specialties. Mine are system analysis, pentesting (surprise!), software security, system-level security, subversion and covert channels. I have medium knowledge of most of the other issues, maybe just a bit more or less than a CISSP or GSEC cert requires. Real implementations involve physical and information security. I also have to tell users about proper usage of the system, what risks they might face, illustrate them (seeing is believing), and tell them how to respond to each. This must also be codified in policies for legal reasons. There’s testing, maintenance issues, etc. There’s usually a team of us doing all of that.

These are also all required to be addressed in a robust way for a high assurance product evaluation. Just look up Orange Book A1 & Common Criteria EAL7 requirements if you doubt this. But, my focus is on my specialties. If my designs were productized, all the other stuff would be addressed, probably by domain experts. Would you really want a guy like me to try to give advice on what constitutes “high assurance” physical security? Nah, better to leave that to someone else. 😉

“Sometimes I have seriously considered taking cd drives out, disabling the USBs and putting resin in the holes. Less chance for mischief. I have VERY strongly advocated encryption for practically anything. esp. notebooks and usb devices. walling off info, and procedures.”

All good ideas, and not as paranoid as you think. Executives have been known to pull batteries out of their cell phones during confidential meetings and physically plug USB ports. Matter of fact, I have done exactly what you described: removing potentially buggy devices; disabling DMA on IDE with a jumper change; clogging up ports; buying a Core i7 and disabling all but one core, among other BIOS options. The last might leave you puzzled, but most paradoxes have an answer: i7s had fewer security-critical processor errata (see Kris Kaspersky's CPU bug presentation) and the use of multiple cores sharing one cache allows covert leaking of information. So, a one-core i7 kills two birds with one stone.

But, for a general-purpose PC, tommy is right in recommending you keep the CD-ROM. So long as your BIOS is intact, a CD is one of the cleanest ways to boot to a secure state. Might be to bank online, restore your computer, whatever. The important point is that read-only, bootable media is inherently more trustworthy than something your software can change.

“Most of the time it is not how much I know or even remember that is important. It is a matter of who I bring in to project and who can explain to the customer. ”

Yes, user or customer acceptance often makes or breaks a security opportunity. The higher the cost, the more justification is required. To make it even harder, lay people naturally have a hard time understanding these kinds of risks, and the security industry's F.U.D. makes it hard to earn customers' trust.

“I have thought for years that say 10 small businesses could pool their resources and hire an IT security team or contract one for the group. They may not be able to afford a crack team (pun intended) alone but together could do it. Just my thoughts.”

tommy, he’s stumbling on one of my business models. 😉 The concept is doable. My previous thinking about these ideas led me to think about creating a nonprofit that targeted small businesses and non-profits. For a certain membership fee, they get access to the services based on need. There might be a higher fee the first year to cover the extra costs. The services end would focus on the core services that provide the highest security ROI, like secure configuration of critical servers or apps.

The nonprofit might even be started with grant money & offer to advertise which big companies are “helping small businesses secure their network.”

tommy July 7, 2011 12:24 AM

@ Nick P.:

“I have thought for years that say 10 small businesses could pool their resources and hire an IT security team or contract one for the group. They may not be able to afford a crack team (pun intended) alone but together could do it. Just my thoughts.”

“tommy, he’s stumbling on one of my business models. ;)”

And on one of mine, proving once again that GMTA. You’ll recall, I hope, that in discussing the Bank-Only Live CD idea, and the Bank-Only Secure VPN idea, I mentioned that the more banks adopted any given idea, the more cost-efficient it becomes, as third parties or inter-bank joint ventures could achieve economies of scale in production, maintenance, etc.
Those particular ideas may have not survived the cut, but the idea of pooling to share expense surely did?

Anyway, I think your idea is superb — and even more important, feasible and economically doable. I’d strongly encourage you to pursue that further.

Speaking of economies, here’s a word to save you a few keystrokes:

“productized” … how about “produced”? 😀

Seriously, I think jargon-speak turns off customers, and I try to keep my presentations in plain English when dealing with customers who are not in my specialty, finance/economics. It isn’t always easy, and certainly very hard in IT — Stephen Hawking doesn’t have to sell his ideas to high-school dropouts or Liberal Arts majors. No charge for the consult, especially given all I’ve learned from your posts and messages. 🙂

More usage guides, “there, their, they’re” with super-mnemonics:
http://www.amiright.com/parody/60s/thebyrds27.shtml

“Who/Whom”:
http://www.amiright.com/parody/60s/thebeatles2006.shtml

“Fewer/Less”:
http://www.amiright.com/parody/60s/thebeatles2007.shtml

Punctuation:
http://www.amiright.com/parody/60s/thebeatles1719.shtml

Clive Robinson July 7, 2011 3:55 AM

@ tommy,

“I try to keep my presentations in plain English when dealing with customers who are not in my specialty, finance/economics.”

Yup, a wise thing to do; however, there is also the issue of being overly formal in the presentation.
Legislators are really bad offenders, then standards bodies and those writing specifications. Sometimes you have to "legalize" it as well as crossing the T's and dotting the I's, and that almost always makes for a dull read, if not eye- and mind-straining to the point of migraine.

With regard to the finance/economics, do you drop in on the Financial Cryptography blog?

@ Nick P & RSH,

No system is perfect, not just by definition but by the laws of physics as well.

However with (nearly) deterministic systems there are two basic methods of finding attack vectors on an unknown (ie blackbox) system. These are the old "brute force" and "random" methods.

As normal we give them fancy names befitting our field of endeavour; however, the best results are usually had with a mixture of both, as in directed fuzzing. This can be further improved by having some knowledge of the system in advance of testing to better direct the attack.
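
A rough sketch of what that mixture looks like in practice (the target function is a hypothetical stand-in for the black box): seed with known-valid input, mutate at random, and bias the mutations toward "interesting" byte values:

    import random

    SEED_INPUTS = [b"GET / HTTP/1.0\r\n\r\n"]        # known-good input
    INTERESTING = [0x00, 0xFF, 0x7F, ord("%"), ord("\n")]

    def mutate(data):
        """Flip a few bytes, preferring boundary/special values."""
        buf = bytearray(data)
        for _ in range(random.randint(1, 4)):
            i = random.randrange(len(buf))
            buf[i] = (random.choice(INTERESTING) if random.random() < 0.5
                      else random.randrange(256))
        return bytes(buf)

    def target(data):
        """Stand-in for the system under test; raises on 'bad' input."""
        data.decode("ascii")

    for _ in range(10000):
        case = mutate(random.choice(SEED_INPUTS))
        try:
            target(case)
        except Exception:
            print("crash input:", case)   # record for triage
            break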

But there is a time issue to consider. At a low level a "stateless black box" is assumed to consist of a number of inputs and a number of outputs; the problem is determining the likely internal circuit by observing the outputs from actions at the inputs. To do this on a system without internal state means cycling through all the input states of each and every input whilst determining the state of each and every output.

With a logic system where all inputs and outputs have a binary state you can show that the number of possible logical circuits for each output is 2^(2^n); that is, 4 inputs gives 2^16 = 64K potential logic circuits per output (although half of them are inverses of each other and likewise half of those are the sequence in reverse; have a look at Walsh functions to see how to analyse for the minimal circuit).
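
To put numbers on that growth, the count is just the number of distinct truth tables (2^n rows, each 0 or 1):

    def num_functions(n_inputs):
        """Distinct n-input, 1-output boolean functions: 2 ** (2 ** n)."""
        return 2 ** (2 ** n_inputs)

    for n in range(1, 6):
        print(n, num_functions(n))
    # 1 -> 4; 2 -> 16; 3 -> 256; 4 -> 65536 (the 64K above);
    # 5 -> 4294967296, i.e. already ~4.3 billion candidate circuits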

Thus it can be seen that you can't really know what is in the black box, just an approximation based on deduction.

When you then add state into the black box, problems really arise. Firstly because you have to work out which circuits have state in them and which circuits contain the control functions of the "latch". Even with very few inputs, realistically there is not enough time within the expected life of the product to do a brute-force test.

When you then move to a black box not just with state but feedback around the state etc, the timescales start approaching the lifetimes of stars. But you will also be constrained long before that by the lack of state in the system you are using to analyse the black box.

Thus it can be seen it is not possible to black-box test something like an 8-bit CPU core in the lifetime of a product.

As engineers we resolve this issue by breaking the internal circuit into logical blocks that we can test, testing those, and assuming that if each small block has passed then the whole system has passed.

However there are a couple of problems. Firstly, I said "(nearly) deterministic system": even though we call them "logic circuits" they are not; they are analog circuits designed to behave like an approximation to a logic circuit. You can see this in the design of some oscillator circuits where a "logic inverter" is actually used as an amplifier, with the crystal or RC circuit used as the feedback element to define the operating frequency. You will also see in old circuit designs Hex Inverters in CMOS (40 series) logic packages used not just as analog amplifiers (with a gain of about ten) but also as other analog functions such as frequency translators (mixers). Motorola in one of their engineering notes on their MC40XX range showed how to do a number of analog functions. Even 74HSxx TTL packages could be used to make not just oscillators but FM modulators as well, which allowed a 74HS13, a 30MHz Xtal and an electret microphone to make a nice little FM band (third harmonic) bug.

These “unexpected analog” functions apply to all logic circuits and cause non determanistic behaviour (very) occasionaly. Which at the chip level gives rise to the question of if the “analog behaviour” can be exploited in chip etc. It is a discussion I”m having with @ RobertT, over on another thread, and the answer appears to be yes. That is you can design a logic circuit that passess all the “logic” testing but has hidden “analog” function that can then be exploited in some way.

Thus the bottom line is you cannot trust the chips in the system.

The second issue is that of Systems On a Chip (SOC) as seen in the likes of mobile phones. These are so integrated and made up of so many logic macros lying on top of each other that realistically it is no longer possible to know where some macros are used and have been absorbed into larger macros and thus into complex functions. That is, even the designers of SOCs don't know what is happening at gate level; they pick high-level functions out of libraries, most of which have never been looked at for anything other than "wanted functionality".

That is, we have no way of knowing if even the logic design is secure, let alone if there are extra analog side channels there.

The bottom line is the chips cannot be trusted.

And this gives rise to the question of how to design a system that can be verifiably secure even if it does use chips that have been "backdoored" either accidentally or by design.

And the second question of what we mean by “verifiably secure”…

jacob July 7, 2011 9:35 AM

@NickP, Clive.
It’s reasuring that I’m not an idiot, thanks,
I am project manager in the security industry (think more like constr rather then pen testing).
I have to bring in and talk to a lot of different people from the construction worker to suits and IT. It is not boring I can assure you.

My dream is to get my CISSP and GSE. Not career-wise, but because I enjoy it. It probably would help with opening up some horizons, but it's the knowledge, not even the cert, that is important to me.
Thanks

Pete Dumas July 7, 2011 9:53 AM

As far as the earlier-mentioned DHCP server that appears to be their preferred method of route hijacking, I can tell you based on my own analysis that they appear to be using zeroconf and mDNS to achieve their results.

Multicast and anycast are playing a big part in all of this, and traffic analysis should be focused on these protocols.

Capturing interprocess communications will reveal this. On an infected machine, the "undernet" network that they are running can be made visible on a Windows machine by viewing particular process strings of an infected service (meaning you can actually see their real-time communications).
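
A hedged sketch of that traffic analysis using scapy, a common packet-capture library (needs root; the simplistic multicast test and the ports watched are assumptions on my part):

    from scapy.all import sniff, IP, UDP

    def flag(pkt):
        """Print mDNS and DHCP traffic that could betray a rogue responder."""
        if IP in pkt and UDP in pkt:
            if pkt[UDP].dport == 5353 or pkt[IP].dst.startswith("224."):
                print("multicast/mDNS:", pkt[IP].src, "->", pkt[IP].dst)
            elif pkt[UDP].dport in (67, 68):
                print("DHCP traffic:", pkt[IP].src, "->", pkt[IP].dst)

    # BPF filter keeps the capture cheap; watch mDNS plus DHCP offers.
    sniff(filter="udp port 5353 or udp port 67 or udp port 68",
          prn=flag, store=False)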

The rootkit is without a doubt placed in a LOCKed state and is only removed via a new hard drive… low-level formatting (HDDErase) does not work. My experience with assembly is very limited, so it was cheaper in time to simply get a new hard drive and build from scratch. Hope this might help any "experts" that are reading this.

Also, I noticed that high-priority targets (i.e. engineers & sysops) have a kill switch which renders the drive inoperable after activation. This kill switch was seen on my machine by very quickly pressing the num lock, caps lock, and scroll lock keys immediately at system startup during the POST sequence.

I am also really worried about the ease with which the BIOS is able to be flashed. Making ROM just that again will go a long way in protecting against these sophisticated attacks. The average home user doesn't need this functionality, and even experienced technicians rarely keep their BIOS continuously updated. If you think that state-sponsored black hats aren't proficient enough to alter BIOS, you're just kidding yourself.

Clive Robinson July 7, 2011 11:09 AM

@ Jacob,

“… but the knowledge not even the cert is important to me.”

A word of caution from a jaded old xxxx (I'll let Nick P fill in the appropriate word 😉

The more you learn, the more you will find you have to learn…

And learn the fundamentals, not the "latest tools and methods"; fundamentals will always work for you, while the other toys tend to come and go like mayflies.

Oh, and although the tech world appears to change overnight, at the end of the day the people that pay the bills (businesses) will always want to talk to each other, so err on the side of Comms for longevity in your career.

jacob July 7, 2011 11:44 AM

Oh absolutely. More than 1/2 my time is spent on communication. Learning is the challenge and the fun part.

You and I are old enough to remember 8088 processors.
I just recently upgraded an old system that a customer was still using and didn't want to upgrade. It was isolated, so alright, but I didn't like it. It recently crashed hard and we finally were able to upgrade it. It was a Windows 3 system. How many commands do you remember? Look at who I'm talking to; you probably do, and probably have a system tucked away somewhere. 😉

On another note: comms of another kind are big now. People are talking about SCADA and "cloud" computing, though I think "data servers" is better than a fad moniker. The risks and losses of data are the growing pains of putting these things on exterior access.

SCADA systems are practically bulletproof. I have seen PLCs that were still churning 20 years later. The problem came when people began to put them on networks or allow access from the outside world. Some apps required one heck of a throw, and thumb-size voltages/amps wouldn't work. Now software can do more, but still. The software was never expected to be exposed to outside influence.

The law of unintended consequences…

Nick P July 7, 2011 1:42 PM

@ tommy

“Those particular ideas may have not survived the cut, but the idea of pooling to share expense surely did? Anyway, I think your idea is superb — and even more important, feasible and economically doable. I’d strongly encourage you to pursue that further. ”

I’ve noticed that when enough people start independently coming up with the same solution to a hard problem it’s usually worth a deeper look. I appreciate your feedback. Yes, this was similar to one of the business models I came up with for high assurance. I’ll elaborate further in email.

” ‘productized’… how bout ‘produced’?”
“Seriously, I think jargon-speak turns off customers, and I try to keep my presentations in plain English when dealing with customers who are not in my specialty”

Yes, I figured that out the hard way a while back. You think IT is hard to communicate to lay people? Try high assurance methodologies, security engineering or covert channels. Hard to break down into lay terms, so esoteric. I’ve found visually illustrating things with simple diagrams, especially animated, helps.

“No charge for the consult, especially given all I’ve learned from your posts and messages. :)”

Likewise, although I think my balance sheet wouldn't be in the red if we were both charging for posts. I'd pull a Kurt Russell and charge by the word, but with others actually getting value in return. Clive would probably manage to get a disproportionate share of the wealth, although I get a cut of it for translating his posts to English. (Which is even funnier considering he's in the country that invented modern English…)

Nick P July 7, 2011 2:06 PM

@ Clive Robinson

“And this gives rise to the question of how to design a system that can be verifiably secure even if it does use chips that have been “backdoored” either accidentally or by design. And the second question of what we mean by “verifiably secure”…”

Indeed. In our previous discussions, I told you I stop at the VHDL and netlist level. Maybe the semiconductor design houses can secure the other layers, but most people researching hardware security don’t have the resources to prove it that far down. It’s another reason why I focus on software secure against remote attackers and hardware secure against people who aren’t running electrical engineering labs. It’s about the best I can do while still keeping costs down.

Even with the macro issue, I’ve found that high assurance software on high quality hardware seems to work out fine in practice. The PowerPC chips that Integrity-178B was certified on aren’t high assurance, but the final product was considered to be, so long as no hardware failure occurred. It worked out that way in practice. Same with SNS & GEMSOS, which used x86 with customized firmware. So, it seems we can get away with using non-perfect hardware so long as it doesn’t introduce any problems the software can see & it’s regularly tested to ensure it operates within its specs.

This is not to say high assurance hardware doesn’t exist. It does. AAMP7G has formally verified isolation and microcode. VAMP is formally verified to an extent with MIPS-like instructions (DLX, specifically). But they stop at VHDL, Netlist, etc. I’m not a hardware guy, but I understand that those are compiled further into more primitive stuff that goes on an ASIC, correct? So, the best approaches currently can only go so far down. But, so long as the stuff further down works ok, this is good enough for most applications. Now, designing hardware free of side channels is another issue entirely & needs full bottom-to-top analysis.

jacob July 7, 2011 6:44 PM

@NickP
Just to see if I understand: you are saying that checks at the VHDL and netlist level, etc., are good enough? Back in the day you could always check the gates and be reasonably sure you had checked thoroughly. Now, with the complexity of the chips, you have to take it all the way to fab to have any hope of “absolutely” checking them out. Just too many “transistors”.

I thought about this back when Lenovo bought IBM’s PC business and people were flipping out over procurement, etc. What if the Chinese did something? My thought was that it could be a two-edged sword for them. Then they started talking about an in-house OS. I tend to think the two-edged-sword threat would keep players honest and deter a blatant attempt like cooking the chips (pun intended). It would be possible to turn that right around and compromise the attackers.

Do the NSA and CIA have backdoors, zero-day exploits, or side channels? Yep, I’d bet on it. It’s a game for the big boys. I would quote Bruce Willis in Armageddon: “Don’t you got people just thinking up s***?” I hope so. But then again, maybe the answer is no, and some digital asteroid is just waiting, and a smoldering silicon crater is in our future.

The arrest headlines are just low-hanging fruit. There are gifted people out there who will never show up in most analyses. They are in and out without anyone knowing or being able to track them. I hope that governments thought carefully before unleashing sux* on infrastructure. That was a rather interesting work of art. It could have gone undetected. And probably more is already worked up for future use.

I pay attention to news, papers, etc., especially for timing. I don’t think it was a coincidence that one week the discussion was about Chinese-sponsored hacking, and two weeks later it was about how vulnerable Chinese infrastructure was to hacking. Warning shot?

People have a tendency to read stories, white papers, etc., and not look at the bigger picture to get a sense of what agendas may be in play.

For example, I am paying very close attention to TSA stories and to Chinese economy and military stories. The sheer volume of discussion about the Chinese economy should hint that something’s afoot and that sides are jockeying for position. Same with using GPUs for computing power.

In the context of security: cloud computing and how to secure it is either hot, or beaten to a pulp by attempts to monetize it, depending on how you look at it. 😉 Sorry so long. I tried to keep it on security, with limited success.

tommy July 7, 2011 7:11 PM

@ Clive Robinson:

“I try to keep my presentations in plain English when dealing with customers who are not in my specialty, finance/economics.”

“Yup, a wise thing to do; however, there is also the issue of being overly formal in the presentation.
Legislators are really bad offenders, then standards bodies and those writing specifications. Sometimes you have to “legalize” it as well as crossing the T’s and dotting the I’s, and that almost always makes for a dull read, if not eye- and mind-straining to the point of migraine.”

Sorry I wasn’t more clear. I was referring to the verbal presentation, whereas the actual documents, contracts, etc. will of course dot the i’s, cross the t’s, and are full of legal boilerplate. But I think I do a fair job of explaining all of that in a way that a non-lawyer, non-financier can understand. And yes, I’ve gotten many headaches proofreading my own work, my partner’s, or that prepared by third parties. ;D

“With regard the Finance / Economics do you drop in on the Financial Cryptography blog?”

Wasn’t aware of it, but the security of data in this field is atrocious. Others will send a potential customer’s name, DOB, Social Security number, etc. via regular e-mail. I prefer that they fax it, since the concept of encryption is foreign to them. (Yes, finance people are as tunnel-visioned as everyone else.) If I have to discuss it with my trusted associate other than in person, it’s either a land-line (not cell or smartphone) or PGP mail.
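For the PGP side, a minimal sketch of what I mean, driving stock GnuPG from Python (the recipient and filename are hypothetical; it assumes gpg is installed and the associate’s public key is already imported):

    # Encrypt a client file to an associate's public key before it
    # ever touches e-mail. Recipient and filename are hypothetical;
    # assumes GnuPG is installed and the key has been imported.
    import subprocess

    subprocess.run(
        ["gpg", "--encrypt", "--recipient", "associate@example.com",
         "client-application.pdf"],
        check=True,
    )
    # Writes client-application.pdf.gpg; send that file, not the original.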

@ Nick P.:

“One picture is worth a thousand words” — probably even more so in IT and, as you said, esp. in HA, though not so much in finance, where one spreadsheet is worth a thousand words. Someone sent me a proposal with tons of 3-D bar graph projections, etc. It was junk. Usually, the more distracting graphics in such things, the less substance. Only the hard numbers count.

tommy July 7, 2011 7:16 PM

@ jacob:

The first time I heard of the concept of cloud computing, I thought, “That sux.” Hard enough to keep my own apps and data secure. Darned if I’ll hand it over to someone I don’t know, and about whose procedures and security I cannot possibly know, despite their reassurances. When it comes to storing or processing data, I’m like Dorothy in “The Wizard Of Oz”: “There’s no place like home”.

IMHO. YMMV.

jacob July 7, 2011 8:03 PM

@tommy It’s worse than that. Given current events, they want to charge you money to store it insecurely. And those assurances you speak of? On what basis can we trust them? Many major players have been hit, including RSA (SecurID), whose entire business model is security. Citibank? Lockheed? It just goes on and on.

Cloud computing on Google. Doing it for free? Why? Trust them with information? Ugh. I think I threw up in my mouth a little. LOL

It’s a hacker’s holiday, with hits on Sony and many others. I thought the hits on Sony were a little funny at first; I have hated their policies and actions for the last 10 years with a purple passion. But it’s gotten to be a movie punchline: standing over the lifeless body, kicking ’em, and then saying “he moved”. \m/

Also, contrary to the fad tag, it’s remote access data servers…not cloud computing.

Nick P July 7, 2011 10:17 PM

@ tommy

Yeah, you should definitely check out the Financial Cryptography blog. You’ll benefit from it more than most of us would.

Financial Cryptography blog
https://financialcryptography.com/
(Note: you will probably get an SSL issue. It’s not an attack… unless you matter to them more than most visitors do. 😉)
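If anyone wants to confirm it’s only an untrusted root (the site uses a CAcert certificate) rather than a man-in-the-middle, here is a quick sketch in Python: validate the server against a CA file you fetched and verified yourself. The CA filename is hypothetical:

    # Validate the TLS chain against a CA bundle of your choosing
    # instead of the browser's built-in trust store. The CAcert root
    # file path is hypothetical; obtain and verify it out of band.
    import socket
    import ssl

    ctx = ssl.create_default_context(cafile="cacert-root.pem")
    host = "financialcryptography.com"

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # A successful handshake means the chain is valid under that
            # root; an SSLError here would be worth a closer look.
            print(tls.getpeercert()["subject"])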

tommy July 7, 2011 11:35 PM

@ jacob:

Agree.

@ Nick P.:

CAcert was indeed new to me. I see that they’re trying to meet the standards to have their authority included in Mozilla browsers. I didn’t need the SSL for just browsing the articles, though.

I could have told anyone what was wrong with bitcoins: it’s the same as what’s wrong with any fiat currency (it’s artificial). Especially since “The currency’s architecture is designed to inflate its value over time.” You don’t design currencies to do this or that. The market does that for us all, the market being the sum of us all.

Someone wants a new global currency? Go back to the old global currency, the one that was used for the past 5000 years, and with good reason: gold. Most econ troubles start when currencies are disconnected from actual value (gold or silver).

You’ve already seen this, but for anyone else interested, here is my thesis-disguised-as-song-parody, explaining the economic history of the US; why the current mess, both within the US and globally; why gold’s abandonment caused it; and why only a return to a gold standard can save it. (May Don McLean forgive me, but it’s “Fair Use”):

“American Pie” by Don McLean =
“American Pie Shrinks As More Slices Handed Out” by Fiddlegirl and Tommy Turtle

http://www.amiright.com/parody/70s/donmclean152.shtml

Thanks for the link, Clive and Nick. Will keep an eye on that site, but much is stuff I already know, and apparently, they’re just learning – or don’t know yet.

One sage did compare it to my fave example of the “last-fool” philosophy, namely, the bubble in Dutch tulip bulbs. But no one learns, regardless of whether the tulip bulbs become railroad stocks, Microwave Communications Inc. (MCI), or dot-com stocks.

JJ July 8, 2011 9:55 AM

a question…

If the PCs, smartphones, or their operating systems were actually secure in the sense that has been discussed here…how much would that affect the ability of The Government to access them?

(Besides the access of companies such as Apple-Google?)

Wouldn’t it be a potential “problem” to make them really secure?

Pete Dumas July 8, 2011 12:42 PM

Mr. Schneier would probably agree that “security through obscurity” is a model of trust proven not to work (I am making an assumption here, which truly makes me a dum-ass… go ahead and laugh, you know you want to). But the fact is that this antiquated model somehow continues to be treated as a viable approach to “risk management”.

On one hand, you have law enforcement, which could quite possibly have the purest and best intentions at heart in safeguarding citizens from people who seek to harm the most defenseless among us (the children). On the other hand, through military-grade backdoors implemented at the firmware level (ACPI, I’m looking at you), you now have a separate segment of society that can covertly access any internet-connected machine at will, which in essence has the potential to facilitate the same crime (although abuse of power is such a rarity these days, wouldn’t you agree, Strauss-Kahn?). This just brings us back to the original issue: “security through obscurity” is inherently flawed. We already know this.

As the engineer of an Autonomous System, I can no longer discern the good guys from the bad guys. They are both guilty of the same crime, which is the unauthorized access of a private individual’s personal computer; that gives them access to family photos, work-related projects, likes and dislikes unique to each and every one of us, and financial information. Gaining this access is trivially easy (which provides your low-hanging fruit). Do you really want strangers viewing your family’s photos?

I strongly urge individuals to change their perception of the Internet and use extreme caution and discretion before writing any personal or private documents or data on a machine connected to the Internet. The fact that you are surrounded by a relatively comfortable setting (i.e., your home) has no bearing on your privacy or safety. This applies even more strongly to children and teenagers. Think of the contents of any Internet-connected device as equivalent to putting everything you currently have stored on that device up on a publicly accessible Facebook account.

I really apologize for the “fear mongering”; I hate people like that too. I’m not saying this is the end of the world as we know it. I just think it would be a good idea to make a habit of keeping personal and family-oriented material on a separate machine, intentionally kept as a replacement for the old “home filing cabinet” or “photo album”, and to ensure that that machine is never connected, wirelessly or wired, to “the internet”. This advice applies not only to individuals but is also helpful to SMBs and enterprises alike. You can’t steal something that is physically, not just “logically” (hint of sarcasm), inaccessible.

I am giving my full name because I am not, nor have I ever been, Anonymous. While I am able to see the logic behind the anti-sec movement, I have always been what they call a “selectiva”. In other words, it’s my job to attempt to deflect as much of this garbage as I legally can to keep it from reaching my end-users, so these guys are a thorn in my side too. My ongoing war with them is comical because of the giant head start they had on me, as well as the overwhelming odds and technical superiority they display… talk about trials by fire.

Pete Dumas

Jacob July 8, 2011 1:14 PM

@JJ It is a two-edged sword.
1. Even if devices were secured, the big agencies could still get in via patches, updates, and search warrants for records. Even keyloggers. Years ago I asked an FBI agent how often they have to crack encryption. He said they almost never have to bother.
2. I am more concerned/irritated by the data mining of habits or info with the goal of selling information. I don’t want suggestions about what I might be interested in.
3. Another concern is the 4th Amendment. One justice summed it up as the right “to be left alone”. I rather like that sentiment.
4. For most people, the basic rules apply.

@pete You need to loosen that tinfoil hat. I do, and have, put isolated systems in place for just the reasons you state. BUT anytime you go online you leave fingerprints. Anytime you move those pictures from one computer to another or email them, you are doing the same thing. Precautions absolutely need to be taken. But your Aunt Sue might scan an old picture of you and put it up on Facebook. Guess what? It’s now in the wild. Not very many people are that interested in your pictures, or mine for that matter. Fewer still are willing or able to go off the grid.

Hackers want your credit card info to buy stuff, and companies want to sell you stuff. That and business data is probably 90% of anything applicable to us. Unless someone is involved in a criminal enterprise, of course.

I’m teasing you a little, but take a deep breath and enjoy a cup of tea. 😉

pete July 8, 2011 2:19 PM

@jacob

🙂

I had no illusions that it was anything more than intentional “fear mongering”. It’s just that perception can be very easily distorted in the digital world. I get nervous when we begin basing our character judgment of a given individual solely on a consumer data model or other form of automated analytics… whether that be BI or criminal profiling. Thank you for taking the time to weigh in on my comment!

Andy July 8, 2011 8:25 PM

@JJ, “If the PCs, smartphones, or their operating systems were actually secure in the sense that has been discussed here…how much would that affect the ability of The Government to access them?

(Besides the access of companies such as Apple-Google?)

Wouldn’t it be a potential “problem” to make them really secure?”

Do they even need access to the device? A computer that isn’t plugged into the internet has minimal ways to affect the world.
If it is plugged into the internet, anything that goes over the wire unencrypted can be read in plain text by a promiscuous-mode sniffer (no routing or anything needed) and shipped off to a back-end number cruncher.
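A rough sketch of that in Python (Linux only, needs root; just an illustration, not a tool):

    # Promiscuous-style capture with a raw packet socket (Linux, root).
    # ETH_P_ALL (0x0003) delivers every frame the interface sees;
    # unencrypted payloads show up as readable bytes.
    import socket

    ETH_P_ALL = 0x0003
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

    for _ in range(10):                # grab a handful of frames
        frame, _addr = s.recvfrom(65535)
        payload = frame[14:]           # skip the 14-byte Ethernet header
        print(payload.decode("ascii", errors="replace")[:80])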
In theory I wouldn’t mind what’s on my computer, as long as it’s not sent over the internet.

How good are the programs governments want to put onto your computer? If you’re paranoid, one dropped screen frame means the black helicopters are after you, and from the start of the investigation they are wearing a bell.

Lahjah July 11, 2011 8:06 AM

For some time I used SafeBoot (acquired by McAfee as Endpoint Encryption, and now Intel-McAfee), a cryptography tool capable of encrypting the whole disk. This software replaces the boot loader with one of its own. When the machine boots, a decryption module is loaded first so the disk can be read, and then the ordinary Windows boot continues. Question: is using it a proactive defense against this kind of virus?
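From what I have read, the boot loader itself has to stay unencrypted so the machine can start, so an MBR-infecting rootkit could still overwrite it. A minimal check I have been experimenting with (a sketch only; Windows, administrator rights required, standard raw-device path) just hashes the first sector from a known-clean state and compares it later:

    # Snapshot and re-check the MBR (first 512 bytes of the disk).
    # Windows raw-device path; must run with administrator rights.
    # This only detects that the boot sector changed, not what changed it.
    import hashlib

    MBR_DEVICE = r"\\.\PhysicalDrive0"

    def mbr_hash():
        with open(MBR_DEVICE, "rb") as disk:
            return hashlib.sha256(disk.read(512)).hexdigest()

    # Run once from a known-clean state and keep the hash offline;
    # a different value later means the boot sector was rewritten
    # (by an OS update, a new boot manager... or something worse).
    print(mbr_hash())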
