Comments

anonymous November 21, 2014 5:08 PM

What if singer Taylor Swift was also an information security professional?

https://twitter.com/swiftonsecurity

InfoSec Taylor Swift
@SwiftOnSecurity

One time I tried to explain Kerberos to someone. Then we both didn’t understand it.
10:00 AM – 21 Nov 2014

Sometimes I worry about cyberwar with China. Then I remember they’ll still have 40% XP marketshare in 2030.
7:06 PM – 20 Nov 2014

It’s 2014 and we have antivirus on our Linux phones and Windows tablets you can’t run Windows programs on. Crazy world.
3:35 PM – 20 Nov 2014

InfoSec Taylor Swift retweeted
In the future, everyone will have their 15 minutes of privacy.
12:03 PM – 20 Nov 2014

When my heart gets rained on, I make sure it becomes a public cloud.
7:39 AM – 17 Nov 2014

I have 4 records and millions of fans. Symantec Antivirus has millions of records and 0 fans.
8:09 PM – 7 Oct 2014

Android: The malware compatibility layer for Linux.
8:07 AM – 13 Aug 2014

BoppingAround November 21, 2014 5:38 PM

In the future, everyone will have their 15 minutes of privacy.

With the exception that they really won’t.

Jacob November 21, 2014 6:36 PM

A young Israeli company, WaveGuard Technologies (http://waveguardtechnologies.com), has developed a surveillance system that tracks mobile phone users regardless of whether they switch SIM cards or phones. They do that through sophisticated behavioural analysis of the user. The system monitors and records location data to within 6 meters, and conversational metadata, for all the mobile phones in the country, and keeps the collected data for 1 year. When someone becomes a target, the authorities can go back in time to analyse the related data.

They sold the system to a central Asian country known for its human rights abuses. When asked about the morality of such a sale, the company replied “we work within the law”…

Frederick J. Ide November 21, 2014 7:47 PM

Bruce, is there a reason that you have not covered AxCrypt on your site? Many of us use it and would like an opinion. F. J. Ide

jpr602 November 21, 2014 8:41 PM

Overheard at the airport today, a guy into his phone: “It was supposed to be finished last month, but we forgot the encryption.”

Alex November 22, 2014 1:12 AM

Bruce, why don’t you attach to this blog a mini discussion board (a mini-forum) dedicated to cryptography? The whole thing would become a reference, a central point for internet security.

Clive Robinson November 22, 2014 2:33 AM

@ Wael,

For many it’s not “butt-dialing” because our clothes don’t have rear pockets… I guess it brings new life to the old joke about that venerable company that made voice recorders for dictation back in the 50’s, when things were a little less PC in the employment market… Like Hoover, Dictaphone became a noun, then a verb; such is the price of fame/monopoly.

@ Jacob,

Yes, this new tracking technique has its advantages and, for the unwary, its pitfalls.

However, as I’ve indicated in the past, the claims are based on the false axiom that the phone is the user, when clearly it is not.

Which means that a wary person can find ways around such systems…

It is now not uncommon for even traffic lights to have their own GSM phone in them, and GSM radio units are getting cheap and can easily be connected to low-cost single-board computers such as the Raspberry Pi and other educational systems via “shields”.

Less well known is that pagers can also be connected to such systems, and unlike phones they have the benefit of using receive-only technology…

As I’ve noted before, linking them together is not that difficult, nor is connecting them to a VoIP system to do voicemail etc.

With a little forethought and technology you can distance yourself from your real phone, which can remain switched off most of the time, or not have one at all if you can find a decent WiFi hotspot.

Such systems are now being openly developed, with plans/instructions available from the likes of Hackaday etc. They have a legitimate use for those who do “coffee shop work” or need “call screening”, “meeting diversion”, or just plain old “peace to think” time, and who don’t have the staff, or don’t want to employ the staff, to do it the old-fashioned way.
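For the curious, a rough sketch of the sort of glue involved (assuming a GSM shield wired to a Raspberry Pi serial port at /dev/ttyS0 speaking standard Hayes/3GPP AT commands, and the pyserial package; the port name and baud rate are placeholders):

    import serial
    import time

    def send_at(port, cmd, wait=1.0):
        # Send one AT command and return the modem's raw reply.
        port.write((cmd + "\r").encode())
        time.sleep(wait)
        return port.read(port.in_waiting or 1).decode(errors="replace")

    def poll_unread_sms(port):
        # Fetch unread SMS so a separate relay box can forward them on
        # (e.g. to a VoIP or pager gateway), keeping the real phone off.
        send_at(port, "AT+CMGF=1")               # text mode
        return send_at(port, 'AT+CMGL="REC UNREAD"')

    if __name__ == "__main__":
        with serial.Serial("/dev/ttyS0", 115200, timeout=1) as modem:
            print(send_at(modem, "AT"))          # sanity check: expect "OK"
            print(poll_unread_sms(modem))

The point is only that the radio end is a commodity part driven by a few text commands; the hard part, as below, is the OpSec.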

An observation made long ago and still true today: “No matter how smart technology gets, it’s still dumber than a smart human.” Thus those who are smart can appear to be “regular Joes” when in fact they are anything but. The hard part of creating such “legends” is all the nitty-gritty detail of OpSec, such as credit/debit card usage, avoiding CCTV, licence plate readers, home power consumption etc. Oh, and most definitely avoiding the latest gimmicks and gadgets that might just betray you via the IoT or their own GSM phones.

Clive Robinson November 22, 2014 8:15 AM

@ Nick P,

How’s your blood pressure?

If it’s high don’t read the following,

http://www.wired.com/2014/11/protection-from-hackers/

Lest it cause the red stuff to squirt out your ears 😉

I find it mildly insulting that the journo has failed to do their homework; solutions to the “blue pill” problem were being discussed and thought about considerably before the “eight years ago”. One solution, which is better than the Qubes solution, is the “too small to hide” principle that is the reason for the unusual use of an MMU in my prison design…

But apart from the novel use of the MMU, the idea of reducing resources to the point where an attack is not possible goes back, as far as I can see, to the 70’s or earlier. It’s also a fundamental consideration in other hardware designs intended to reduce TEMPEST / EmSec issues.

I could go on at length about what was around before the “blue pill” idea she claims, designed either to prevent the issue or to detect it, not least that it was one of the factors behind what we more broadly call “virtual memory”. Likewise I could point out issues with the Qubes “sandboxes” being able to communicate due to side channels and other issues.

On another issue, there are claims going around that Linux “as we know it” is going to cease to exist as init gets replaced with a very flaky alternative, one which not only lacks stability but has failed to learn the lessons of history… Some argue the result will be that Linux becomes not just less stable but also a lot more insecure. The advice being passed around is to stick with RHEL…

Clive Robinson November 22, 2014 11:09 AM

@ Jacob,

Speaking of tracking mobile phones, it appears a judge down in Baltimore is getting tough on the local police and their “can’t tell you because of a Non-Disclosure Agreement” with either the Feds, equipment manufacturers, or some other as-yet-unstated agency,

http://www.baltimoresun.com/news/maryland/baltimore-city/bs-md-ci-stingray-officer-contempt-20141117-story.html

In a discovery dispute, the defence lawyer cross-examined an officer about how they knew where a phone was. When the officer clammed up, the judge offered him bed and board down below for contempt. The prosecution had a brief conference and withdrew the evidence, including that found at the dwelling.

Apparently on another occasion the judge threw out other phone-tracking evidence when the police witness mentioned the DHS…

I just wish there were a few more judges with the gumption to throw the nonsense out.

Unfortunately, with Republicans controlling both houses, I suspect legislative changes will be waved through such that the right to face your accuser is emasculated, or we get some “secret court” nonsense to which the defendant and their legal team will either not be privy or, worse, will have to hire a special “government-approved lawyer” who makes a secret arrangement with the judge. Both of these are rights-stripping actions which are difficult if not impossible to defend against.

CallMeLateForSupper November 22, 2014 11:32 AM

The Slashdot post linked by @IsThereAnyValueInThis (re: “Tool To Combat Government Spyware”) doesn’t let on – and neither did the Guardian article I read first – that the tool is for Windows only. No Linux user need apply. 🙁

bitstrong November 22, 2014 1:15 PM

If you’re not that into the political op-ed stuff here, try the Bristol crypto blog for more crypto tech.

Nick P November 22, 2014 2:00 PM

Modern virtual desktops are far from secure: A historical perspective & case study
(inspired by this news article)

There’s little new or Matrixy about this stuff. In a nutshell, a privileged mode or component was added to the processor without security in mind, it was attacked, and the system was compromised. This is the M.O. of INFOSEC going back to original MULTICS security evaluation [1]. It showed the many layers from hardware to apps that could be attacked, subverted, or critically fail on their own. This eventually led IBM in 1976 [2] to apply such lessons to their VM/370 hypervisor product. The resulting product split the system into virtual machines running on a hypervisor, KVM/370, with a security kernel embedded. Even before that, you could run the VM hypervisor on top of the VM hypervisor. (“Matrix shit” in… the early 1970’s??? Wait, where’s the novelty?)

This didn’t stop. Karger, one of the MULTICS evaluators, designed in the early 1980’s a secure VMM [3] to virtualize “VAXen” for users of different clearances. Their A1 security engineering techniques included layered design, minimally complex components, mathematical proofs of correctness, pentesting, processor microcode modifications, and other established techniques for increasing whole system security. That the system was usable is evident because they developed it on itself & got positive mention from beta customers. It was canceled for business reasons such as time-to-market issues. Technologically, the KVM/370 and VAX VMM set the baseline for how one should begin securing a virtualized system. Most modern work doesn’t even meet that baseline, although newly discovered attacks and issues demand an even stronger baseline.

Later on, the development of GUI’s, desktops, and many UNIX’s led to Compartmented Mode Workstations [4]. Many of them took a tiny security kernel at the lowest layer, some trusted OS-style software at a higher layer, made modifications to support isolating different security levels, made it more user friendly, leveraged hardware rings/segments for extra protection, and then deployed that in the market. Trusted Xenix is an example which also added immunity to some issues (eg Setuid vulnerabilities), covert channel analysis, & more NSA pentesting. The best designs, although not GUI’s, broke the OS into components running as separate domains with kernel enforcement. Examples were XTS-400 and KeyKOS. Modern CMW’s like Argus Pitbull unfortunately leverage monolithic kernels. At least Argus warns of that risk.

So, fast forward a decade or two to Invisible Things Lab. The team demonstrates an actual attack that leverages SMM risks described in mid-90’s papers on x86 security. They also show privileged layers in a system with no security applied can be attacked. This kind of obvious stuff somehow makes headlines in IT media despite being ancient, sometimes totally solved, problems in published INFOSEC research. They intended to build something better. At the time, state of the art in academia was a mathematically verified isolation processor (AAMP7G), several secure microkernels (eg EROS), several trusted GUI’s (eg Nitpicker), and several implementations of user-mode Linux on microkernels with one having device driver wrappers for compatibility (eg OK Linux). INTEGRITY Desktop, commercially deployed in (2002?), had already put virtualized Windows, Linux, and native-on-kernel apps all in the same system running on a microkernel with isolated networking, robust transfer between VM’s, processor security extension support, etc.

QubesOS project begins. They start with the Xen virtualization platform. It has a large TCB in Dom0 which depends on Linux code, which has a bad security track record. They then added a subset of CMW features (eg labels & colors) with a virtualization approach similar to the INTEGRITY Desktop and Dresden work. They then tried to make their VM launching very fast like Dresden had already done (sub 1s on cheap hardware). They wisely put the networking and firewall stacks in a separate domain like microkernel & MILS kernels do. They’ve probably added plenty more since I last looked at it. The resulting system is quite easy to use, supports legacy hardware, will limit vanilla malware that isn’t targeted at them, might stop some malware targeted at them, and has other benefits of virtualization.

So, what of the security? The Five Eyes, Israel, Russia, and China have all shown expertise at hitting every layer of a system. I describe a bunch in my own framework [5] I used to guide my security engineering and started sharing publicly as a response to Snowden leaks. Sophisticated attackers hit every one of these layers. Hence, they must be developed or protected with extremely rigorous development processes: Orange Book B3/A1, Common Criteria EAL6/7, or something similar. Empirical evidence going back decades shows anything less produces software full of 0-days waiting to be discovered along with insidious covert channels. Systems designed with the high assurance processes at least survived strong pentesting at the time and a few like Boeing SNS Server have gone unbroken (far as anyone can tell) for decades.

Let’s look at Qubes in terms of what real security takes. Qubes runs on blackbox microcode, firmware, and so on partly developed in countries with either active espionage or that can compel backdoors in secret. Subversion should be easier than in some architectures not tied to such countries. The Linux-derived Dom0 is a risk factor given that both black hats and nation states regularly find Linux 0-days. Neither the code nor the development tools they use are developed with high assurance techniques: mostly low to medium. There’s also been little to no independent evaluation by qualified, whole-system pentesters. (Their team is talented at review, though.) The covert channels I predicted, due to no mitigation by virtually any VMM vendor, finally turned up in academic analysis of the underlying Xen platform. If that sounds minor, let me rephrase it: the IT security field just recently noticed a problem in VMM’s that was in official certification criteria and most landmark INFOSEC papers going back 30 years. This happens depressingly often in both commercial and FOSS INFOSEC development…
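To make the covert channel point concrete, here is a toy illustration of the principle only: a “sender” modulates contention on a shared resource and a “receiver” recovers bits by timing its own fixed workload. This runs as two threads on one machine, not as a cross-VM attack, and the slot length and threshold are arbitrary values that would need calibration in any real setting:

    import threading, time

    SLOT = 0.2                         # seconds per transmitted bit
    MESSAGE = [1, 0, 1, 1, 0, 0, 1, 0]

    def probe():
        # Fixed workload; its wall-clock time reflects contention for the CPU.
        start = time.perf_counter()
        x = 0
        for i in range(200_000):
            x ^= i
        return time.perf_counter() - start

    def sender():
        for bit in MESSAGE:
            slot_end = time.perf_counter() + SLOT
            if bit:                     # "1": hog the CPU for the whole slot
                while time.perf_counter() < slot_end:
                    pass
            else:                       # "0": stay idle for the slot
                time.sleep(SLOT)

    def receiver(n_bits, baseline):
        bits = []
        for _ in range(n_bits):
            slot_end = time.perf_counter() + SLOT
            bits.append(1 if probe() > 1.5 * baseline else 0)
            time.sleep(max(0.0, slot_end - time.perf_counter()))
        return bits

    baseline = min(probe() for _ in range(20))   # calibrate on a quiet machine
    tx = threading.Thread(target=sender)
    tx.start()
    print("received:", receiver(len(MESSAGE), baseline))
    tx.join()

A real cross-VM channel contends on shared caches and memory buses rather than one CPU, but the modulate-and-time structure is the same, which is why it has to be analysed and mitigated in the VMM design rather than patched afterwards.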

All in all, another failure to learn from the past. The project doesn’t apply the critical lessons learned from security design/evaluation of previous work: MULTICS, IBM’s KVM/370, VAX VMM, CMW’s, microkernels, MILS separation kernels, etc. The TCB is overly complex, not rigorously developed, lacks POLA, leverages black box firmware of unknown quality, and runs on a possibly subverted processor. The system should be considered insecure against medium to high strength attackers until the problems are removed. It’s doubtful that the problems can be removed if they continue to leverage Xen, vanilla development libraries, and support as much COTS hardware as possible. A shortcut that might knock out some issues is porting it to something like Cambridge’s CHERI processor, which already runs FreeBSD with Capsicum. Many code injection issues might be knocked out at Xen & above immediately, while stuff below can be reduced or eliminated during an ASIC conversion.

I might still use it for desktops that don’t contain assets of significant value. The QubesOS team did great work on usability, performance, and compatibility with a reasonable amount of COTS hardware. VM’s also make for easy recovery if problems occur. I might also use it for rapid development & testing of special purpose, low TCB VM’s that can be ported to higher security architectures later. It’s just not suitable for protecting my data or system from the kinds of attackers that matter. I can’t recall off the top of my head any FOSS solution that is without a lot of custom work. That won’t change until developers of security-critical systems learn the lessons of the past and actually apply them. I also encourage FOSS types to improve on them like some academics and commercial firms are already doing.

Note: This is not to single out QubesOS. Security-wise, they’re doing much better than many mainstream virtualization packages. There are others doing better than them. It all varies project by project, year by year. The critique is of the common denominator: none of them can be secure due to the lack of a highly assured secure development process with the right requirements in place. Additionally, better techniques have been around and aren’t used. Yet, in the linked article, the writer was mystified by a low assurance rehash of old technology and thought that would stop blackhats (including NSA). That’s a very bad, common mistake I aim to prevent with writing like this.

[1] http://www.acsac.org/2002/papers/classic-multics-orig.pdf

[2] http://www.dtic.mil/get-tr-doc/pdf?AD=ADA109316

[3] http://www.cs.dartmouth.edu/~ccpalmer/classes/cs55/Content/resources/vax_vmm.pdf

[4] http://web.ornl.gov/~jar/doecmw.pdf

[5] https://www.schneier.com/blog/archives/2013/01/essay_on_fbi-ma.html#c1102869

Nick P November 22, 2014 2:26 PM

@ Clive Robinson

Not quite blood boiling, but irritating to see the same BS repeat. Yeah, the stuff is very old and had more to do with improving timesharing than virtual memory itself. A lot of good science and improvement in virtual memory resulted, though. I figured it was worth a write up with links to historical work on virtualization security that people can learn from. They also get a reminder about how much more usable their computers are than machines from the mid-70’s. 😉

Nick P November 22, 2014 3:52 PM

Update on research for 2014 in secure virtualization. Some good finds.

Multi-tiered security architecture for ARM via the virtualization and security extensions
https://www.sec.in.tum.de/assets/Uploads/lengyelshcis2.pdf

Like Qubes and server solutions, they leverage VM’s on top of Xen. They make special use of Xen’s mandatory control capabilities. They also include an ARM TrustZone component to do boot-time system integrity and run-time hypervisor integrity checking. ARM allows you to customize what’s in hardware, potentially mitigating the Dom0 problem with other published techniques. This will top out at medium assurance for software and hardware.

Detangling Resource Management Functions from the TCB in Privacy-Preserving Virtualization
http://ramsites.net/~lim4/Esorics.pdf

This is innovative work in cloud-style virtualization that aims at increasing flexibility and privacy while minimizing TCB. This work, MyCloud SEP, starts with the prior MyCloud work with a claimed 6KLOC TCB. MyCloud SEP puts everything outside the hypervisor-mode TCB except access control, memory isolation, and the scheduler. Management of disks, VM’s, etc is in a less privileged mode that the hypervisor can restrict. It’s an interesting design that might get medium-high assurance on a non-x86 architecture due to its simplicity. The problem is x86 platforms are complex and possibly subverted. That makes their PRISM reference funnier and means you shouldn’t trust it if you don’t trust x86. If you’re just worried about black hats, this has potential.

Secure Virtualized Stack
http://www.fbcinc.com/e/cif/presentations/Nahari_Isolation.pdf

Excellent presentation of issues and tactics for ARM TrustZone. Proposes a strategy for using it with the Secure World component being around 23Kloc and FOSS. Interesting work.

One-way isolation execution model based on hardware virtualization (Chinese)
http://pub.chinasciencejournal.com/JournalofSoftware/22656.jhtml

This one looks like it might be interesting. Problem is it’s mostly in Chinese. It would be great if someone translated it for an international audience.

Security and virtualization – a survey
https://tel.archives-ouvertes.fr/hal-00995214/document

A great summary of virtualization approaches, attacks, and some specific designs. Interesting that many of the advances in virtualization all cite one work in the 70’s that did all of that at once and was forgotten. Haha.

NIST draft on hypervisor protection recommendations
http://www.securityweek.com/nist-seeks-comment-hypervisor-security-guide

They’re encouraging people to give them feedback. If anyone has practical suggestions that the target audience (businesses/government) would apply, go ahead and tell them.

Jacob November 22, 2014 4:56 PM

@ Nick P

Great references on secure virtualization.
I especially enjoyed reading the French “Security and virtualization – a survey”. Good stuff.

Thanks.

Wael November 22, 2014 5:52 PM

@Clive Robinson, @Nick P,

Lest it cause the red stuff to squirt out your ears 😉

Funny you say that! You are worried about @Nick P shooting blood out of his ears at a time when you’ve consumed enough salt (Lot’s wife) to make you squirt blood out of your eyes? 🙂

Wael November 22, 2014 6:01 PM

@Nick P,

Update on research for 2014 in secure virtualization. Some good finds.

Excellent! Thanks for the updates… Good weekend reading material.

Justin November 22, 2014 6:03 PM

@ Nick P

I do apologize for flaming you on the other thread… But in regards to virtualization:

Theo de Raadt has this to say about it. And Xen has had, and continues to have, numerous vulnerabilities.

What security objective are we trying to accomplish with virtualization? We are in effect admitting that the applications we are running and the operating systems we are running them on are not secure, and instead of improving their security, we just put another logical layer in there around that stuff, and somehow that’s supposed to make it more secure.

Even if virtualization could be done securely, what does that gain us? So the VM we are doing sensitive stuff on is compromised, and the solution is just to destroy it and spin up another VM if we get hacked?

I don’t mean to knock the concept of virtualization; it is very useful for many reasons, but it doesn’t by itself address the security of whatever you are running inside the VM.

Wael November 22, 2014 6:20 PM

@Justin,

Theo de Raadt has this to say about it

He is referring to a certain implementation of virtualization — not to the concept of virtualization. In addition, he attributed the weakness to incompetency.

What security objective are we trying to accomplish with virtualization?

Many, not least of which is the application of the principle of compartmentalization and segregation, and the ability to start from a known state. There are more reasons…

and the solution is just to destroy it and spin up another VM if we get hacked?

Yup! But it also depends on the nature of the “hack”.

but it doesn’t by itself address the security of whatever you are running inside the VM.

The same can be said about other “principles”. Replace “virtualization” with POLA, and your statement will remain true.

kisssolution November 22, 2014 7:33 PM

There are very simple ways to compromise an operating system. Windows is especially easy, as M$ provides power tools. Poking around, I noticed a certain certificate for an M$ Windows OS file in Win 7 that could happily be attached to other M$ Win 7 OS files (and previous Win versions). I wrote a little script with Notepad, renamed it, and with power tools and a little trial and error attached said certificate to the file, made it replace part of the bootloader, and had it copy itself into Volume Information and the MBR. So I tested it, and it would write to the system on every reboot and corrupt it. Brilliant; the only solution was to format the MBR and partitions. Cool, OK, let’s see how good my bank’s security is, so I emailed them: “Dear Bank, found weird file attached to a fake email which was supposed to be from your bank, I’ve attached it in a zip file and the password is ‘password’.” Bang, next day half the bank’s network starts going down. Did I mention this modified system component copies itself and starts trying to copy itself to all networked systems, that you can also include a self-mailing feature if you wish, and that it has a valid M$ OS file certificate? (Maybe it came with a Windows update, I really don’t remember. I never added a BIOS-modifying component, which would be quite easy as there are plenty of free BIOS-modding tools. Surely they’ve patched the system a little better since then, as this was quite some time ago and I don’t really remember exactly how I did it. I was just bored and poking around at all the OS files; plus those systems got the complete MBR and partition format, and I don’t remember if I copied it to a USB stick or whatever; best to nuke stuff that wrecks my systems, I figure.) I’m also a bit of an idiot, my compiling skills I consider quite poor, and I have fallen on my head a couple of times.

There are a couple of tools you can use to look for hooked files in the OS. If the spies have certificates they may not really help you, but looking never hurt anyone (unless it was a hit job, I guess). Sysinternals has a bunch of free programs: Process Explorer, TCPView (which is handy for looking at what connections your Windows OS is making, since the Windblows command line is anything but a proper terminal), and a few others. There are also RogueKiller, TDSSKiller, or something similar, which let you look at system hooks and partition info. You will see a bunch of legitimate files for programs you may have, and the odd one will show up with “unknown” for its publisher; and anyway, like I said, the spies probably have a bunch of signing certificates they use. But you should at least get an idea of the processes running in your system on a daily basis, and at least disable a few unneeded system services, auto-update features (you should really be updating everything yourself, as it’s a good habit and the less useless phone-home crap running the better), and the useless extra junk that runs in your tray at startup.

Now when you press Ctrl+Alt+Del and look at Task Manager (or use Process Explorer) you’ll actually know everything that should be there and what it does, and the list should hopefully be small now that you’ve killed off the unneeded crap. There are lots of good guides to unneeded Windows services that can be Disabled, or at least set to Manual so they only run when needed; a few need to be left on Automatic. You should also remove any unneeded Linux daemons. But you know, the government decides developing all its new sites using SQL is smart, so there are heaps of “geniuses” out there.
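For those who prefer to script that habit, here is a minimal, hedged sketch of the same “know what’s running and what’s talking to the network” check, using the third-party psutil package as a rough cross-platform stand-in for the Process Explorer / TCPView listings (listing connections may need elevated privileges on some platforms):

    import psutil

    def snapshot_processes():
        # Rough equivalent of skimming the Process Explorer list.
        for p in psutil.process_iter(['pid', 'name', 'exe', 'username']):
            info = p.info
            print(f"{info['pid']:>6}  {info['name'] or '?':<25} {info['exe'] or '?'}")

    def snapshot_connections():
        # Rough equivalent of TCPView: who is talking to the network right now.
        for c in psutil.net_connections(kind='inet'):
            if c.raddr and c.status == psutil.CONN_ESTABLISHED:
                print(f"pid={c.pid}  {c.laddr.ip}:{c.laddr.port} -> "
                      f"{c.raddr.ip}:{c.raddr.port}")

    if __name__ == "__main__":
        snapshot_processes()
        print("--- established connections ---")
        snapshot_connections()

Run it occasionally and diff the output against a known-good snapshot; it won’t catch a signed implant, but it makes the everyday baseline visible.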

Nick P November 22, 2014 9:55 PM

@ Justin

Apology accepted. I have my moments too. 😉

re virtualization

There are a number of problems that virtualization can be a useful tool in solving. Here’s a benefit sheet to start with:

http://www.infoworld.com/article/2621446/server-virtualization/top-10-benefits-of-server-virtualization.html

And remember that, although some custom OS’s might not need it, we largely use virtualization for the likes of Windows and UNIX/Linux that have “needs clean slate redesign but we’re stuck with it” written all over them. Two of the first uses of virtualization were improved resource allocation and legacy.

Legacy. So, you have to run apps that are tied to a particular OS, configuration, or hardware. You have little or no control over that aspect. If you bought it a while back, these might not be made anymore. A virtual machine on modern hardware lets you run your beloved or mission-critical apps without any need for the source code, the original OS, or even the original hardware. The VM software just needs to be designed once and maintained on modern hardware. This makes it cheaper than rewriting every app, with a lower risk of breaking stuff. Real-world examples include old mainframe apps (eg VM/370), modern “Amiga compatibles,” Charon’s VAX/Alpha VM’s, DOS emulators in Windows, and even wrapping up apps tied to old versions of Windows on a current version. I’ve seen all of these in practice, and a business case for each.

Resource allocation. OS’s have efficiency issues. They might stall the system waiting for I/O, have a poorly designed scheduler, not be so great at memory management, etc. Putting any or all of that under the control of a VMM might let you give (or reclaim) unused physical resources to the instances that need them most. This is how virtualization can sometimes improve utilization, giving you more hardware ROI and/or performance.

Isolation. You can put an entire system into a file or piece of physical memory. You can install a bunch of untrustworthy (insecure or unreliable) stuff into it. You can run it without serious problems spilling outside the box. This is great for testing 3rd party code, apps, or especially OS’s. A secure virtualization system can pull this off in face of malicious apps targeting the system. Hence a need for those.

Testing. This builds on isolation but there’s more to it. A virtual machine can let you run arbitrary code on an arbitrary configuration. You might have a bunch of different instances representing different machines. You can load them, change them, inspect the effects of running code, revert to a clean slate, dispose of them, etc without messing with a single piece of extra hardware. These days, even hardware circuits are often simulated in software or on FPGA’s to reduce the cost of designing chips. The most common example, though, is testing a new version of an app on N different machine setups… using one machine. You can even have a test running in a VM instance while you compile other things, or automate the whole loop with scripts.
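As a hedged sketch of that revert-run-dispose loop, driven here by VirtualBox’s CLI (the VM name “testbox”, its snapshot “clean”, and the guest credentials are all hypothetical placeholders):

    import subprocess

    VM, SNAPSHOT = "testbox", "clean"        # hypothetical names

    def vbox(*args):
        subprocess.run(["VBoxManage", *args], check=True)

    def run_test(guest_cmd):
        vbox("snapshot", VM, "restore", SNAPSHOT)    # back to a known state
        vbox("startvm", VM, "--type", "headless")
        try:
            vbox("guestcontrol", VM, "run",
                 "--username", "tester", "--password", "secret",
                 "--exe", guest_cmd)
        finally:
            vbox("controlvm", VM, "poweroff")        # dispose of the dirty state

    if __name__ == "__main__":
        run_test("/usr/bin/uname")

The same shape works with any hypervisor that exposes snapshot and guest-exec primitives; the value is that the dirty state never outlives the test.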

Uptime. Offerings like mainframes, NonStop, clusters, etc already give us plenty here. The use of virtualization allows us to use normal OS’s that may or may not be designed for uptime*. Not designed for it, yet with other compelling reasons to use them over those more expensive solutions. The virtualization solutions can do a number of things to keep sites running, including live migration. That’s cool shit, too.

* “Five 9’s FreeDOS VM’s available for $1,000/year. Anyone? Anyone? Bueller?”

Faster server provisioning. This was in my response on the Qubes article. Qubes and Dresden’s demos demonstrate instant creation of new machines. This naturally has deployment advantages if you set the system up on a few large machines with plenty of resources and you’re only charged for what you use. That’s the mainframe model. I’ll add that you get improved security/efficiency if you extend it to be able to drop unneeded privileges after startup and also selectively include code. That is, the same base code is already in RAM for all to draw on, but each machine has a configuration profile with the minimum it needs to do its job. And that is set to be unchangeable before it starts running its production app.
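The “drop unneeded privileges after startup” idea is an old one; a small, POSIX-only sketch of the pattern, with the user and group names as placeholders:

    import os, pwd, grp, socket

    def bind_privileged_port(port=443):
        # Grab the one resource that genuinely needs root...
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", port))
        s.listen(16)
        return s

    def drop_privileges(user="nobody", group="nogroup"):
        # ...then permanently give root up before doing any real work.
        gid = grp.getgrnam(group).gr_gid
        uid = pwd.getpwnam(user).pw_uid
        os.setgroups([])      # shed supplementary groups
        os.setgid(gid)        # group first, while we still can
        os.setuid(uid)        # after this there is no way back to root

    if __name__ == "__main__":
        listener = bind_privileged_port()
        drop_privileges()
        # ... serve requests with only the privileges actually needed ...

The same logic applies to a provisioning system: acquire resources from the unchangeable profile first, then lock the instance down before its production app ever runs.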

Data center footprint and energy savings. Remember that there’s a lot of stuff on the board besides CPU, memory, and I/O. And utilization often sucks for a regular server OS. So, the resource allocation and consolidation benefits that get you more out of the system also reduce the financial or environmental waste tied to that system.

Desktop virtualization. This is a complex field with a variety of benefits and gotchas that kill many projects. The benefits include most of the ones you see above, with the additional benefit of centralized control over something that is usually decentralized. The thin clients are also easier to lock down. One success story I read involved moving over 200 desktops to one Bull mainframe (with extra capacity for testing) and they reported things performed well. Virtualization schemes are basically emulated mainframes, so they either can or eventually will be able to reliably offer this capability. The biggest security benefit outside centralization & utilization is automated deployment of patches to all users without messing with their individual machines. Even many individualized machines might be scripted modifications of a core image, and this benefit still applies.

Multiple untrusted parties. This builds on isolation and adds security primitives along the lines of MLS/MSL/MILS. The idea is that shared resources are controlled to prevent leaks of information or attacks through them. The VM’s themselves are also controlled in terms of resource use and capabilities. The result is that the same machine can process stuff with different classification levels, belonging to different companies, and so on. The military would love this as it would save them a lot of hardware.

Ephemeral computing. A virtualization system’s isolation and resource management properties can be used to ensure it knows where application data is. It might even encrypt that data as it enters or leaves the processor, as numerous research prototypes have done. The entire system’s contents are connected to one or more keys in the TCB. The virtualization system can wipe a partition or the entire system in microseconds to milliseconds. Residual information from released memory is also encrypted, such that a VM acquiring it gets no leaked secrets even if it wasn’t overwritten. There are numerous benefits to this sort of thing.
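A minimal illustration of the crypto-erase idea behind that wipe-in-microseconds claim, using the third-party “cryptography” package; a real system would do this below the OS inside the TCB, not in application code:

    from cryptography.fernet import Fernet

    class EphemeralStore:
        def __init__(self):
            self._key = Fernet.generate_key()    # lives only in the TCB's memory
            self._box = Fernet(self._key)
            self._blobs = []

        def write(self, data: bytes):
            self._blobs.append(self._box.encrypt(data))

        def read_all(self):
            return [self._box.decrypt(b) for b in self._blobs]

        def crypto_erase(self):
            # Dropping the key *is* the wipe; the ciphertext can stay behind.
            self._key = None
            self._box = None

    store = EphemeralStore()
    store.write(b"session secret")
    print(store.read_all())
    store.crypto_erase()    # takes microseconds, however much was written

Destroying one small key renders every byte written under it unreadable, which is why released memory or disk blocks need no overwriting pass.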

So, there are numerous benefits of virtualization that you get without changing the code running in it. Changing the code, as in VM-aware apps or paravirtualization, can get you even more benefits like isolated I/O or accelerated computation even in OS’s or apps that don’t natively support it. The last thing we want, though, is to try to get these benefits and have those evil hackers “pwn” us in the process. So, if virtualization is beneficial, then securing it makes a hell of a lot of sense.

And there’s a lot of good work in that direction.

Nick P November 22, 2014 10:09 PM

@ Justin

Forgot to comment on the two links. Theo de Raadt’s post was hilarious and true for common x86 virtualization schemes. Yet, those designed for higher assurance had vastly less code (and privileged code) than others. LynxSecure and the Nova microhypervisor have had formal methods applied to them, among other things. Going by vulnerabilities per line of code, these should be vastly better at isolating things than typical OS’s. And if they have lightweight containers for software components and for OS’s that run side by side, that’s even better.

The Xen link is better. I had no idea they’d found so many attacks on it. I figured it was a lower two-digit number if it was as good as proponents said. They really went all out in the low assurance methods to hit almost 150. In all fairness, the vast majority were denial-of-service attacks that just crash or slow things down. Not all vulnerabilities are created equal. I usually consider an attack that stops an app or crashes my system a lame annoyance compared to one that gets code injection. There was a much lower number of those than in most software, and so Xen still has a security benefit over them for that reason. Again, there’s still better stuff than Xen, and there are efforts to boost the security of Xen itself (eg Xenon).

So, Xen and common x86 virtualization issues aren’t arguments against virtualization. They’re arguments against those virtualization schemes that also show virtualization isn’t any easier to get right than other things. It has benefits but developers must work just as hard to have them without exposure to many attacks. I’m certainly not ever recommending that people just throw something in a VM and they’re secure. At best, I recommend that for protection against reliability issues, behind the scenes bloat, or vanilla malware not targeting your VM software. Still waiting for something virtual I’d truly trust on modern hardware and running modern software.

Chris Abbott November 22, 2014 11:23 PM

@Nick P.
@Clive

Forgive me for being a layman here; I only have a little experience when it comes to virtualization. This is the most over-my-head thing I’ve seen on this blog. Obviously, you would have to trust the hardware, a point you guys made earlier. But, in terms of having everything run in a separate VM, how would this be better than or different from having multiple OSes/partitions on a single machine, each using FDE, making it more difficult for malware on each to affect the others (aside from boot-sector issues; maybe you could use a secure-boot-type scheme or have the BIOS run a hash test on it before it loads)? If they’re all in memory at the same time (like VirtualBox), this would seem to be pointless, and if they’re not, then how is it different from multi-boot? Nick, I like the idea of things being encrypted going in and out of the CPU. That sounds like it could be useful. Aside from all that, I understand the concept in the article, but it seems to me like it’s problematic, and I don’t understand everything else being discussed.

Daniel November 23, 2014 12:46 AM

@Chris Abbott

What do you need a persistent OS for? The real value of a VM for the individual user (not business/government) is that the OS is not persistent, so one can make sure one is starting with a fresh, uninfected and uncompromised OS. Sure, one can try to achieve the same thing with a multiboot system, but why bother? Make a clean OS on a VM, copy it, and when done with the copy destroy it and start again with another copy. An OS should be like a child’s diaper: as soon as it gets dirty, throw it away. A VM doesn’t solve every problem, of course, especially if someone breaks through the VM and gets to the host machine, but it is a huge step forward.
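A hedged sketch of that copy-use-destroy workflow using VirtualBox’s CLI (the base VM name “clean-base” is a hypothetical placeholder; any hypervisor with a clone primitive works the same way):

    import subprocess, uuid

    def vbox(*args):
        subprocess.run(["VBoxManage", *args], check=True)

    def fresh_session(base="clean-base"):
        # Clone the pristine image for this session only.
        name = f"session-{uuid.uuid4().hex[:8]}"
        vbox("clonevm", base, "--name", name, "--register")
        vbox("startvm", name)
        return name

    def throw_away(name):
        # The "diaper" step: power off and delete every trace of the clone.
        vbox("controlvm", name, "poweroff")
        vbox("unregistervm", name, "--delete")

The base image stays pristine and only gets touched when you deliberately update it; every working copy is disposable.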

Thoth November 23, 2014 1:02 AM

@Nick P, Clive Robinson
It would be a nice idea if seL4 kernels could be virtualized: a base seL4 kernel hosting instances of itself, with secure virtualization techniques built to the highest assurance to prevent as many side channels as possible.

If that is going in the direction of CC EAL 7+, then formal proof, formal lifecycles and all that gotta be considered as well but it would be highly beneficial.

Yet another method to rein back the downward spiral of ITSec and all that stuff, and to put the ability to better control their own lives into the hands of everyone.

Subito November 23, 2014 8:33 AM

@Daniel
“An OS should be like a child’s diaper: as soon as it gets dirty, throw it away.”

Very appealing idea in principle. But what do you do about security updates? Do you download a new OS image, configure it, download your additional packages, patch whatever needs patching and burn the whole thing onto a new DVD as your read-only virtual image every time a new update is released for any binary in your system? In that case, you could be repeating this whole process up to two or three times a day.

Grauhut November 23, 2014 9:50 AM

Using virtualization as an exoskeleton for insecure operating systems would imho be a good idea. The fewer resources this skeleton needs, the better.

My personal choice would be some kind of microkernel and hypervisor that allows the to-be-protected OS to run with VT-d-style direct hardware access to the graphics hardware and input devices.

In parallel there should run an instance of a security-analyzer OS with read access to all network traffic and to the memory and storage of the main OS, able to automatically recognise the main OS. The security analyzer would send information about recognized threats, plus keep-alive pings, to some kind of FIFO buffers readable by the main OS. All communication with the microkernel system and the security OS should take place via serial console ports. The security scanner should be able to trigger IPS activity.

There could be a second security subsystem for VPN and proxy jobs.

The whole framework should work like some kind of antivirus and network-security sandbox suite with behavioural anomaly detection, completely inaccessible to malware of any kind. It should be started by a BIOS replacement like coreboot.

Is there something like this available somewhere? 🙂

Grauhut November 23, 2014 10:22 AM

@Nick P “Still waiting for something virtual I’d truly trust on modern hardware and running modern software.”

What about KVM-L4?
os.inf.tu-dresden.de/papers_ps/peterschild09_vtds_virtual_machines_jailed.pdf

Daniel November 23, 2014 12:12 PM

@subito.

You are missing the entire point. With a disposable OS it doesn’t matter if someone hacks it, installs malware on it, whatever, because just as the OS is not persistent, neither are the attacker’s efforts. It all goes poof.

Of course this doesn’t prevent any damage the attacker might do while the ephemeral OS is live or if they should get access to the host OS. But there are other ways to deal with that issue; one doesn’t use a VM as one’s sole security measure. So while one would want to update the original OS from time to time just to not make any attacker’s life easier, it wouldn’t be the focus.

Jason Soule November 23, 2014 1:00 PM

I just wanted to say that I really enjoy reading the comments on these squid posts. They are very interesting.

Apples November 23, 2014 1:19 PM

23 Nov 2014: Regin

“An advanced piece of malware, known as Regin, has been used in systematic spying campaigns against a range of international targets since at least 2008. A back door-type Trojan, Regin is a complex piece of malware whose structure displays a degree of technical competence rarely seen. Customizable with an extensive range of capabilities depending on the target, it provides its controllers with a powerful framework for mass surveillance and has been used in spying operations against government organizations, infrastructure operators, businesses, researchers, and private individuals.

It is likely that its development took months, if not years, to complete and its authors have gone to great lengths to cover its tracks. Its capabilities and the level of resources behind Regin indicate that it is one of the main cyberespionage tools used by a nation state.”

http://www.symantec.com/connect/blogs/regin-top-tier-espionage-tool-enables-stealthy-surveillance

http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/regin-analysis.pdf

(Yes, Windows)

Separately, @Nick P, thanks for the virtualization links.

Subito November 23, 2014 1:24 PM

@Daniel
“Of course this doesn’t prevent any damage the attacker might do while the ephemeral OS is live or if they should get access to the host OS. But there are other ways to deal with that issue”

Very, very bad idea.

cepha November 23, 2014 1:47 PM

Daniel, what you’re saying doesn’t make any sense. You’ve got an unpatched vulnerable system, you get pwned (e.g. you get doxed, your credit card details and passwords are stolen, etc.), but it’s ok because after you reboot you will revert to your original vulnerable state once again. WTF?

Nate November 23, 2014 2:26 PM

@Apples: Nice going Symantec, it only took you six years to decide to report on this malware tool first seen in the wild in 2008. That’s some stunning front-line response you have right there.

I guess they finally got the memo saying ‘okay you can burn this one now, we’re done with it’?

Justin November 23, 2014 2:35 PM

@ Wael, Daniel

I think Subito and cepha have a valid point here, and one which virtualization does not address.

@ Nick P

You mention running legacy apps as a major reason for virtualization in the name of security. But many of these legacy apps and operating systems run on x86 architecture, which is complex and very difficult to virtualize securely. So I think Theo de Raadt’s argument has at least some merit.

Nate November 23, 2014 2:48 PM

@nickp: Virtualisation has always seemed strange to me. If you have to virtualise an entire OS (and we do), isn’t it basically an admission that the OS itself has utterly failed in its prime mission of providing virtualised access to the hardware and separating the processes on the machine? That’s what an OS’s only job is, and it can’t even do that properly.

We’ve already got the ‘process’ as an abstract unit of virtualisation (going back to the 1970s), but it’s a leaky abstraction because of the filesystem and other machine resources. So we went up one layer to ‘virtual machine’ (also a concept invented in the 70s), and called the new OS kernel the ‘hypervisor’. Now our machines are networks of dynamically spawned machine images communicating via TCP/IP. But I’m guessing that won’t be enough for long, and then we’ll need an … ‘ubervisor’, maybe, to emulate a private network of hypervisors emulating a cloud of machines running processes running language interpreters running language stacks running frameworks running objects running ….

Didn’t we decide (also in the 1970s!) that this arbitrary levels nonsense didn’t scale, and we should settle on one unit of abstraction, allow you to link any number of them up in any way you want, decouple them all from the hardware and call it a day? And didn’t we call that unit of abstraction the ‘object’ and invent Smalltalk?

And then we immediately threw it away and created C++ and went right back to binding everything to the hardware and we’re not yet done learning the painful lessons the 1970s taught us.

As a programmer, one of the first rules I learned was ‘impose no arbitrary limits on computation’. Either do something not at all, do it once, or do it an unbounded number of times; and any component that can’t be recursively nested is probably broken. I also learned ‘shared global variables are very very bad, use private namespaces instead’.

And yet, what are our operating systems based on? Per-machine filesystems, and Internet services, which are giant shared global variables. So when we install software, whooomp, there goes the global namespace, all consistency is broken and it’s like debugging raw machine language in the pre-Algol days. And of course you can’t nest global namespaces inside other global namespaces, so there goes recursion also.

As operating system designers we’ve not even bothered to play by the same rules we taught programmers, and that just seems… insane!

Nate November 23, 2014 3:04 PM

What I would like would be a language that could compose any system at any scale using exactly the same primitives (and very few of them, with well-defined behaviour):
* Gates on a chip
* Chips on a circuit board
* Records in a database
* Pages in a website
* Comments on a web forum
* GUI controls in a window
* Windows on a desktop
* Processes in an OS
* Machines in a network
* Services on the Internet

More and more our programs are networks; certainly all the levels ‘below’ and ‘above’ our application programs are much more parallel than they are serial. We need to be able to ‘program the network’ in exactly the same way we program our software, especially as we move into software-defined networks and clouds. And we need the same software development toolset – editing, debugging, versioning, forking, merging and reverting – for every kind of data and configuration data at every level.

Imagine if we were doing maths not just with Roman numerals, but with completely different number and base systems for numbers of different sizes! But that’s what it feels like to program web and cloud systems today.

It seems like it would all go a lot smoother if we stopped reinventing the ‘how to compose networks out of components’ wheel entirely with incompatible languages and semantics for each level, and just did it once, and did it right.

Justin November 23, 2014 3:34 PM

I like that idea, Nate. Is that the rationale behind that new NSA-funded programming language called “Wyvern”?

Nate November 23, 2014 4:03 PM

@Justin: I hadn’t heard about Wyvern. I guess this would be this one? http://www.cs.cmu.edu/~aldrich/wyvern/

I’m a bit iffy on strict typing to be honest. I like the idea of strict guarantees of behaviour of software components; but most type systems seem to deliver far less than that, for far more complexity. And complexity is always the enemy of security. I would love to see something like a type system but that simply declared and proved generalised logical assertions. That heads towards something like Prolog, which I think hasn’t seen nearly as much development since the early 90s as it deserves.

There are several problems I see with type systems as we have them today.

One, you often can’t specify the information you actually care about: I can say, eg, that I have a function that takes two integers and returns a third. In good type systems (like Ada) I can restrict it further and even say ‘this takes two Airspeeds and also returns an Airspeed’. I might not, however, be able to declare that ‘it is always the case that the third value is positive, nonzero, and is the sum of the other two plus one’. That mathematical relationship might be what other components depend on to be true, yet I can’t assert it as a requirement of the interface to the component, and fail the component as broken (and fail everything up to and including the source code of the build) if it gives me something else. That seems like a problem. The type signature of the function/method call gives me less information than I need to verify correctness, reliability and security.
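A runtime contract is a weak substitute for the static proof being asked for here, but it makes the missing relationship concrete (a Python sketch with made-up names, used only for illustration):

    def postcondition(check):
        # Attach a checked relationship between a function's inputs and output.
        def wrap(fn):
            def inner(*args):
                result = fn(*args)
                assert check(result, *args), f"contract violated by {fn.__name__}"
                return result
            return inner
        return wrap

    # "the result is positive, nonzero, and is the sum of the two inputs plus one"
    @postcondition(lambda r, a, b: r > 0 and r == a + b + 1)
    def add_plus_one(a: int, b: int) -> int:
        return a + b + 1

    add_plus_one(2, 3)    # passes the contract
    # A buggy reimplementation returning a + b would fail it immediately,
    # but only at runtime -- the point is that the type signature alone
    # cannot state or enforce this relationship at build time.

The complaint above is precisely that this check happens after the fact rather than being part of the interface the build can reject.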

Two, unless you have PURE functional programming, any function or method call you make, even with the correct type signature, could make any arbitrary side effect, changing any part of the operating system or even the Internet, and you wouldn’t know. This is a HUGE security problem IMO. We try to restrict it in various ad-hoc ways with capabilities and credentials and security rights, but we don’t have any standard formal mathematical model for doing so. I think we need to go full pure-functional to have any hope of correctness in future.

Two point five: I think to make pure-functional I/O work correctly, we really need to go to a ‘reactive’ system such that we can define functions over time-varying I/O channels and then let an underlying signal engine work out when to fire them. There are various problems still to be sorted, but it seems like the only ‘provably correct’ approach to me, and it seems like it would do a lot to reduce the complexity of GUIs and Internet services.
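A very small sketch of the ‘reactive’ shape described here: values defined once over time-varying inputs, with an engine pushing changes through the graph rather than the program polling. This is a toy, not any particular framework:

    class Signal:
        # A time-varying value; dependents are re-fired when it changes.
        def __init__(self, value=None):
            self._value = value
            self._subscribers = []

        def set(self, value):
            self._value = value
            for fn in self._subscribers:
                fn(value)

        def map(self, fn):
            # Define a derived signal once, over all future values.
            out = Signal()
            self._subscribers.append(lambda v: out.set(fn(v)))
            return out

        def subscribe(self, fn):
            self._subscribers.append(fn)

    keystrokes = Signal()
    shouted = keystrokes.map(str.upper)
    shouted.subscribe(lambda v: print("->", v))

    keystrokes.set("hello")    # the engine pushes the change through the graph
    keystrokes.set("world")

The derived computation is declared once and never imperatively re-invoked, which is the property that makes this style attractive for GUIs and I/O channels.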

Amazon’s new ‘Lambda’ service intrigues me for this reason as it looks like a move toward reactive in the cloud. But I want to see reactive at every level of the software stack, especially at the desktop where GUI programming is just an awful mess of insecure, overly baroque nonsense.

Three: Type systems really, really don’t play well with Object Oriented Programming’s idea of ‘classes’. A class is not a subtype and that’s all there is to it. The problem comes from two things: the fact that a class can override any inherited method (so it becomes no longer a member of the inherited type), and also that OO is still imperative and has side effects, so all of the problems of Point Two above. We fudge and hedge around this and play pretend games, but for mathematical proofs of correctness it’s nonsense to assume a class is a type. I don’t know how to solve this but we should probably rethink the whole idea of inheritance and move to a model of composition instead; it seems like the only approach that has a nonzero chance of success.
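A concrete version of ‘a class is not a subtype’: the override below is legal and type-checks in any mainstream OO language, yet it silently breaks the behavioural property callers rely on (illustrative names only):

    class Counter:
        def __init__(self):
            self.n = 0

        def bump(self) -> int:
            # Contract callers assume: each call returns a strictly larger value.
            self.n += 1
            return self.n

    class ResettingCounter(Counter):
        def bump(self) -> int:
            # Same signature, so it "type checks" as a Counter -- but the
            # behavioural contract (monotonically increasing) is gone.
            self.n = 0
            return self.n

    def run_until(counter: Counter, limit: int):
        while counter.bump() < limit:
            pass

    run_until(Counter(), 3)              # terminates
    # run_until(ResettingCounter(), 3)   # would loop forever despite "being" a Counter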

tldr: the software industry since the AI Winter has for the most part simply avoided thinking about provable correctness at all, and just hoped we could get by with typechecking, testing, and after-the-fact scanning and patching. Reality has proved we can’t. Now we get to reinvent everything from the 1980s on. We have a teachable moment here; I wish we could take a good strong look at the fundamental semantics that we’re building everything on before we build too many more layers on top.

Nate November 23, 2014 4:39 PM

Looking briefly at Wyvern, I guess it’s about creating typed domain specific languages (eg, TSLs). That’s a start, but I hope it doesn’t end there.

I think DSLs are really important for the future (every Internet protocol – in fact every interaction between any software component at any level – could be seen as a DSL), and being able to define protocols directly as languages would, I think, reduce a lot of complexity. Or at least, would allow us to

But again… Hindley-Milner typing is probably better than nothing, except when it’s worse than nothing. Once again, I think we’re making this all far more complicated than it needs to be, and yet not quite providing ourselves the tools to define the information we really care about.

What is it that we talk about when we talk about a ‘type’? I’ve not yet seen a particularly good answer to that question. I suspect we would do much better to think about arbitrary sets, and generalise our thinking to fully-fledged set (or relational) operations. Perhaps a type (in computing) is something like ‘a set with a definition that is proved to terminate, ie, composed of provably-terminating operations over such provably-terminating sets’. But wouldn’t it be simpler all round if we just said that to start with instead of creating unclear concepts that don’t let us do general purpose programming with them, then bolting that as a black-box subsystem onto the side of an unrelated general purpose programming language? Perhaps we don’t need both ‘types’ and ‘functions’ but one concept which does the work of both – ‘set’ or ‘relation’.

After all, in real-world computing, even in generalised Turing-complete languages which are theoretically non-terminating, we really only care about results that terminate – if it doesn’t terminate, it smashes the stack. We’re not doing abstract mathematics over categories of infinity, we’re doing something more like constructive logic with finite values. So our main language and our type system both need to terminate. It feels like both our languages and our type systems are reaching towards the same set of specifications, but from different directions. If we let the two intersect, perhaps we’d have one language which was exactly what we wanted rather than two which don’t quite work, and which interact in unexpected ways.

Nate November 23, 2014 5:02 PM

Shorter Nate: A language which can’t be used to talk about itself, isn’t actually a language.

Because I just said that in English, you all understood what I meant, and nobody hit a Russell paradox trying to parse it.

If we believed type theorists, no English dictionaries or grammar textbooks could be written in English – they’d have to be written in a separate, special ‘meta-English’. And yet, there they are.

We’re doing something wrong if our compilers can’t do this.

Daniel November 23, 2014 6:06 PM

Nate asks the following question: “If you have to virtualise an entire OS (and we do), isn’t it basically an admission that the OS itself has utterly failed in its prime mission of providing virtualised access to the hardware and separating the processes on the machine?”

No.

The underlying reason to use virtualization is convenience. There is nothing a virtual OS can do that a regular OS cannot do; the difference is that a VM can do it better, faster, cheaper. From a security point of view the real advantage of VMs is not their software capabilities but the way they expand behavioral possibilities for the user.

A properly configured and used VM makes life more difficult for some attackers. It especially makes life more difficult for the kind of attacks that the general public sees most often: malware, root kits, trojans and the like. It provides less protection if someone is targeting one’s machine specifically.

Let’s take a real-world scenario. Imagine there is Bill P., who wants to do on-line banking every Wednesday. He’s a regular old man, and during the week he visits websites from time to time that might contain hostile files: porn sites, political blogs, etc. Now, he knows his browsing habits aren’t the safest, but he doesn’t want anyone to steal his financial details because of a careless mouse click. So there are several things he could do.

One thing Bill P. could do is load up his OS with a zillion different types of malware detectors: Windows Defender, MalwareBytes, McAfee Antivirus, that type of thing. The problem is that all this shit is bloatware. He has to learn it all which is a pain, so he probably configures it wrong, and even if he does configure all his software correctly a good hacker will write around these common programs and test to make sure his hostile file doesn’t get caught.

Another thing Bill could do is that every week before his banking session he could wipe his hard drive completely and reinstall his OS. The good news about this is that he doesn’t have a lot of bloatware to deal with and he knows that any root kit or malware is gone because it’s been completely deleted and overwritten. The problem is that it’s a PITA. He has to make sure his drive is wiped, even the boot sectors, and he has to waste time reinstalling the OS and all the updates.

So here comes a VM to the rescue. All he has to do is make two VMs. One VM is his dirty VM that he plays around with during the week. He never uses this VM to transmit personal data. The other VM is the one that he uses only once a week for the 1/2 hour when he’s doing his banking. He only uses that VM to visit the banking site and never anything else, and he never uses his dirty VM to do banking. He now has the best of both worlds–no bloatware and a guaranteed virus free OS.

Now is this set up foolproof? No. But for the average home user it’s (a) cheaper (b) simpler (c) faster and (d) by no means any less secure than what he was doing before. But the best thing about this set-up is that it shifts the security model from one of hardware/software to one of behavior. If Bill is really cautious he can rotate out his VMs however often he feels like it. In fact, for the tinfoil hat crowd one could create a new VM for every single website visited.

If one used a unique VM for every single website visited there is no malware in the world that can compromise your data unless (1) it can bust the VM container and infect the host machine, which is wildly improbable unless someone is after your host machine specifically or (2) it can intercept your data during the individual website visit which requires both that the website be infected and that the virus is coded in such a way that it does not require a reboot for the OS changes made by the virus to take effect. And even if this does happen by definition the damage is limited to the data transmitted to that website and nothing else because the next website visited has a brand new OS.

The point isn’t the specific scenario I just spun. The point is that VMs offer tremendous flexibility. Everything that Bill P does with a VM he could do without a VM. He could simply have two different physical machines and accomplish the same thing. But he couldn’t accomplish it as easily.
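For what it’s worth, the weekly “restore to clean, then bank” step can even be scripted. A minimal sketch using VirtualBox’s command line (the VM and snapshot names here are made up, VBoxManage must be installed, and the snapshot is restored while the VM is powered off):

```python
# Sketch: roll a VirtualBox VM back to a known-clean snapshot, then boot it.
import subprocess

BANKING_VM = "banking-vm"      # hypothetical VM name
CLEAN_SNAPSHOT = "clean"       # snapshot taken right after a fresh install

def fresh_banking_session():
    # Discard all changes since the clean snapshot (VM must be powered off)
    subprocess.run(["VBoxManage", "snapshot", BANKING_VM,
                    "restore", CLEAN_SNAPSHOT], check=True)
    # Boot it for the half-hour banking session
    subprocess.run(["VBoxManage", "startvm", BANKING_VM, "--type", "gui"],
                   check=True)

if __name__ == "__main__":
    fresh_banking_session()
```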

Clive Robinson November 23, 2014 7:33 PM

@ Nate,

You are looking at the issue the wrong way around, that is you are coming from the 20,000ft view hoping to find something solid to land on. You really should be looking from below the ground to find solid foundations to take off from first…

Fundamentally our computers are switches that are arranged into logic gates, which are further arranged into functional logic groups. These are usually registers, ALUs and control units that implement microcode from what is effectively a ROM that expands an array of bits into a much larger array of bits, whose state is sequenced in a –supposedly– reliably defined way to perform a basic data move, integer arithmetic or logical operation.

From a security perspective, if you cannot ensure the reliable behaviour of these basic functions, then everything you build on them is going to be as unreliable.

However the opposite is not true: even if you do ensure these basic functions are reliable and correct, that guarantees neither correct nor reliable operation at higher levels.

Take data for instance, it consists of one or more bits of information stored in some format in memory. How do you ensure that memory is not tampered with by the CPU under a different set of instructions than those you think it should be?

The simple answer is that unless you enforce the behaviour in hardware somehow then you cannot. Whilst we have MMUs and the like that might do this with large regions of data, we cannot do it on mainstream hardware for simple data types. Whilst it is possible to do it for some simple data types, there are so many data types that it would not be feasible to do it for more than a tiny fraction of them. It does not matter a jot what you do in the software, because the hardware is always going to be able to override it in some way, intentional or otherwise.

Thus the reality is that typing of any form in software is not, never was and never will be a security function. It’s just there to stop programmers tripping over their own shoe laces. Sooner rather than later all programmers will go beyond the data types in the language they use, and will invent their own, for which there will be no enforced hand holding. If the programmer does not add the required data type protections into their code for their new data type, then there is nothing to stop errors at that level and above.

The problem is that the data type protections need to go in at a lower point in the software stack than that of the language the programmer is using. Thus they cannot have reliable data type protections at the level available to them…

There is no way around this lower layer issue; it is a side effect of a few awkward observations that boil down to the statement you hear occasionally that “no sufficiently complex system of logic can describe itself”. If you want to be a little more rigorous you can use methods similar to Gödel’s and Turing’s: use a system to convert the protection mechanisms into numbers and then apply the diagonal lemma. If you treat the mechanisms as theorems the result is Gödel’s; if you treat them as computable functions it is Turing’s, which brings you to the halting problem.
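To make the diagonal argument concrete, here is a toy sketch (the halts function is purely hypothetical, and that is the point: no such total function can exist):

```python
# Sketch of the halting-problem diagonalization. 'halts' is a hypothetical
# oracle; the contradiction below shows it cannot actually be written.

def halts(program, argument):
    """Hypothetically returns True iff program(argument) eventually terminates."""
    raise NotImplementedError("no such total function can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program)
    if halts(program, program):
        while True:        # loop forever if the oracle says it halts
            pass
    return "halted"        # halt if the oracle says it loops

# Now ask: does diagonal(diagonal) halt? Whatever 'halts' answers, diagonal
# does the opposite of that answer, so a correct 'halts' cannot exist.
```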

Nick P November 23, 2014 8:06 PM

@ Chris

“But, in terms of having everything run in a separate VM, how would this be better/different than having multiple OS/partitions on a single machine, each using FDE, making it more difficult for malware on each to affect the other (aside from boot sector issues, maybe you could use a secure boot type scheme or have the bios run a hash test on it before it loads)?”

If VM’s instead of partitions, you can run multiple VM’s simultaneously on the same machine, do copies/backups as easily as moving files, have multiple versions of the same apps without DLL hell, and even run the same system on other people’s PC’s so long as you install a VM app on it. There’s also the benefit that OS’s or software not supporting your hardware might be able to run in the VM.

@ Thoth, Grauhut

The seL4 development also came with a user-mode Linux to run on top of it. Most of the L4 family has paravirtualized Linux layers. Verifying that the isolation properties apply to arbitrary inputs from the Linux layers is a shortcut to a near EAL7 virtualization system. Look into Nova microhypervisor as well because it’s designed for VM’s, tries to leverage formal methods for correctness (not necessarily security), and is open source.

Additionally, the LynxSecure commercial hypervisor leverages Intel VT for full virtualization and paravirtualization with Linux. They even went with my suggestion to port their DO-178B OS to run on their hypervisor. Secure hypervisor layer and at least a safe, resource-controlled OS layer for apps. VxWorks MILS kernel does the same thing.

And, Grauhut, thanks for the L4/KVM paper! I can’t believe I missed it given all the Dresden papers I’ve posted and referenced. I figure I saw KVM and got scared away. Looking at it now, it’s amazing they added it with only 100 LoC worth of modifications.

@ Jason, Apples

Appreciate it. 🙂

@ Subito, cepha

The proper use of virtualization involves splitting your activities between sensitive and risky stuff. The risky stuff is in a VM. Your sensitive stuff doesn’t go in it. Risky stuff VM getting compromised doesn’t compromise your system as a whole and you can restore it to clean slate easily. You can actually do that regularly for either system just in case. Other measures or tools are used to try to prevent attacks from hitting you. An updated Firefox with HTTPS Everywhere, NoScript, and Sandboxie is pretty good by itself, for instance. In a VM or VM tied to a LiveCD, even safer.

@ Justin

Theo de Raadt will argue with anyone who doesn’t like it his way. He hates virtualization, mandatory access controls, etc. Let’s leave him aside except for his point that it brings in an extra layer that can fail or be attacked. And people that can’t get OS’s right will probably not get it right. I agree with those points, yet the solutions are obvious: make that layer a lot simpler than what’s above it and code it well. Nova or an L4Linux would be good examples.

x86 is actually kind of a separate issue. It really is crap from a virtualization standpoint, yet that didn’t stop VMware from implementing all sorts of functionality. Intel realized the problem and implemented Intel VT among other features as a solution. Leveraging that rather than vanilla x86 is possibly the best approach. Hypervisors like LynxSecure and Nova microhypervisor do that with a strong focus on simplification and code correctness.

Another option, taken long ago by Transmeta and recently by Loongson, is to emulate x86 at the hardware level. There’s quite a performance hit to doing this so it’s not for performance critical apps. However, it can be useful for legacy, and one can even build in advanced security techniques at the hardware level to spot likely attacks (eg stack pointer overwritten). The cool thing is the chip might be a better-than-x86 chip by default whose microcode is changeable. That lets it be used as RISC with custom instructions, an emulator or something else.

@ Nate

” Virtualisation has always seemed strange to me. If you have to virtualise an entire OS (and we do), isn’t it basically an admission that the OS itself has utterly failed in its prime mission of providing virtualised access to the hardware and separating the processes on the machine? That’s what an OS’s only job is, and it can’t even do that properly. ”

That’s not really an OS’s job. An OS’s job in the marketplace (or even FOSS popularity contest) is to do what users want. This might be run certain apps, have certain features, support some standard, run on certain hardware, and so on. So, OS’s evolve a feature set that meets the demands of their users. (Hopefully…) Demand for personal computers was for them to constantly be faster, cheaper, prettier, and support more personal applications. So, this became what they were good at. Similar for servers, except more focused on running business applications, supporting standards (esp Internet), and costing way less than mainframes or enterprise RISC systems. Note how the kinds of things virtualization does don’t really fit these trends at all until hardware got cheap enough for it to fit servers and corporate desktops…. sometimes.

Virtualization was originally invented as part of the effort to make time sharing practical on hardware mainly used for batch processing. The CP/CMS system on IBM System/360 virtualized all the hardware of the system. Each user running the VM could have their own custom system without changing out the hardware. Just think of having an account on a LAMP web host vs having a virtual private server. You get more control, more flexibility, possibly better performance with your custom OS, no extra cost for the machine (charged for use), and resources are still controlled by the underlying software. Mainframe OS’s supported some of these benefits, but not those that required changing the OS itself.

That team even created the System/370 version of their product, then called VM/370, by simulating a System/370 on a System/360 using their virtualization tool. That you can customize everything from hardware to OS to software with a virtualization tool comes in handy in many use cases. VM also ended up being one of the main ways IBM maintained backward compatibility with arbitrary legacy apps and OS’s on later hardware. Proves virtualization is great for ROI: write once, keep running them. There are apps from the 1960’s still doing their jobs in some companies. And now we’re seeing Windows and NIX servers being done the same way decades later.

Back to your point, though, on a better design. This is totally true and there were better designs. Most were prototypes in academia or failed commercial products. One thing they had in common was to create a standard representation for implementing CPU, memory, I/O, and protection mechanisms. These need to be easy enough for hardware to support and flexible enough to do a lot of stuff. Then, you separate their use from how they’re managed and push the how to upper layers in their own components. The capability-based systems probably did better than most at achieving this. System/38 -> AS/400 -> IBM i shows they can occasionally make it in the market and for a long time. I’d look into those. (Esp look into KeyKOS.)

Even still, IBM added virtualization (PowerVM) to the i Series. The reason was for running legacy and mainstream OS’s side-by-side on same machine. Mainstream OS’s whose source they mostly don’t control. And this is why virtualization is relevant sometimes even if you have a system that wouldn’t need it for any feature you want. The responsibility for the majority of a codebase can be shifted to others while you put a box around it for resource controls and fault/malware containment. Virtualization isn’t going away.

So, the best approach is to simply integrate it with legacy technology and make the kernel/VMM/whatever quite secure. The L4 family led the way, esp OKL4, by creating a kernel that can run native apps on small TCB, do device drivers, borrow them from legacy OS’s in VM’s, run VM’s, and so on. Quite flexible. Look into OKL4, OK Linux, Device Driver Environment (I think that’s the name), etc. Look at DROPS OS and Nizza work, too. You’ll see how you can have the same hardware and core OS/kernel underneath while being flexible enough to do almost anything.

re one language

LISP and 4GL’s already did what you describe for most of that. The trick is to have strong metaprogramming to allow DSL’s to be built (LISP) or to build DSL’s on top of a very HLL (4GL). The 4GL’s were also much easier to use and more readable on the stuff they were meant to do. Haskell DSL’s represent an evolution of them where the advantages of functional programming and DSL’s can merge. This handles correctness, conciseness, amenability to formal analysis, and concurrency, it seems. More research into improving its performance and assessing its security issues is needed. Yet, if you’re looking for ideal, it’s going to be something like 4GL’s that’s better than Haskell or LISP at integrating diverse ones. Or an improved Haskell or LISP. 😉
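To illustrate the embedded-DSL idea with a deliberately tiny example (the names are invented, and real LISP macros or Haskell combinator libraries go far beyond this), the host language’s metaprogramming carries the DSL:

```python
# Sketch: a miniature "pipeline" DSL embedded in a host language via
# operator overloading -- the host does the heavy lifting.
class Step:
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):                # 'a | b' composes two steps
        return Step(lambda x: other.fn(self.fn(x)))
    def __call__(self, x):
        return self.fn(x)

# Domain vocabulary built from ordinary functions
strip = Step(str.strip)
lower = Step(str.lower)
words = Step(str.split)

normalise = strip | lower | words           # reads like a small DSL
print(normalise("  Hello World  "))         # ['hello', 'world']
```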

EDIT: I just read Justin’s and your follow up posts. It seems you figured it out before I got you an answer. At least that confirms my thinking is more likely to be on the right track.

Chris Abbott November 23, 2014 8:09 PM

@Clive

I’ve suspected before that if the hardware doesn’t enforce some kind of security mechanism, then it’s probably impossible to make an application totally secure (I know it’s impossible to make anything totally secure, perhaps reasonably is the word I should use). I know people have had issues with old, but simple, fast languages like C. C has been around since the early 70’s. Yet, if I’m right, programs that are dependent on a lot of mathematical operations almost always use C or C++. I’ve written math intensive programs in .NET and they suck as far as performance goes, comparatively.

My question is: Is there a way to make programs more secure regardless of the current architectures (x86 and what not) without taking a huge performance hit for math intensive stuff, and not adding significant complexity?

Chris Abbott November 23, 2014 8:24 PM

@Nick P.

True:

“If VM’s instead of partitions, you can run multiple VM’s simultaneously on the same machine, do copies/backups as easily as moving files, have multiple versions of the same apps without DLL hell, and even run the same system on other people’s PC’s so long as you install a VM app on it. There’s also the benefit that OS’s or software not supporting your hardware might be able to run in the VM.”

My concern was with VMs using whatever machine they’re on to spill over whatever evil they’ve been tainted with to other VMs/everything else. It seems much less likely that they would do that, which is more secure compared to what we have now, but if this becomes a security mechanism in the future, do you think there would be malware that would evolve into doing that? I know things always evolve and we can’t create a totally foolproof system, but just as a matter of curiosity, is this a long-term solution security-wise?

Chris Abbott November 23, 2014 8:27 PM

@Clive, @Myself

Not implying that the .NET framework is really secure or anything.

What are everyone’s thoughts about MS Crypto API? Just curious…

Nick P November 23, 2014 9:15 PM

@ Chris Abbott

Long term solution is difficult because of the demand for backwards compatibility. If we keep standards, but disregard compatibility, then we just create clean slate secure hardware and software systems. If we stay compatible, then we’re looking at containment, detection, recovery, etc for stuff that’s going to get hit. That’s the default right now. Virtualization, especially if strengthened, is a cheat right now to at least try to keep that mess in one box and maybe also do other things on another box. That other box might even be clean slate.

The current, mainstream virtualization tools have had all kinds of flaws. Malware authors will shred them if they become popular. Actually, many security researchers are doing exactly that due to widespread deployment of some in cloud applications. The good news is there are commercial, academic, and open source efforts that I mentioned above that are trying to make strong, easily evaluated hypervisors. The Genode OS is an example of a clean slate design with more secure native apps, support for legacy ones, and supporting numerous kernels. One it supports is the Nova microhypervisor.

Clive Robinson November 24, 2014 3:55 AM

OFF Topic :

William Gibson and other sci-fi authors have just taken part in a discussion on the UK’s BBC Radio 4 this morning. It ended at 09:45 GMT, which means it will now be available on the BBC iPlayer service for the next seven days for those who were not able to hear the FM broadcast.

Markus Ottela November 24, 2014 4:46 AM

Thoth • November 14, 2014 11:47 PM
“Here’s my temp pub key. Just a heads up, please do assume I am not safe as you noticed 🙂 . We will find a better sec comms channel after all these are done. Markus Ottela’s device would be an interesting use later on.”

Please make sure you always check Github for updates prior to use and audit the code, as Github’s TLS is vulnerable to state-driven MITM attacks (even though it’s ECDHE). A lot of the system’s security comes from conveying to the user the mindset of what good security is. Hopefully the documentation succeeds in that, and the user can then intuitively derive and understand what the source is and isn’t supposed to do, and detect any tampered code.

I pushed out the next version (0.4.12) of polycipher TFC CEV today. There are some minor changes in the security implementation, such as how the Twofish counter operates (the counter is now hashed to increase the hamming distance of the counter value that is XORed with the key to produce the IV for CTR mode encryption). The genKey.py program now adds an extra layer of whitening by hashing the key through Keccak. The user can add an unlimited amount of entropy with the keyboard, which is also fed to the hash function. I dropped the Diffie-Hellman key exchange completely, because in the end I found it to promote bad security practices: if key compromise is suspected, users should have a backup plan to inconspicuously agree on another physical key exchange, and not have / create less secure backup keys.
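Roughly, the idea is something like the sketch below. This is not the actual TFC code, just an illustration of the described derivation using the standard library’s SHA3 as a stand-in for Keccak; the names and parameters are made up:

```python
# Sketch only -- NOT the TFC implementation.
import hashlib

def derive_iv(key: bytes, counter: int, iv_len: int = 16) -> bytes:
    # Hash the counter so consecutive counter values differ in many bits
    ctr_hash = hashlib.sha3_256(counter.to_bytes(8, "big")).digest()
    # XOR the hashed counter with (part of) the key to get the CTR-mode IV
    return bytes(k ^ c for k, c in zip(key[:iv_len], ctr_hash[:iv_len]))

def whiten_key(raw_key: bytes, keyboard_entropy: bytes) -> bytes:
    # Extra whitening: fold user-supplied entropy into the key via SHA3/Keccak
    return hashlib.sha3_256(raw_key + keyboard_entropy).digest()
```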

The message padding scheme was changed and NH.py now recognizes messages based on a TFC-specific header. This allows the ciphertext length to be randomized on Tx.py, so OTR obfuscation will become easier. NH.py could further assist obfuscation by randomizing the length and delays of sent messages. This could add significant protection against traffic analysis, in the sense that conversations could look exactly like standard OTR encrypted conversations. Unfortunately, it doesn’t prevent an adversary from exploiting the NH just to see if TFC is indeed used. At best the obfuscation only increases the number of computers that need to be compromised, and I’m predicting this will become more common in the future.

Feeding noise into the message stream to hide when communication is taking place, as Nick suggested, is also easier to implement, although it’s going to need some research, as I’ve been thinking it should create a persistent random pattern that resembles the way an individual would type. This way an XMPP server could still be used without the risk of the account being banned, and it’s harder for traffic analysers to detect.

The main goal for this version was increasing stability and improving error handling, especially graceful exits with descriptive warnings.

The second goal was clarifying the structure and improving the readability of the source code. I postponed separating the encryption functions, as they are a separate logical section on top of other code and thus not in the way of an auditor. Before I go about separation, I have to double check that no security issues arise.

Thoth November 24, 2014 5:01 AM

@Markus Ottela
Code signing would be useful. You can generate a signature of every file inside and then sign the signature file itself.
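Something along these lines, perhaps (a minimal sketch of the manifest half of the idea; signing the manifest itself would be done with whatever offline key is trusted, e.g. gpg, and is not shown in code):

```python
# Sketch: hash every file in a release into a manifest, then sign the
# manifest (e.g. "gpg --detach-sign manifest.txt" afterwards).
import hashlib, os

def build_manifest(root: str, out_path: str = "manifest.txt") -> None:
    lines = []
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            lines.append(f"{digest}  {os.path.relpath(path, root)}")
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# build_manifest("tfc-release/")   # hypothetical release directory
```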

Using XMPP would be just a temporary solution; a properly designed protocol with security in mind would be better. MinimaLT can be used as a guide.

Markus Ottela November 24, 2014 5:53 AM

@ Thoth:

Re: Code signing
That would only work if I could deliver my public signing key to all users in person, or have it signed in person by enough known and trustworthy people who know how to physically secure their private signing keys. Key servers suffer from the same problems as Github: the CA-based chain of trust is subject to NSLs, exploitation (DigiNotar, Comodo) and government-administered CAs.

Thanks for pointing me to the MinimaLT protocol. I’ll have to look into it.

Clive Robinson November 24, 2014 6:38 AM

@ Nicolai Brown,

Linux has been getting hammered lately. Bash bug, strings systemd, less, etc

Yup, and it will get a lot worse before it gets any better.

The reason is to do with its increasing popularity. For those with a long enough memory, there was a time when Apple Macs were recommended because they had so few exploits compared to the various Micro$haft OSs.

As was pointed out at the time, the lack of malware did not reflect the quality of the code but the fact that Macs were not being used in either sufficient numbers or in ways that would make the ROI for malware writers sufficient to devote any kind of serious effort.

However that changed as more users started using Apple Macs on line and for financial related activities, thus the Malware ROI reached a tipping point and it became rather quickly noticeable as the fan boi’s squealed.

Well, it’s now the turn of GNU and Linux. The important thing to note is that it’s the GNU utilities that are “bashing” security’s head in, and thus it’s not just “GNU on Linux” but “GNU on *nix and any other platform”, which can mean MS WinDoze as well as the BSDs, and just about any other platform to which the GNU utilities have been ported by developers, for developers etc…

For instance I have a Win2K machine that has bash on it, and it’s just as vulnerable as *nix machines with bash, the difference being the moderately different exploit payload.

It is one of the downsides of FOSS that, unlike much proprietary closed source, it gets ported to way, way more platforms, and thus in effect creates a “monoculture”. The upside is of course the usability prior to such attack vectors becoming known, and usually the very fast time taken to either mitigate or remove the vector once it is known. Thus my money is still with FOSS wherever it’s reasonable to use.

As for my Win2K box with bash, no I’ve not yet fixed it and may not do so; it’s hardly used these days so it is powered off most of the time, and even when used it’s only ever in standalone / airgapped mode, and stuff gets shifted on and off it by floppies that only get used with a guard / sluice / diode.

The reason for this apparent security is not paranoia, but mainly the age of the hardware, and the fact that I used to develop software for organisations that were “paranoid about insider trading”, which back then was the “big fine fear”, and thus set rules they actually came out randomly to audit…

Out of The Box Idea November 24, 2014 10:31 AM

Hi, I have been experimenting a little bit with the p0f tool, which does passive fingerprinting of the operating system, together with Certificate Patrol. This is not done with a plugin, so it’s manual, and it works; however, I would hope that someone with programming skills could perhaps implement the idea as a Firefox plugin.

It would alarm you when a host’s passive fingerprinting hash has a mismatch.
Just an idea that I came up with some time ago.
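Something like this crude sketch, maybe (how you would actually feed the p0f output into the browser is the hard part and is not shown; the store file name is made up):

```python
# Sketch: remember a hash of each host's passive OS fingerprint, alarm on change.
import hashlib, json, os

STORE = "fingerprints.json"   # hypothetical local store

def check_fingerprint(host: str, p0f_signature: str) -> bool:
    """Return True if the fingerprint matches what we saw before (or is new)."""
    seen = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            seen = json.load(f)
    digest = hashlib.sha256(p0f_signature.encode()).hexdigest()
    if host in seen and seen[host] != digest:
        print(f"ALARM: passive fingerprint for {host} changed!")
        return False
    seen[host] = digest
    with open(STORE, "w") as f:
        json.dump(seen, f)
    return True
```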

Maybe stupid, maybe not. I also had another idea using traceroutes, alarming if the route changed, but it would create a lot of delay and false indications, so I dismissed it quite fast.

There are however many more ways to fingerprint a destination server, so I think the idea is not that bad.

Chris November 24, 2014 10:39 AM

Clive Robinson: Re Windows 2000
– Cool to see someone else using that OS. I use it a lot in fact;
for Winblows it’s super fast, it kind of does what you tell it to do,
and it’s one of my favourite operating systems to use for various tasks,
since the image is so small.

//Cheers

Chris November 24, 2014 10:55 AM

Re:

@ Chris

“But, in terms of having everything run in a separate VM, how would this be better/different than having multiple OS/partitions on a single machine, each using FDE, making it more difficult for malware on each to affect the other (aside from boot sector issues, maybe you could use a secure boot type scheme or have the bios run a hash test on it before it loads)?”
-Hi, I am not exactly sure if it would, but the way I use VMs is a similar approach to a VDI solution, meaning that you use a golden template where you perform your updates etc. When you are set with that, you bump the version (e.g. 1.0x to a new version), document the changes, make a copy of the template and make that OS non-persistent.

I don’t know how sandboxed a non-persistent VM is, but I still think it’s better than nothing.
This is the way I always do it and have always done it; perhaps I am not using it like everyone else,
but I always use the VM in non-persistent mode.
I think the non-persistent terminology is VMware language; in VirtualBox you make the virtual disk inside the VM “immutable”, but the end result is the same.
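For what it’s worth, the VirtualBox side of that can look roughly like this (the disk file name is made up; newer versions call the command modifymedium, older ones modifyhd, and the VM should be powered off):

```python
# Sketch: mark a VirtualBox disk image immutable so every boot starts clean.
import subprocess

subprocess.run(["VBoxManage", "modifymedium", "disk",
                "golden-template.vdi", "--type", "immutable"], check=True)
```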

Then of course I use Sandboxie within that VM as well.
I don’t use any antivirus protection or DEP or HPA protections or stuff like that, perhaps stupidly, but I still haven’t been affected by anything, so the approach has worked and still works.

But yes, I do understand that if hypervisors get more common there will be some patching to do, but as of now it works for me. Then on the other hand, I am not a target, so… yeah.

I am more worried about the BIOS or hardware firmware going bad than about relying on the hypervisors at this point; however, I will check out the hypervisors you pointed towards for sure. Sounds very interesting!

// Cheers Chris

Chris November 24, 2014 11:06 AM

Hi guys, for a long time I have searched and I think I am close to the goal.
It seems that the clipboard functionality in Windows is embedded in user32.dll.

I already have an OS where I have managed to disable it, but I would like to totally
remove the clipboard-related code from user32.dll with a hex editor.

Why would you want to do that? 🙂
Well, I am bored and I don’t like clipboards; I write faster than I cut and paste…
No, but seriously, I don’t like it to exist on a computer that needs to be somewhat safe.
That’s the reason. If I manage to patch user32.dll in a way that it still works, doesn’t crash
and disables the function, I might pass it along to whoever might want to have it.

//Chris

Chris November 24, 2014 11:10 AM

Story about malware from e-cigarettes.

“The made in China e-cigarette had malware hardcoded into the charger, and when plugged into a computer’s USB port the malware phoned home and infected the system.”


This was way cool, not only because it’s very innovative thinking, but I like the idea.
If I were a blackhat, it would be totally cool to use this method.

I have 5 different USB chargers for these things called e-cigarettes; I definitely want to check it out. Really cool!
//Chris

Chris November 24, 2014 11:15 AM

Daniel • November 23, 2014 12:46 AM

@Chris Abbot

What do you need a persistent OS for? The real value of VM for the individual user (not business/government) is that the OS is not persistent and so one can make sure one is starting with a fresh, uninfected and compromised OS

  • Hi, that’s exactly how I use it, and I thought everyone did; perhaps people don’t know about the feature?

Think about it as a VDI platform where you have a “Golden Template”.

That template is where you need to be super careful when you use it; that’s when you patch it and make the changes etc. When that is done, you make a clone of it and immediately make it non-persistent/immutable.

But as soon as the non-persistent OS is rebooted, it’s back to the status quo.

//Chris

Chris November 24, 2014 11:26 AM

Jacob: Reading about the VM hack, I will check it out.

Super happy that VMs are in the discussion now, since most of the work I do is always on hypervisors.

My laptops use Linux as the host and nowadays VirtualBox from Oracle as the HV; the only reason is that in my opinion it’s faster than VMware and takes fewer resources, but I am not going to argue against either of them. It’s a personal habit; I do know both quite well though.

On my servers I use XenServer, not Xen, but it’s more or less the same except it has a Citrix license attached to it, which is free for non-commercial users. I also like the idea of Qubes OS, but both Xen and Qubes are quite a bit harder to learn and handle than the workstation hypervisors, so you need to be patient and you need to understand that you will run into problems 🙂 and you would do well documenting all the problems and workarounds that will come as surprises when things don’t boot anymore. Believe me, it will happen 🙂

However, there is very good documentation on Xen, also from Citrix, since they use Xen under another name, XenServer, which is free for normal users that don’t use it for business.
I prefer XenServer since the tools are better, but I think it’s pretty much the same.
However, I do recall that you can use the Citrix tools with some workarounds on a normal Xen,
but I can’t say for sure.

Now it’s time to hit the bed; it is very late here again and the red wine is making its way to the pillow.

//C.L//

Chris November 24, 2014 12:06 PM

Frederick J. Ide • November 21, 2014 7:47 PM

Bruce Is there a reason that you have not covered Axcrypt on your site? I feel that many of us use it and would like an opinion. F. J. Ide


Totally agree, it’s an interesting tool and has been around for a long time; I recall it is of Swedish origin.
I liked the Swedes before, but nowadays I am not so sure 🙁
//Chris

Chris November 24, 2014 12:11 PM

Benny, you wrote sometime ago about a connection to NSA regarding the miniduke/onionduke
If you look at the investigation of the malware there are in my opinion possible clues
towards German Special Forces, not to say that there is but there are somethings there
that I thought looked suspicious, is it so that German Special Forces has a working Cyberwarfware project ? Just my 5 cents

//Chris

Chris November 24, 2014 12:37 PM

Hi, OK, the last message for today; sorry I am spewing out stuff today.
But tcpcrypt: I have never managed to get it working inside a hypervisor so that connections to Internet hosts get encrypted.

It works within the hosted networks etc., but when you try to connect from a VM over the Internet to another machine using tcpcrypt, it somehow doesn’t work. Any clues as to what could be wrong? I haven’t investigated it thoroughly, and I might have done something weird, but I don’t think so. Any ideas?

//Chris

Anura November 24, 2014 1:28 PM

@Chris

Re: USB Charger malware

I think the best course of action, for any device you are only charging and not actually syncing with the computer, is to get a USB/AC adapter. I find my phone charges a lot faster from the power outlet than from my laptop anyway. While theoretically possible to send signals through the AC port, it’s probably not an issue in this situation.

Nick P November 24, 2014 1:45 PM

Recursive InterNetwork Architecture: A scientific alternative to TCP/IP

http://www.martingeddes.com/think-tank/network-architecture-research-tcp-ip-vs-rina/

That was a great article and find. The regulars here should review it and the presentations. Its simplicity and recursive nature might allow all kinds of programming, acceleration, and assurance techniques to be applied. Combined with the new OpenFlow model, we might have networking under control. I’m still keeping things like Amoeba OS and the Globe toolkit in my back pocket in case old lessons can be applied to the new developments. Or in case the new ones aren’t needed.

Anura November 24, 2014 2:20 PM

“While theoretically possible to send sigals through the AC port”

Did I seriously say “AC port” instead of “AC socket”?

Grauhut November 24, 2014 2:31 PM

@Chris: tcpcrypt, try bridged network mode in your VM setup instead of NAT.

Reg. fast VM image switching in case of an attack: what works once mostly works twice… It could produce a funny shutdown/copy/restart/reinfect ring of BS. Such a strategy too often ends in a self-DoS.

Benni November 24, 2014 3:36 PM

Now Kaspersky has an analysis of Regin:

Kaspersky writes http://goo.gl/VdSjx4 that Regin was the malware with which the NSA hacked the mathematician J. J. Quisquater (we know that it was the NSA, because Regin communicated over encrypted links with Belgacom servers, which we know from Snowden documents were hacked by the NSA http://goo.gl/OimLWL ). Kaspersky notes that Regin has some similarity to Stuxnet and that Regin targets are located in Germany and Belgium. Most interesting, I think, is the news that Regin attacks the GSM mobile phone network, according to Kaspersky…

According to Symantec, Austria is an additional target: http://goo.gl/gBlFGf

The malware attacks research institutions, educational institutions, the energy industry, the GSM mobile phone network, telecommunication providers, banks, governments, airlines, and the hotel business…

ThisIsWhatItComesDownTo November 24, 2014 4:35 PM

Amazon details what permissions are needed on an Android app:

http://www.amazon.com/gp/help/customer/display.html?nodeId=201357670

In particular, from that web page…

“Apps require access to certain systems within your device. When you install an application, you are notified of all of the permissions required to run that application. Please read and consider the permissions carefully. See below for a list of permissions and what it means for the application:

Cost Money

Used for permissions that can be used to make users spend money without their direct involvement.

Development Tools

Group of permissions related to development features.

Hardware Controls

Used for permissions that provide direct access to the hardware on the device.

Your Location

Used for permissions that allow access to the user’s current location.

Messages

Used for permissions that allow an application to send messages on behalf of the user or intercept messages being received by the user.

Network Communications

Used for permissions that provide access to networking services.

Personal Info

Used for permissions that provide access to the user’s private data, such as contacts, calendar events, e-mail messages, etc.

Phone Calls

Used for permissions associated with accessing a device’s telephony state–including, intercepting outgoing calls and reading or modifying the phone state.

Storage

Group of permissions related to SD card access.

System Tools

Group of permissions related to system APIs (Application Programming Interface).”

This is for all Amazon devices. Reads like the NSA ‘we get this’ list.

Nate November 24, 2014 4:35 PM

@Daniel, Clive Robinson, Nick P: Thanks for putting up with my rant. I guess I am looking at the 20,000 foot view (ie the computer science view rather than the engineering approach). My problem with virtualisation is not that it exists – I think it’s a wonderful tool, solves all sorts of problems and is long past due – my problem is rather with the complexity/obscurity and granularity of virtualisation: why we only have it at the ‘entire machine’ level and not in much smaller compartments, when our languages have always had such ‘virtualisation’ at any required level, since before OSes existed, in the form of the venerable ‘function’.

IMO, software engineering and compsci parted ways quite a long time ago (around the 1960s); programming languages were more influenced by compsci, trying to use a small number of deep concepts to achieve an elegant whole, while operating systems were much more ad-hoc, closer to the machines of the day, and generally haven’t applied the same design principles as languages.

In pure programming as we learn it in compsci, we don’t have such things as operating systems at all. We only have functions (sometimes extended with state and a little API to make objects, but you can make objects out of functions). They’re mathematically well defined and are the basis of the entire science.

None of our OSes, however, implement functions! Isn’t that strange?

Our big three desktop OSes are derived from VMS+CP/M (Windows) and Unix (Linux, Mac OSX). An OS is just a big program, but seen as a programming language, these 1970s-era OSes don’t even have the level of clean recursive private-namespace encapsulation that the 1950s Algol and Lisp languages had!

We certainly don’t have the level of ‘source code control’ tools with versioning and revisioning, at the level of OS deployment and system administration, that developers do. That’s also very strange! Sysadmins are forced to do programming on enterprise-scale systems using 1950s-era kind of toolsets – ‘punch in the bootloader on the toggle switches’. We can run scripts, but we can’t revert changes, we can’t even always identify what the changes any program makes to the system are. No application programmer could work like this. How do we guarantee system correctness under these conditions?

It just feels like there’s something very wrong with this picture. Our OS foundations are ancient and flaky compared to our language theory; there’s a limit to how much further we can build without it collapsing. And in fact, it all is collapsing around our ears.

There are exceptions of course; the AS/400 object system is intriguing, but stuck in mainframe-land; Erlang has promise; Smalltalk/Squeak could maybe break out of its educational ghetto; various experimental research OSes might get traction. But I don’t think we should start with Unix and try to bolt things on. We should start with lambda calculus (or something minimal and equivalent) and add just enough to get I/O, then stop. Less is more when it comes to security.

There might be a way forward for new execution frameworks in the ‘cloud’ world – like Amazon’s new ‘Lambda’ service – but cloud seems to raise so many new trust concerns (you can’t even see the hardware, how can you possibly trust it?)

What I would like I think is something like AWS Lambda but completely open-sourced, that you can run on your own machines and that works well at small scales. Start with a ‘personal processing cloud’ and then slowly expand it to take over functions of the desktop. So it doesn’t have to be a full OS at first, but could eventually become one.

Daniel November 24, 2014 6:07 PM

Chris Abbott asks, “but if this becomes a security mechanism in the future, do you think there would be malware that would evolve into doing that?”

Yes, of course. It’s an eternal cat and mouse game. But the smart mouses stay one step ahead of the cat.

Nate asks, “why we only have it at the ‘entire machine’ level and not in much smaller compartments”.

My belief is that we over-value persistence at all levels. Look at it this way: How big is your hard drive in GB? How big is your memory in GB? Doesn’t that tell you something about what people value? I think it does.

This is the reason why, in the broader context, I harp upon the idea that we need laws that limit data retention. We overvalue the security in persistence. Bruce likes to talk about the feudal internet. Well…it’s castles all the way down–even into the code.

Clive Robinson November 24, 2014 6:35 PM

@ Nate,

You are starting to get a grip on the problem 🙂

If you have a hunt back on this site for conversations between Nick P and myself over “Castles -v- Prisons” or as Wael calls it C-v-P or just CvP you will see your ideas have been to some extent discussed here.

My original proposal was to take an idea like the “unix scripting ethos”, where a large task is split down and implemented by existing specific-function tools as the equivalent of a very high level language, but add specific security features. This would be supported by underlying hardware consisting of thousands of “jail CPUs” under the control of a hierarchy of security “hypervisor CPUs”. The individual sub-tasks running in the jails would only be given the minimum of resources to carry out the required sub-task, and the running signature each produced would be monitored by a hypervisor. Each jail would have a very, very minimal OS that in effect was a “stream switch”, using mediated “post box” buffer stream communication heads to get data and send results. That is, not the old *nix “everything is a file” but “everything is a stream”. The “post box buffers” would be controlled by a hypervisor CPU that controlled the MMU between the jail CPU and the internal resource buses. The jail would have no notion of IO systems, main system memory or even time, thus making any covert or side channel difficult or useless to use.

Further expansion of the hypervisor’s function at the “choke point” of the post box buffers allows increased security, and the hypervisor could halt the jail CPU and examine its local code and data memory as well as registers to ensure that the code is “as it should be”. This in turn gives rise to the idea of probabilistic or Monte Carlo security.

I could go on at length 🙂 but as I’ve already done so, it would be kinder to the other blog users not to do it again. Further, Nick P, for quite practical reasons, had different views on how to get increased security with what is “available today” in terms of hardware etc. Read both sides and make up your own mind; at the very least it will give you something to think about.

Clive Robinson November 24, 2014 8:52 PM

@ Chris Abbott,

It might have NSA or any of the Five Eyes, Germany, France, Israel or one or two other countries written “all over it”.

And that’s the problem with intel-gathering malware, especially of the targeted variety. By design such malware will be made to mislead any investigator, which is possibly why Symantec are not passing comment on who they think is originating it.

CNN actually have a related article which looks at the –supposed– Russian attacks on Western financial institutions. However you could make a case that it was another nation of which there are several that is using the opportunity to refocus attention away from them.

The director of the FBI has made statements about Chinese hackers being “drunken burglars”, but has he or others asked the question “Why?”. Some of the answers might be along the lines that the Chinese “don’t care”, but that is unlikely to be the case, or at best only a small part of the reason. For instance we know they got in quite successfully with quite a few very high security US defence contractors, and fairly obviously for some considerable time without being noticed. Certainly enough to do running surveillance on several long-term high-end military projects. If the Chinese are the drunken burglars the FBI director thinks they are, how bad does that make the ITsec at some of the most secure establishments there are?

So the next guess would be it’s a smoke screen of some form for reasons that may not be obvious. Maybe the Chinese have human assets in these military companies and it’s to cover up their existence. Maybe it’s a diversion whilst China land-grabs the China Seas for their resources; it’s been used as the plot for at least one major work of fiction.

Maybe it’s another nation running what appears as a Chinese smoke screen as a false flag operation to cover up their own activities, or to embroil both the US and China in a diplomatic quagmire…

We don’t know is the honest answer, because we don’t have reliable evidence, nor are we likely to get it. But that has never stopped a political agenda in the past, and China and their proxy nations have been on the US war hawks’ “hit list” for as long as anyone can remember, one way or another. So in theory Iran, also being near the top of that same hit list, could be running anti-US cyber-attacks to act as a distraction from the current round of uranium enrichment talks in Switzerland etc. A related argument could thus be made to point the finger at Israel… and so the accusatory finger moves on to wherever anybody can make a rational-sounding argument point, irrespective of facts or a lack thereof.

Nick P November 24, 2014 11:58 PM

@ Nate

We do have virtualization at much smaller levels. We literally have it at every level in different tools. The main use case, though, seems to be software and hardware not designed to be virtualized, understood, escaped, etc. Makes the virtualization less than elegant. The market dictates things most of the time and that often works against technical superiority.

“We should start with lambda calculus (or something minimal and equivalent) and add just enough to get I/O, then stop. Less is more when it comes to security.”

They were called LISP machines. They failed in the market over time. You can still get their source code and more online. Genera especially for various reasons. A few LISP processors were published in academic papers in a lot of detail, some pure and some practical. They’re easy to find with google if you use “LISP” or “Scheme” with the word “processor.”

The real reason we need at least one imperative system is that people understand it. When COBOL arrived, even lay people could be trained to write useful programs. That’s one of the reasons it’s still around. Starting with a mathematical basis hardly anyone understands and which kills backwards compatibility will only guarantee failure. The best choice is hardware that supports both imperative and functional in a secure way. Then, over time, we might switch to functional. That’s the thing I like about the SAFE processor: it can support anything from Java to Oberon to Haskell. So long as it’s typed and has sensible rules.

re “derived from VMS+CP/M (Windows)”

Microsoft management: “Ok. VMS does mainframe style work on reliable systems with mainframe-style reliability in their clustering. Let’s take VMS, remove reliability, remove clustering, make it a lot cheaper, add a GUI, add MSDOS support, and call it Windows ‘New Technology.'”

Me: “Here we go again…”

Microsoft management: “We’re rolling in the money people. Now, we just need our engineers to get this system reliable enough to get our business off our AS/400 and Hotmail off of FreeBSD. It’s making people question whether our product is worth the money. We can’t have businesspeople asking those sorts of questions.”

Clive Robinson November 25, 2014 5:39 AM

@ Figureitout,

This is about a supposedly new form of lumped circuit “circulator” which will enable on frequency full duplex communication,

http://www.technologyreview.com/news/520586/the-clever-circuit-that-doubles-bandwidth/

As you might know it’s been theoretically possible since the Edwardian age, but nobody had cracked the design of a practical lumped circuit implementation that would work with the differing power levels of radio systems. Which is why most circulators use transmission lines or transmission line analogs and magnets, all of which are physically large and heavy and of quite limited bandwidth.

From what you have said your Dad might want to get his teeth into it 🙂

Wael November 25, 2014 5:59 AM

@Clive Robinson, @Figureitout,

This is about a supposedly new form of lumped circuit “circulator” which will enable on frequency full duplex communication…

Nothing is free! Some power consumption will have to be paid as a price. I think, while impressive, it’s considered “cheating” 🙂

From the link:

New ways of encoding data stand the chance of making wireless networks as much as 10 times more efficient in some cases (see “A Bandwidth Breakthrough”)…

That is one reason I worked on different encoding / compression ideas. And some made fun of me at the time!

Benni November 25, 2014 1:53 PM

Now Sueddeutsche has released the Snowden docs that show on which cables Vodafone has given GCHQ access:

http://www.sueddeutsche.de/digital/snowden-dokumente-im-original-unheimlicher-helfer-1.2236946

The documents can be accessed more easily here: https://netzpolitik.org/2014/cable-master-list-wir-spiegeln-die-snowden-dokumente-ueber-angezapfte-glasfasern-auch-von-vodafone/

The number of cables that Vodafone taps is simply terrible.
I do not know what is so interesting about Asians or Germans that one has to spy on every bit of their communications with the foreign world.

But well, Snowden said they share intercepted nude photos, so it could be some sexual interest….

Chris November 25, 2014 2:54 PM

Hi, there is a link from a dude calling himself “Hack me” or something on the Kryptos link. DON’T click on the link, but follow up on the stuff I wrote below.
These are seriously cool attacks, so try to download the attack code before it is too late.
I downloaded them all but one; that one didn’t download, just zero bytes.

I tried them on a couple of virtual machines and they are very effective. I don’t know what they do yet, but they do attack you, seriously!!!

Just FYI

//Chris

Gerard van Vooren November 25, 2014 3:01 PM

@ Nicolai Brown • November 24, 2014 5:57 AM

On Linux, ‘less’ can probably get you owned:

It’s C again. This time an integer overflow (sadly that is possible with almost every language), combined with malloc (which is why OpenBSD created reallocarray) and zero terminated strings/arrays.
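To make that concrete, here is a rough simulation of the general shape of that bug class (this is not the actual less bug, just the classic pattern: a size computation wraps in a 32-bit size_t, malloc returns a far smaller buffer than the code then writes into, and a reallocarray-style check is what prevents it):

```python
# Sketch: how `nmemb * size` can silently wrap before reaching malloc,
# and the reallocarray-style check that refuses the overflowing request.
SIZE_MAX_32 = 2**32 - 1

def naive_alloc_size(nmemb: int, size: int) -> int:
    # What the buggy C effectively does: multiply and let it wrap
    return (nmemb * size) & SIZE_MAX_32

def checked_alloc_size(nmemb: int, size: int) -> int:
    # What reallocarray-style code does: reject if the product would overflow
    if size != 0 and nmemb > SIZE_MAX_32 // size:
        raise OverflowError("allocation size would overflow")
    return nmemb * size

print(naive_alloc_size(0x40000001, 4))   # 4 -- tiny buffer, heap overflow ahead
print(checked_alloc_size(1024, 4))       # 4096 -- fine
```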

The funny part is that everyone knows that C is fundamentally broken, yet C and its descendants are still being used everywhere.

That “the industry” has been unable to deal with or fix the shortcomings of C for decades now keeps amazing me.

Nick P November 25, 2014 4:05 PM

@ Nate

You wanted a machine that was functional, clean, and secure at the core. I forgot to tell you one existed that seems to have done that. Look up the two papers, esp the Ten15 VM one, in the references section. I’m not sure anything today comes close to Ten15, or FLEX for that matter.

So, maybe the next secure hardware project should just rebuild FLEX and modernize it.

@ Gerard

Meanwhile, what’s left of the Ada developers are high fiving each other about how much crap they don’t have to worry about.

Figureitout November 25, 2014 6:21 PM

Clive Robinson
–Hmm, interesting…I initially thought this could potentially be applied to a product we just gave the “all clear” on; but I don’t think a redesign or modification is in order if or until we get significant interference problems (took a “logical” countermeasure to that). There’s a separate problem that is puzzling, a battery keeps dying out in the field but we can’t recreate it in lab of course…that would be too easy! We’ve had this problem before and solved due to these UV-resistant windows, apparently they block RF too (a good thing to keep in mind for shielding lol :p)…Just another day engineering whatever f*cked up problems present themselves. But don’t think that’s issue here. Either bad install, someone’s hacking by jamming ack and spoofing the signal and it keeps transmitting til death, someone’s trolling or there’s really that much traffic on location (no, there isn’t). We need more data but that would involve leaving a laptop there which will of course get stolen. Wifi bands are getting so overloaded though; I’m glad to see some different bands being used.

Like Wael, my dad will probably say “Horsesh*t”; then say something like “kids and their SDR dongles don’t know radio! Get off my lawn!” I kid, but it’ll be something like that…When I showed an up-converter for the RTL-SDR for HF bands (was going to show someone up at work that “interesting bands” can be received w/ this cheap little thing), my dad just straight up said don’t buy it, it’ll be a crappy radio and started ranting about up-converters.

Thanks anyway, it’ll be funny regardless; it’s easy to get him going. :p

Gerard van Vooren
It’s C again
–It’s programmers again who really can’t code in C (they can probably, just a logic flaw found and not enough testing, that amount of testing is not possible many times or even needed) and the motto of C “people know what they’re doing” isn’t true. It’s akin to people blaming a missed shot “it’s not regulation size!” or “there’s not enough air in the ball!” The blame is pointing to…you, me, everyone else.

I always hated training wheels, being able to ride freely was so nice. But I still made sure to wear a helmet w/ my training wheels as I do now, b/c I know I’m going to skin my knee w/ my stupid logic (wrist guards also saved me quite a few times from broken wrists).

Nate November 25, 2014 8:36 PM

@NickP: Wow, the Ten15/FLEX architecture sounds really interesting! So many lost opportunities. Or, I guess, on the bright side, computing history is full of neat ideas ready for a second look.

Gerard van Vooren November 26, 2014 1:23 AM

@ Figureitout

In Ada all three issues that caused the bug just don’t exist. In most other languages 2 of the 3 issues don’t exist. The bug could only be there because C has all these flaws.

Grauhut November 26, 2014 1:56 AM

Linux is as buggy as any other os.

“Mayhem has found over 13,869 unique bugs in more than 37,000 programs in Debian Linux.”

forallsecure.com/mayhem.html

Nick P November 26, 2014 1:56 AM

re Ada vs C

Moreover, the goal is to write software that does what you want it to do without performance, reliability, and security issues. C/C++ are good at the performance part. They’re terrible at the other two by default even with how they handle the most common things. So, the training wheel analogy doesn’t fit because riding a bike safely is almost effortless once you use training wheels a certain period of time. Writing C/C++ applications without security flaws is very hard even if you know what you’re doing and is a lot easier in other languages.

Wael November 26, 2014 2:40 AM

@Nick P,

Moreover, the goal is to write software that does what you want it to do without performance, reliability, and security issues.

Stop right there! Many, if not the majority, of bugs happen because programmers assumed the code they wrote reflects what they wanted to do. The “without performance, reliability, and security issues” part should be in a separate sentence!

Gerard van Vooren November 26, 2014 2:48 AM

Offtopic:

About programming languages. As an engineer I like, most of the time, to work with units as in Meter, Volt, Ampere, Newton etc. That helps catch the “factor of a thousand off” errors (common sense helps too sometimes). The HP48G was very good back then. However, in programming languages, working with units is almost non-existent. Which programming languages do provide units as first class citizens?

If you come up with Excel you immediately loose all credibility 😉

Wael November 26, 2014 2:57 AM

@Gerard van Vooren,

Which programming languages do provide units as first class citizens?

Object oriented programming languages give you the facilities to define what you want. If this doesn’t fit your definition of “first class citizens”, then these languages give you the ability to naturalize immigrant classes 😉 Either way, you get what you want.
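For example, a bare-bones sketch of what that looks like in practice (only a couple of made-up units here, nothing like the HP48G; it just shows the host language can carry the dimensions for you):

```python
# Sketch: a minimal "quantity with units" class; dimension errors raise exceptions.
class Quantity:
    def __init__(self, value, units):          # units e.g. {"m": 1, "s": -1}
        self.value, self.units = value, dict(units)
    def __add__(self, other):
        if self.units != other.units:
            raise TypeError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.units)
    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
        return Quantity(self.value * other.value,
                        {u: p for u, p in units.items() if p})
    def __repr__(self):
        return f"{self.value} {' '.join(f'{u}^{p}' for u, p in self.units.items())}"

second = Quantity(1, {"s": 1})
speed = Quantity(3, {"m": 1, "s": -1})
print(speed * second)    # 3 m^1  -- the seconds cancel out
# speed + second         # would raise TypeError: different units
```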

Wael November 26, 2014 3:05 AM

@Figureitout,

Like Wael, my dad will probably say “Horsesh*t”; then say something like “kids and their SDR dongles don’t know radio! […] I kid, but it’ll be something like that

Good thing you said “something like that”, otherwise your statement wouldn’t be true!

Gerard van Vooren November 26, 2014 4:13 AM

@ Wael

I know about OOP but I actually mean something like the HP48G. Working with formulas and units all over. Something like Mathcad in CLI mode with the ability to generate plots.

… but this is the wrong place to ask. Sorry.

Clive Robinson November 26, 2014 6:01 AM

@ Gerard van Vooren,

The simple answer to your question of,

Which programming languages do provide units as first class citizens?

Is none of them.

The reason is that, fundamentally, all computer mathematics is integer arithmetic, not even signed integers at that. Whereas all physical units are based on forces (complex/multidimensional) and energy/matter constrained by the speed of light, which are analog or continuous in nature, not discrete, which is what the computer requires; so the computer approximates them in some manner, as those who suffered at the hands of Intel with its Pentium bug are all too acutely aware…
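
A trivial illustration of that approximation, using a C++ example of my own rather than the Pentium’s FDIV tables:

#include <cstdio>

int main() {
    double a = 0.1, b = 0.2;
    std::printf("%.20f\n", a);            // 0.10000000000000000555... : 0.1 cannot be stored exactly
    std::printf("%.20f\n", a + b);        // 0.30000000000000004441...
    std::printf("%d\n", (a + b) == 0.3);  // prints 0: all three values are approximations
    return 0;
}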

That said, the discrete digital integers are capable of far greater resolution than can be measured by any existing instrument, which means at some point the discussion belongs to that much beloved area of human endeavour: philosophy.

For instance, on my test bench I have a piece of equipment I built myself, a form of combined fractional synthesizer and DDS, that is more than capable of producing frequency steps beyond what we can measure. However, it is connected to an atomic reference that is orders of magnitude less stable than the frequency steps the device is capable of…

Why did I build it to have this capability? Well, firstly because the components would allow it, and secondly there are times when you need two signals that, although not particularly stable in the general scheme of things, do need to have either a very small fractional phase difference or one that changes very predictably.

Nate November 26, 2014 4:10 PM

@Gerard van Vooren: The only programming language I know personally which has units is a niche domain-specific game development system – Inform 7 (www.inform7.com)

It’s baroque and quirky in a lot of ways, and not intended for general use. It’s for building text adventures. It doesn’t even have floating-point. But its number system does include units! Eg, numbers can be collections of named integers (hours + minutes + seconds, etc).

I7 is fascinating as a counterexample to almost any claim that ‘all languages do/have feature X….’ because it just does some very strange things. It’s not something I enjoy programming very much, but it inspires me deeply to always look at new approaches and reexamine accepted truisms about language design.

Nick P November 26, 2014 6:31 PM

@ Gerard

Ada does it via its existential types for numbers with nice safety benefits:

http://www.embedded.com/print/4218480

Other existential type systems might be able to do the same (eg Haskell). Frink is a language for calculations that supports all kinds of units natively:

http://futureboy.us/frinkdocs/

I believe REBOL language has a number of practical things (eg emails) as first class types. Some 4GL’s have types like currency or DateTime built-in.

Anura November 26, 2014 6:39 PM

http://arstechnica.com/security/2014/11/sony-pictures-hackers-release-list-of-stolen-corporate-files/

Sony Pictures was breached a couple of days ago, as many of you may be aware. Unlike many recent hacks, this appears to be an inside job involving a member of IT.

Today they released a list of filenames, and while this is supposedly an activist group, nothing actually seems directed at anything in particular. It’s not like they said “Here are documents dealing with their investigations of copyright violations” or “Here’s documentation on how they are getting fake movie reviews.” I don’t really get what they are trying to accomplish; maybe it’s a ransom job. Either way, it doesn’t seem to be actual activism.

Benni November 26, 2014 8:24 PM

News from the US drone program:

http://www.focus.de/politik/ausland/gezielte-toetung-einen-terrorfuersten-zu-toeten-kostet-28-unschuldige-menschenleben_id_4304210.html

For every terrorist killed, 28 innocents die on average.
In Pakistan, 24 attempts to kill terrorists left 874 other people dead, among them 142 children. Only six children were killed in attacks that actually killed the target.

Attempts to kill Aiman al-Zawahiri cost 76 children their lives.

And on average they kill every terrorist three times, always being surprised that the target, for some reason, continues to live…

Uber: Another Malware November 26, 2014 10:54 PM

What the hell Uber? Uncool bro.
http://www.gironsec.com/blog/2014/11/what-the-hell-uber-uncool-bro/

A snip from the blog post:

This is one of those interim posts where I’m not posting something cool, but rather something that’s bothering me. You know, like a blog post?

Anyways, I downloaded Uber the other day and its pretty cool and handy. The only qualm I had was with all the permissions it asked for.

[…]
Christ man! Why the hell would it want access to my camera, my phone calls, my wifi neighbors, my accounts, etc? We’ll see in just a second.

[…]
There’s a lot of code to go over. The thing is about 7.5 MB of classes. In fact, the code I snagged from above comes from about 1100 lines of code. See for yourself. I especially liked the ‘hasHeartbleedVulnerability()’ method. Why do they want to know that? Later exploitation?

[…]

Benni November 27, 2014 12:11 AM

Regarding Uber, I think this here explains it all:

http://blogs.wsj.com/digits/2014/08/12/uber-now-has-an-executive-advising-the-pentagon/

Uber Now Has an Executive Advising the Pentagon…..

Seems it is really the NSA being interested in the taxi business these days…

In Germany, we treat Uber like this:

http://de.wikipedia.org/wiki/Uber_%28Unternehmen%29#Kontroverse_zu_UberPop

They are really obnoxious. In Germany, taxi drivers carry insurance that pays the client in case of an accident. UberPop used private drivers without any formal training or insurance.

After being banned by a court, Uber simply ignored it at first. Only after the pressure became too strong did Uber change its prices to 0.35 euros per kilometer (1 kilometer = 0.621371 miles). That way they circumvented the laws that would make them equivalent to a taxi service and require insurance.

Well, a normal company would go bankrupt at these prices. At 0.35 euros per kilometer, 100 kilometers (62 miles) cost 35 euros…

Apparently, the NSA and the US defense department have enough money to afford these prices…

But it is interesting that the defense department is engaged with Uber, a company whose app tests for the Heartbleed bug.

This is interesting because OpenSSL, the library with the Heartbleed bug, is also financed by the US defense department:
http://en.wikipedia.org/wiki/OpenSSL

Steve Marquess, a former military consultant in Maryland started the foundation for donations and consultancy contracts and garnered sponsorship from the United States Department of Homeland Security and the United States Department of Defense.

(Fort Meade, home of the NSA’s headquarters, is in Maryland….)

For some reason, the spooks themselves do not want Uber in their hometown:
http://www.baltimoresun.com/business/bs-md-uber-settlement-20141125-story.html

At least Uber has run into problems in Maryland…

Uber proposed a settlement Tuesday with the Maryland Public Service Commission that would allow it to continue operating in the state legally — a compromise enabling it to back down from earlier statements that it would leave if it were required to operate as a cab company….

Thoth November 27, 2014 6:07 AM

Uber is probably spookware. The number of permissions it asks you to grant is insane. Why not just give it ultimate access?

Gerard van Vooren November 27, 2014 6:40 AM

@ Nick P, Nate and Clive Robinson

Thanks for the suggestions. Frink looks to be what I am looking for. Ada… not for the simple “calculator” stuff 😉 Clive, you lost me somewhere down the road, but never mind.

Figureitout November 28, 2014 8:38 PM

Gerard van Vooren
–It looks like some basic fuzz testing would’ve found severe issues with “less” and everything it does in distros. We need to actually test these programs for, at the very least, malicious input; all of us have other jobs to do though… Do we have an operating system that can easily add on programs coded in Ada? No, we don’t, b/c it’s a lot of work or would be a major hack job to get it working. Here’s a listing of where Ada’s used; it looks like a bunch of military systems and a scattering of other “critical infrastructure” ones, but not across the board. Also the listing of actual desktop programs is weak:

Aerodynamic Analysis of Yacht Sails
Applications for Structural Engineering (tension structures)
CANTA – a tool to learn to sing in tune
Darwin – client-server application for managing full-text documents, with linguistic indexing.
Flaubert – automatically produces natural language text from structured factual data
TeXCAD – program for drawing or retouching {picture}s in LaTeX
TrashFinder – Windows-oriented e-mail filtering application
UnZip-Ada – decompression library for zipped files
Vision2Pixels : A picture critique oriented Web site for photographers
Voltaire – grammatical correction of texts, targeted to the press

http://www.seas.gwu.edu/~mfeldman/ada-project-summary.html

–WTF? Such a random scattering of programs, where’s the disassembler? It’s important to have a popular language b/c otherwise you don’t get a lot of cool programs w/ it; you get a weird scattering of “useful” programs. C has stood the test of time, the syntax is pure sex, it’s so portable, and you can do so much w/ it and learn it fairly quickly (takes a while for the quirks).

Ada is also no longer mandated for DoD work. From this link it looks like Java is taking hold a bit more now (LOL)…

http://programmers.stackexchange.com/questions/99201/is-ada-really-gone

Nick P
–The analogy sucks b/c it’s hard to compare programming to other everyday activities. The point is to stop blaming inanimate objects for humans’ failings, especially people who will always be users and just complain about whoever creates the programs or computers they use. If you have a virus on your PC while trying to program, it could affect things and give you really weird errors; on a bike, you could probably see or find the problem fairly quickly/easily…

Wael
–Just told my dad tonight after a few drinks…”That’s impossible” lol, knew it. We’ll probably argue on it later when I show him the link and paper (ugh, scanned it, didn’t feel like reading it all the way). I still see potential security issues when you think about actual data flow and one-way data diodes; what if TX or RX overwhelms the other? Does this enable easier attacks?

Wael November 29, 2014 12:41 AM

@Figureitout,

That’s impossible” lol, knew it. […] I still see potential security issues when you think about actual data flow and one-way data diodes; what if TX or RX overwhelms the other? Does this enable easier attacks?

I believe they demonstrated it’s possible. I only had a chance to read the first two pages, and so far so good. It makes sense. They described what the challenge is and why previous solutions fail. Their first solution targets WiFi, so the extra power needed for cancellation shouldn’t be an issue. For a cell phone, that remains to be seen. In their defense, however, power optimization was not their objective — rather, it’s bandwidth efficiency.

Security needs more thought. However, the receiver should not “overwhelm” the transmitter. The transmitter should not overwhelm the receiver either because that is the problem they supposedly solved. I do like their approach of cancellation using both analogue and digital solutions. If I get a chance to finish the paper, I may say more.

Figureitout November 29, 2014 11:24 AM

Wael
–I never said it wasn’t; exactly how still blows my mind a bit. I’m just saying what my dad would say, since Clive told me to “let him put his teeth in it”; he mentioned “probably TDMA”, which kind of sounds like it…

Our analog cancellation circuit is in effect implementing the same trick, at every instant we have copies of the signal at different equally spaced delays just like in digital sampling.

The paper says it was intended for low-power, narrow-band, fixed-rate protocols, which is kind of relevant for me… If we could cut the “handshake” time in half, that would be a huge power saving, and maybe worth a redesign or a new product. It still needs work, as it was done under “clean conditions”, so likely in a shielded room, not “out in the field” where anything goes. Then they said they ran an experiment in a “noisy” indoor environment, and quote:

Outdoor LTE scenarios are less likely to have such strong near-field reflectors, hence we believe our design extends relatively easily to outdoor LTE scenarios

–(Emphasis mine) Well they may find out in practice it may not be so…

Interesting work; side channels cropping up from this new-found efficiency are a big concern for me though.

Wael November 30, 2014 1:05 AM

@Figureitout,

–(Emphasis mine) Well they may find out in practice it may not be so…

I would also agree it’s a little challenging. I thought about this problem many years back but never came up with any solutions, partially because I didn’t see the need to do it this way; I took the compression/encoding route instead. I worked with a mathematician friend and we came up with some ideas, but work demands and lack of time prevented us from continuing the work. We have been talking about it on and off for close to ten years now. In addition, we now live 2000 miles apart. The last time I saw him was close to two years ago…

Figureitout November 30, 2014 2:29 AM

Wael
–“A little challenging” is an understatement; this is new science! It’s exciting if they actually discovered something new; just need to give them crap until they really prove themselves. I could email the researchers and get a better grip on what they did, and maybe get them to come on here, but I don’t care enough (can’t chase every lead, got a bazillion things to do already).

I did go back and read the paper fully; some of it went over my head b/c I’m not in that little niche area they’re researching, and I didn’t really like how they wrote it, so I’ll try to spare you the bad writing. Here are 2 methods that try to do the same thing, one of which seems way more practical until… picosecond timing intervals, bleh…

There are two state-of-the-art designs: ones which use an extra transmit chain to generate a cancellation signal in analog [6] and ones which tap the transmitted signal in analog for cancellation [11, 3]; both use a combination of analog and digital cancellation. Note that all these designs use at least two antennas for transmit and receive instead of the normal single antenna, and the antenna geometry ones use more than two

1) (M. Duarte and A. Sabharwal. Full-duplex wireless communications using off-the-shelf radios: Feasibility and first results. In Forty-Fourth Asilomar Conference on Signals, Systems, and Computers, 2010)

This paper pissed me off even more. F*cking define your variables and simplify the base equations! A paragraph of equations does nothing but confuse for no reason!

Designs which use an extra transmitter chain report an overall total of 80 dB of self-interference cancellation (we have been able to reproduce their results experimentally). Of this, around 50 dB is obtained in the analog domain by antenna separation and isolation between the TX and RX antennas of around 40 cm (the designs also assume some form of polarization/metal shielding between the TX and RX antennas to achieve 50 dB isolation). Note that this 50 dB reduction applies to the entire signal, including linear and non-linear components as well as transmitter noise since it is pure analog signal attenuation. Next, these designs also use an extra transmit chain to inject an antidote signal [6, 9] that is supposed to cancel the self-interference in analog. However, the antidote signal only models linear self-interference components and does not model non-linear components. Further, it is incapable of modeling noise because by definition noise is random and cannot be modeled. Overall this extra cancellation stage provides another 30 dB of linear self-interference cancellation in the best case. Thus, these designs provide 80 dB of linear cancellation, 50 dB of non-linear cancellation and 50 dB of analog noise cancellation, falling short of the requirements by 30 dB for the non-linear components. Hence if full duplex is enabled over links whose half duplex SNR is 30 dB or lower, then no signal will be decoded. Further to see any throughput improvements with full duplex, the half duplex link SNR would have to be greater than 50 dB.

–Ok…

2) ( http://dl.acm.org/citation.cfm?id=1859997 )

The second design [11] gets a copy of the transmitted analog signal and uses a component called the balun (a transformer) in the analog domain to then create a perfectly inverted copy of the signal. The inverted signal is then connected to a circuit that adjusts the delay and attenuation of the inverted signal to match the self interference that is being received on the RX antenna from the TX antenna. We show experimentally in Sec. 5, that this achieves only 25 dB of analog cancellation, consistent with the prior work’s results. The cancellation is limited because this technique is very sensitive to and requires precise programmable delays with resolution as precise as 10 picoseconds to exactly match the delay experienced by the self-interference from the TX to the RX antenna. Such programmable delays are extremely hard to build in practice, at best we could find programmable delays with resolution of 100-1000 picoseconds and these were in fact the ones used by the prior design [11]. Hence the cancellation circuit is never able to perfectly recreate the inverted self-interference signal and therefore cancellation is limited to 25 dB in analog. However this design also uses two separate antennas separated by 20cm for TX and RX and achieves another 30 dB in analog cancellation via antenna isolation. Hence a total of 55 dB of self-interference reduction is obtained in analog, this cancellation applies to all the signal components (linear, non-linear and noise). The digital cancellation stage of this design also only models the linear main signal component, it does not model the non-linear harmonics that we discussed above. Thus we found that we obtain another 30 dB of linear cancellation from digital in this design.

–This design seems the most practical right now, but there still seems to be room for optimization.
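
–To keep the quoted numbers straight, my tally from the two excerpts (my arithmetic, not a table from the paper):

Design 1 (extra TX chain): 50 dB analog isolation (applies to everything) + 30 dB “antidote” cancellation (linear only) = 80 dB linear, but only 50 dB for the non-linear components and noise.

Design 2 (balun tap): 25 dB balun cancellation + 30 dB antenna isolation = 55 dB in analog (applies to everything), plus 30 dB of digital cancellation (linear only) = 85 dB linear, 55 dB non-linear/noise.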

Then theirs:

Our design is a single antenna system (i.e. the same antenna is used to simultaneously transmit and receive), wideband (can handle the widest WiFi bandwidth of 80MHz as well as all the LTE bandwidths) and truly full duplex (cancels all self-interference to the receiver noise floor). The design is a hybrid, i.e., it has both analog and digital cancellation stages

And here’s where I’m thinking “TDMA”:

The design of our cancellation circuit is based on a novel insight: we can view cancellation as a sampling and interpolation problem. The actual self-interference signal has a particular delay and amplitude that depends on the delay(d) and attenuation(a) through the circulator
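
–If it helps to see that framing in toy form, here’s a little numeric sketch I put together (my own simplification in C++, nothing from their circuit): the self-interference is an attenuated copy of TX at a fractional-sample delay, and the canceller rebuilds it by weighting the two nearest fixed-delay copies of TX (i.e. interpolating) and subtracting.

#include <algorithm>
#include <cmath>
#include <cstdio>

// A band-limited "transmitted" waveform we can evaluate at any (fractional) time.
static double s(double t) { return std::sin(0.7 * t) + 0.5 * std::sin(0.2 * t + 1.0); }

int main() {
    const double a = 0.6;   // attenuation through the circulator (made-up value)
    const double d = 2.3;   // self-interference delay in samples, deliberately fractional (made-up)

    const int N = 64;
    double tx[N];
    for (int n = 0; n < N; ++n) tx[n] = s(n);   // the samples the canceller actually has

    int    k = static_cast<int>(std::floor(d)); // integer part of the delay
    double f = d - k;                           // fractional part

    double worst = 0.0;
    for (int n = k + 1; n < N; ++n) {
        double interference = a * s(n - d);     // what really leaks into the receiver
        // Canceller: weight the two nearest integer-delay copies of TX and subtract.
        double estimate = a * ((1.0 - f) * tx[n - k] + f * tx[n - k - 1]);
        worst = std::max(worst, std::fabs(interference - estimate));
    }
    std::printf("worst-case residual: %g (vs interference swing of roughly %g)\n", worst, a * 1.5);
    return 0;
}

In the real thing the “taps” are analog delay lines and the weights are tuned attenuators, and the hard part is doing that accurately enough across 80 MHz; the toy just shows why equally spaced delays plus weights can stand in for an arbitrary fractional delay.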

Those are the main bits for me. They probably said “novel” like 20 times. I guess I’ll wait until a bigger engineering company takes a look; if they can implement it cheaply w/ their team of RF and software engineers, then we could look at it, and these guys could maybe make some money.

Wael November 30, 2014 4:34 AM

@Figureitout,

I’ll try to spare you the bad writing.

Thanks for the summary! I wasn’t too keen on reading the rest of the paper. Re. the 20 times they used the word “novel”: they have to use such words, along with other superlatives, in order to get published and, just as importantly, to impress investors.
