Friday Squid Blogging: Hoax Squid-Like Creature

The weird squid-like creature floating around Bristol Harbour is a hoax.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on December 6, 2013 at 4:33 PM • 118 Comments

Comments

Wm • December 6, 2013 6:07 PM

The only thing this squid report does for me is to remind me of Bruce Schneier whenever I eat calamari.

Jenny Juno • December 6, 2013 6:20 PM

Does anyone know why connecting with https to this blog results in low-grade encryption?

I'm using the latest firefox (25.0.1) and without any plugins I get:

SSL_RSA_WITH_RC4_128_SHA, 128 bit keys
(as seen on the security panel of the page-info window - press control-I)

But, if I install the Calomel SSL plugin and tell it to force 256-bit with Perfect Forward Secrecy (PFS) I get:

TLS_DHE_RSA_WITH_AES_256_CBC_SHA, 256 bit keys

My uneducated understanding is that cipher negotiation is decided by the server, so why is the server defaulting to low-grade encryption instead of the highest grade when the client supports both?

FYI, Calomel: https://calomel.org/firefox_ssl_validation.html
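For what it's worth, cipher choice is a two-step negotiation: the client advertises a list of suites in its ClientHello, and the server picks one from that list (usually by its own preference order). A quick sketch using Python's standard `ssl` module shows both halves; the hostname is just a placeholder, not a claim about any particular server:

```python
import socket
import ssl

ctx = ssl.create_default_context()

# Step 1: the suites this client would offer in its ClientHello.
offered = [c["name"] for c in ctx.get_ciphers()]
print("client offers", len(offered), "suites, e.g.", offered[0])

def server_choice(host, port=443):
    """Handshake and return the suite the *server* selected."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, protocol, bits = tls.cipher()
            return name

# e.g. server_choice("www.example.com") -- whatever comes back is the
# server's pick from the offered list, which is why two clients offering
# different lists can end up with different "grades" of encryption.
```

So a plugin that strips weaker suites from the offered list (as Calomel does) changes the server's options, not the server's preferences.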

John • December 6, 2013 7:00 PM

Can anybody recommend a non-intrusive way to test my websites for CVE-2013-2251? I need to find a way to detect this before the bad guys do, and so far all the "vulnerability scanners" I tried are coming up blank. Alternatively, can anybody share a good community to ask these kinds of questions at? Thanks.

Figureitout • December 6, 2013 9:18 PM

I couldn't believe my eyes when I saw it. Made my night.

Developed on Hackaday.

They want to build an open source physical password storing device. I have some problems w/ the complexity of what they want now (touch screen, USB, smartcard) but still it's exciting.

martinr • December 6, 2013 9:47 PM

The outright panic currently unfolding over the allegation that "NSA can read RC4-encrypted TLS sessions in real time" is extremely disturbing. I have very strong doubts that this is true in the way most people seem to understand it.

Reality check: what are the prerequisites, and what would the results be?

(a) NSA managed to break the key exchange algorithm (RSA_ or DHE_RSA?). That would be the most convenient for them: it would give them all traffic protection keys for both directions at the same time, and would work against *EVERY* symmetric cipher using that key exchange (AES, Camellia, Seed, 3DES, Aria, ...), not just RC4. While I would not discount the possibility that they could break a small number of TLS sessions that use 1024-bit RSA certs or 1024-bit DSA, I doubt they could do mass surveillance the way some folks seem to believe.

(b) NSA has managed to recreate the internal state of the RC4 algorithm from a certain amount of keystream bytes (probably not much less than 16 bytes, maybe more, potentially consecutive). Each communication direction has to be attacked separately, and they will need known plaintext in order to recover the keystream to work on. The server's response has fairly low entropy on the first line, querying the server for typical responses is usually possible, and distinguishing the type of requests/responses may be loosely based on request and response length. While this might be feasible for specific targets, I doubt they could use it for mass surveillance.

(c) NSA has managed to invert RC4-128 and recover the key from a certain amount of RC4 keystream. Largely the same prerequisites as in (b), and would also reveal the normally unpredictable contents of the Finished handshake message. Again, while this might be feasible for specific targets, I doubt they could use it for mass surveillance.
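The known-plaintext prerequisite in (b) is easy to see for any stream cipher: ciphertext XOR guessed plaintext = keystream. A toy sketch below, with a textbook RC4 implementation and a hypothetical session key, purely to illustrate the point (never use hand-rolled RC4 for anything real):

```python
def rc4_keystream(key, n):
    """Textbook RC4: key-scheduling (KSA) then n bytes of PRGA output."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"session-key"            # hypothetical per-session key
plaintext = b"HTTP/1.1 200 OK"  # low-entropy, guessable first line
ciphertext = bytes(p ^ k for p, k in
                   zip(plaintext, rc4_keystream(key, len(plaintext))))

# An eavesdropper who guesses the plaintext recovers the keystream by XOR,
# without knowing the key -- the raw material for the attacks in (b)/(c):
recovered = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
assert recovered == rc4_keystream(key, len(plaintext))
```

Recovering keystream bytes is the cheap part; turning them into RC4 internal state or the key, as in (b) and (c), is the hard part.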

martinr • December 6, 2013 9:58 PM

Following up to myself on (a): brute-forcing a server's RSA private key (or stealing it by hacking the server) would allow them to perform mass surveillance for plain RSA cipher suites on all sessions to that server using that RSA cert/key, once they got hold of the private RSA key. So yes, they may consider this a worthwhile target for specific servers/services. The use of PFS cipher suites would impair such surveillance capabilities after getting hold of the server's private key, but could not prevent active MitM with perfect server impersonation on individual targets.

Breaking DHE-1024 key exchange may also be feasible; a number of implementations are limited to 1024-bit DHE, and some implementations use their DHE keys for quite a long time (making it only appear to be PFS).

Nick P • December 6, 2013 10:38 PM

@ Figureitout

It's a decent idea. Last time I discussed the concept here, I mentioned that the trusted path/device could be something like one of the old electronic organizers. They were easily pocketed, cheap, easy to type on, and had decent sized screens. Seems like an ideal starting point for an authentication device so I included it in my "transaction appliance" for banking.

Such a simple hardware base (maybe with updated CPU) would be much safer than one with all the buzzword technologies.

Douglas Knight • December 6, 2013 11:01 PM

Jenny Juno, the choice of encryption protocol is difficult. Labeling some "low grade" is simplistic. In isolation, RC4 is inferior to other choices, but some of the implementations of the others are susceptible to the BEAST attack. You surely have a recent browser invulnerable to this, but for older browsers, RC4 is a better choice; and the choice is largely made by the server. As attacks against RC4 improve and older clients go away, it will eventually be a bad choice, but today the answer is not clear. Well, actually, for Bruce's clientele, the answer is probably clearly not RC4.

dogugotw • December 7, 2013 5:43 AM

@Figureitout - I have two problems with password carrying devices.
1. If I forget the device, I'm screwed.
2. I share site logins with my wife; if one of us creates a new site login or changes an existing one, until we can sync up the changes, the other person is screwed.

I opted to go with LastPass.com. To date, I haven't heard of any fundamental problems with the site and my login life is greatly simplified. It works on my phone, on any PC I own with the plug-in installed, and on PCs I don't own. I keep an encrypted copy of the password set at home just in case the site goes dark.

I realize that I've put my trust in people I don't know but I do the same thing every day with my bank or when I give a low wage tipped employee my credit card. For me, the risk/benefit works and when/if it changes, I'll switch to something new and different.

Douglas Knight • December 7, 2013 8:00 AM

Jenny Juno, no, that's client-side. IE11, like your plugin, lies and says that it can't do RC4. There is a standard way to do a little bit of client sniffing, which is to offer GCM at TLS 1.2. That is definitely superior to what Bruce is doing, but it would leave a lot of people at RC4.

Skeptical • December 7, 2013 8:18 AM

Thought this might appeal to some (including Bruce). From an FTC press release issued this week:

This spring, the Federal Trade Commission will host a series of seminars to examine the privacy implications of three new areas of technology that have garnered considerable attention for both their potential benefits and the possible privacy concerns they raise for consumers.

As the tools available to track, market to and analyze consumers – often without their knowledge – grow, businesses are able to meet consumers’ demands more efficiently and effectively. But these tools may also carry significant risks to consumers’ privacy. The seminars, taking place over three months, will shine a light on new trends in Big Data and their impact on consumer privacy.

The series will bring together academics, business and industry representatives, and consumer advocates for two-hour discussion sessions, which will take place in Washington, D.C. and will be open to the public. The FTC invites comment from the public on the proposed topics, and will issue staff reports following the sessions.

Link to FTC site with more information

Aspie • December 7, 2013 10:52 AM

The knob jockeys at RBS/Ratsnest are trying to qualify a dismal IT policy with the kind of task-name a fourteen-year-old would give.

Be aware there'll be many more of these "alerts" and many more times the paupers who bank with them will be expected to accept the "inevitable."

kashmarek • December 7, 2013 12:15 PM

@Skeptical:

Consider this:

Facebook patents inferring income of users

http://yro.slashdot.org/story/13/12/07/1548227/facebook-patents-inferring-income-of-users

Note the "what could go wrong" link at the end.

This "patent" should have been rejected due to the existence of significant prior art, such as methodologies used by credit bureaus, insurance companies, the IRS, financial instutions & such. Heck, the Welcome Wagon did this on the social networks of its time, newcomers in a neighborhood, where they lived, where they schooled, where they churched, organizations they were members of, where they shopped, the price paid for a house (or dwelling rental/lease), car driven, etc. Yes, indeed...profiling at its finest.

kashmarek • December 7, 2013 5:43 PM

@Skeptical:

By the way, IBM (and perhaps others) have programs out there that "infer" crimes about to be committed (juveniles in Florida if I remember correctly). I think the purpose is to arrest people for crimes someone believes they might commit. That thought process happens to be a disease that permeates some areas of government and industry.

In industry, it is the data collection to "enhance your experience" by shoving targeted ads down your throat with high hopes that the targets will be intimidated into buying something (they just want your money with little interest in enhancing your experience).

In government, several not-so-secret agencies are operating under the belief that they can predict when terrorist acts will be performed by creating a dossier on you and inferring whether or not you are a terrorist or have terrorist tendencies (or they just don't like you or want to eliminate your potential to sway votes).

Skeptical • December 7, 2013 6:07 PM

kashmarek,

Good point on the prior art - patents like that have always made me a little uneasy.

There are a few good articles floating around on the Obama 2012 campaign's use of "big data" to increase the effectiveness of everything from their online messaging and ads to the door-to-door canvassers (in the latter case, predicting which doors to knock on, and at what times). It strikes me that this type of data, possibly more than anything else, is something that could really be subject to abuse by political groups. Imagine a McCarthy with access to that kind of information. "Have you now, or have you ever been, a member of... well, never mind, we already know." Or just imagine a group intent on targeting people for political exposure generally.

Article in The New York Times on FEMA's new guidelines for medical first responders to mass-casualty events. Protocol now is to enter so-called "warm zones" immediately with law enforcement. Reminds me of the discussion on here regarding procedures followed by those responding to the bombing of the Boston Marathon.

Clive Robinson • December 7, 2013 6:18 PM

@ Kashmarek,

    @Skeptical: By the way, IBM (and perhaps others) have programs out there that "infer" crimes about to be committed (juveniles in Florida if I remember correctly)

There are a couple of different systems one of which has some basis in science the other does not.

One system uses previous crime data to predict the likelihood of a crime occurring in a particular small area at an approximate time. You send a squad car to that area some time beforehand and instruct the uniformed officers to make lots of noise and light to make sure they are extremely conspicuous. The idea is that it acts as a disincentive to certain types of fairly predictable criminals. Apparently there has indeed been a decrease in crime; however, as with all "defence spending", it's extremely difficult to say whether a "non-event" is real and, if so, what actually made it a "non-event".

The second system makes predictions about individuals committing a crime so that you can "pre-crime" arrest them. This is non-scientific in nature, because it assumes that the individual's actions are "pre-ordained" and thus that they have no "free will". Now, if this lack of free will is true, then even if they commit a crime the person cannot be guilty, as they had no choice; it's like saying that the bullet that kills you is guilty of murder while ignoring the "controlling mind" that caused the finger on the trigger to squeeze. If, however, you say there is the free will of a controlling mind, then the crime is not pre-ordained, and arresting a person prior to them committing a crime is a crime itself...

But such niceties have never stopped a certain type of police officer detaining someone for "stop-n-search" or a lot worse, simply because the officer thinks a person looks guilty/suspicious as the only "probable cause". In the UK we have in the past had police officers "fit up" individuals for crimes they did not commit, simply because the officer believed it was "their turn", or that it was justifiable because "they must have committed some other criminal act they have not been caught for".

This second type of system tries to give such arbitrary actions the appearance of science, and thus of justice, whereas in reality it does not even rate as faux science and is the tyranny of "justice being seen to be done, even though it's injustice".

Figureitout • December 8, 2013 12:11 AM

Such a simple hardware base (maybe with updated CPU) would be much safer...
Nick P
--Yeah, Mathieu Stephan seems pretty dead set on his ideas; even connecting the device to the internet...Maybe wait 'til he finds out a confirmed attack on his setup and can't figure it out. Also those efforts take heat off other projects as agents target it.

dogugotw
--I mean, whatever works for you personally, that's great. If you're like me, you check your pockets continuously as well as physically feel that they're there and when there's someone behind you pay extra attention to "slick touching". If left unattended then you're trying to bait attackers. I can't switch off this thinking, I don't even like having cars coming at me from behind w/o at least a large curb and I'm ready to attack physical attackers on the street at all times.

You don't have to put all your passwords on it either; split them up according to your own scheme, not someone else's, so an attack has to adapt to your patterns.

Credit cards make me extremely nervous, not only allowing for someone to slip in extra charges and leave town, but get a name on you, then more...

The main thing I like about this, though, is the Open Hardware movement: more fab labs opening up (google "secure fab labs" and the 7th hit is a covergirl site...), allowing someone like me to come in, physically secure a site, and try to make some secure chips to build a computer I can actually trust.

speak into the microphone squidbrain • December 8, 2013 9:43 AM

US NAVY: Hackers 'Jumping The Air Gap' Would 'Disrupt The World Balance Of Power'

"The next generation hackers may be taking to sound waves, and the Navy is understandably spooked.

Speaking at last week's Defense One conference, retired Capt. Mark Hagerott cited recent reports about sonic computer viruses as one way that hackers could "jump the air gap" and target systems that are not connected to the Internet.

"If you take a cybernetic view of what's happening [in the Navy], right now our approach is unplug it or don't use a thumb drive," Hagerott said. But if hackers "are able to jump the air gap, we are talking about fleets coming to a stop."

More:

http://www.sfgate.com/technology/businessinsider/article/US-NAVY-Hackers-Jumping-The-Air-Gap-Would-4994130.php

Nick P • December 8, 2013 10:44 AM

@ figureitout

"Also those efforts take heat off other projects as agents target it."

I gotta give you props for thinking of that angle. I totally forgot about that as a strategy in security-critical development. Yeah, maybe the riff raff should keep using riff raff tech. Such distractions could buy real engineers time to get their products ready for the battle testing that follows deployment.

@ mesrik

Thanks for the presentation.

@ audio air gap jumping malware link

We get it. That research has been posted here about half a dozen times. Unless I missed another half dozen.

Winter • December 8, 2013 11:51 AM

The Fox News NRO report indeed sounds like it came straight from The Onion.

Maybe that is the new strategy: Make any news so over the top that no one believes it can be real?

Adjuvant • December 8, 2013 12:46 PM

Long time lurker, first time commenter.
Nick P, Clive et al.: this hardware password safe project may also strike your fancy. Taking shape on the qi-hardware mailing list, it's known as Anelok. I'm sure your expertise would be valuable. The initial announcement was in September, and the name Anelok was adopted soon thereafter.

Toto • December 8, 2013 1:00 PM

@Winter

Maybe that is the new strategy: Make any news so over the top that no one believes it can be real?

In terms of scope and drama, the NRO logo is comparable to this one for the Total Information Awareness program:

https://upload.wikimedia.org/wikipedia/commons/thumb/d/d1/IAO-logo.png/270px-IAO-logo.png

Both position the government to be all encompassing and without limits.

I think the main goal is intimidation.

It helps to remember that the Wizard of Oz was just a small man with a big machine.

Nick P • December 8, 2013 3:07 PM

@ Adjuvant

Man I haven't used a mailing list in a while. I'm not sure how to send a message to that specific group or if it would go to all groups in "discussion." Anyway, send them a link to this comment if you like. (Link next to my name is for the specific comment.) The following is directed at Almesberger's group although there are readers here that might find it useful or debatable.

--BEGIN REPLY--

As I read the Sep 6 proposal, I must ask one thing:

What are you protecting against?

Every security feature or countermeasure is supposed to aid in dealing with one or more threats. The main threat to authentication on a PC is that it's controlled by malware. This is how most passwords are taken. Subthreats include keyloggers, screen capture, redirection of web traffic, etc. You could say one is the strategy and the others are tactics. A trustworthy authentication scheme must succeed even if the user's PC or applications are malicious. There are strong and weak techniques for this.

The hardware safe proposal requires the user (or device) to enter authentication data into a potentially compromised machine. If the user does it, a keylogger still catches it. If the device does it, the opponents adjust their tactics to its input method. The untrustworthy endpoint possessing the sensitive data is still the problem. So, my question could be rephrased as "If it doesn't protect passwords from malware, what security problem(s) are you solving with your password safe device?"

My last foray into this came up with some minimal requirements and a specific proposal:
https://www.schneier.com/blog/archives/2011/06/court_ruling_on.html#c552667

(Note: the comment references a "prior" or "former" solution that someone was griping about. It was just some crap I threw together for fun and should be disregarded. The actual proposal is the one in the linked comment itself.)

Quite a discussion followed with many proposals and comments that your project might find useful. My specialty is high assurance security engineering, particularly prevention. So, it followed that my proposal was a security appliance with minimal TCB, careful software/hardware interfaces, trusted path to prevent spoofing, and all critical operations onboard. I left out other beneficial details as they're trade secrets of mine. "tommy" pushed LiveCD's while coming up with solutions to certain deficiencies. Mark Currie invented a clever USB device that offloaded SSL authentication so that endpoint didn't control SSL session or see passwords. Clive and RobertT also contributed to the evolving designs.

My so-called transaction appliance (needs a new name) had more benefits than its security claims. It was general purpose enough to support many applications: signing, passwords, verifications, machine root of trust, etc. An investment into the base platform could be re-used in the future, saving lots of money. The other is incremental assurance. I learned this concept from 1990's MLS research where they determined a truly secure system was difficult to build up front. So, they decided to build something pretty good out of carefully interfaced parts, then incrementally increase the assurance of the parts over time. This lets them make money off of it which can be put back into assurance building. This strategy can benefit non-profit activities as well.

Regardless of methods, the important thing is that your project learns the lessons of the past and achieves the security goal. The security goal is to authenticate on one or more machines controlled by the enemy. The discussion I linked to had several proposals on that. The portable password safe device doesn't accomplish this. If anything, it's most like a password storage and backup device with certain protections in the event of theft. Existing software password safes accomplish that, though, when combined with good host protection measures.

So, is there a goal I'm missing, or is this project just for the heck of it?

--END REPLY--

Nick P • December 8, 2013 4:10 PM

Semiconductor process technology points in history and what was made with them.

Hey, RobertT, I think your opinions on this might be valuable. We've discussed fabs before. The good ones are ridiculously expensive. I figure, though, that older style fabs should be cheaper to build and I previously cited obsolete chips for subversion resistance. So, how to fab those or new safer designs without dropping several billion dollars down?

My guess was fabs using obsolete process node technology. I pulled the list below off Wikipedia, looking at process node levels and what was made with them, to assess usefulness. Even 1-1.5 micron is useful for a number of security- or safety-critical applications, albeit very resource-tight. 250nm seems very useful for somewhat advanced stuff, especially if using dedicated chips for certain things. This is despite one being old and one being old as hell.

So, the question is where can I find out how much each of these type of fabs might cost today and if you recommended just one which would you recommend? Maybe with what strategy to employ? (Or would it be not on my list entirely?)

The List

3 micron (1975): Zilog Z80-like 8-bit chips.

1.5 micron (1982): included Intel 286 (16-bit, memory managed).

1 micron (1985): included Intel 386 32-bit with virtual memory.

800 nm (1989): included Intel 486, microSPARC, and a 60+MHz Pentium.

600 nm (1994): included first PPC chip and Intel Pentium up to 100Mhz.

350 nm (1995): Pentium 2's, an 8 core microcontroller, and Nintendo 64's chip.

Note: We've hit a point where the process node makes chips good for gaming, desktop OSes, embedded, etc. Might be a good lowest-common-denominator process node point.

250 nm (1997): DEC Alpha, P2 MMX, P3, Dreamcast SuperH CPU + GPU, Playstation 2's first "Emotion Engine" MIPS chip.

Note: Ok, in one process node step we have *much* more interesting chips getting developed. PS2 and PIII were used in multimedia. So this step is still quite usable.

Skipping 180nm as it's just a speed boost according to their list.

130 nm (2002): PowerPC 7447, GameCube's IBM Gekko, Pentium 4 Northwood, Intel Xeon Gallatin, VIA C3 x86, AMD Athlon XP, AMD Opteron Sledgehammer, Russian Elbrus SPARC, Vortex 86SX.

I'm stopping there because my current guess is we're getting close to billion-dollar fabs if we go further down. Might have already hit that point. So, 130nm is still highly usable for a variety of performing chips and designs. The micron-level fabs should be able to do embedded-style chips, including those in motherboards. I noticed yesterday that the PS2 did its I/O on a dedicated processor, which was actually a PlayStation 1 processor with the clock rate tuned way down. That's the kind of smart reuse that homegrown stuff will have to do to keep costs down.

Clive, RobertT, Wael, what do you think of all this?

Interesting Sidenote

I accidentally discovered the Nokia 9000 communicator while I was at it. I hadn't started using mobile phones at the time so this is my first time seeing it. Used an AMD 486 chip at 33MHz, 4MB for OS, 4MB for apps/data, had a huge screen, front numerical keypad, flip open keyboard, & supported many useful apps. Despite its bulk and ugliness, I think I just found a new addition to my list of "could make a decent start for subversion free cell phone." I'd swap out certain parts. Maybe even spin it into a trusted coprocessor for a variety of applications, optionally making calls.

I wonder how hard it would be to make a knockoff of that which is cheap to produce and might use those safer chips I've been mentioning. Hell, the simple JOP java processor is three times faster!

Nick P • December 8, 2013 4:28 PM

EDIT to add:

One of my old fab recommendations was just to try to buy an existing one. I'm still looking through resources trying to guess what a modern price would be. Well, I noticed Tower Semiconductor got a 95nm plant from Micron Tech Inc. for $140 million. Everything I listed is larger than 95nm so it seems they should be even cheaper than that. Maybe?

Edit:

One of those on my list, the Renesas plant that made Nintendo chips, closed down because they couldn't find a buyer. They were also planning to sell up to 10 of 18 plants. Might be an ideal time for an enterprising group with a totally different focus (subversion resistance) to take over one or more of these.

Figureitout • December 8, 2013 5:23 PM

is the pope a lizard? the queen?
--As far as mesh networks go, internet radio could be another mesh network. Just listened to a Dallas, TX repeater; kind of funny to think of all the "butter churning" the data goes through.

Adjuvant
--I like it, pretty similar but may have some unique quirks, keep them coming.

Nick P
--You didn't ask, but I'm leaning towards the Z80. Problem is, a state attacker may want to compromise it b/c there are so many. Billions of dollars is out of the question, barring some miracle; it needs to be cheaper ($140 mil is much more doable).

Hell, the simple JOP java processor is three times faster!
--You can overclock the Z80 on a TI-83+ to ~15MHz; maybe need the extra juice for quicker encryption.

Moderator • December 8, 2013 6:15 PM

I've removed the very long copy-pasted comment.

There is someone -- or possibly two someones, but it's clearly a much smaller number of people than names -- who thinks that "talk about the security stories in the news" means "repost the massive collection of quotations you've been keeping on Pastebin." That's not what these threads are for. It's "talk," as in "talk with other people, like a human being," not "drop a bunch of leaflets on us from the air."

Wesley Parish • December 8, 2013 7:37 PM

@ismar

"One of the more interesting stories is the one indicating ASIO being used for industrial espionage "

That's hardly news. Industrial espionage, though, can have quite a chilling effect on a start-up, and throttle it in the very early stages; how much more so for a start-up nation.

Clive Robinson • December 8, 2013 8:23 PM

@ Adjuvant,

    : this hardware password safe project may also strike your fancy. Taking shape on the qi-hardware mailing list, it's known as Anelok

I've not looked at Anelok; my main interest over on qi-hardware has been the Ben NanoNote running 4th, to prototype a financial transaction protocol, which, all things considered, is a harder problem for various reasons @Nick P, myself and others have discussed in the past.

From a quick look over, it appears, as Nick P has indicated above, to be a "good ideas melting pot" project, not one with specific aims and objectives, a clearly defined and reviewed threat model, and a subsequent specification cast in that light and forged from previous experience. That's not to say it's not interesting, but the direction is a little fuzzy.

Another issue is "connections": there is much talk of USB / RF / NFC, and as far as I'm concerned that is a non-starter from a security aspect, as the device ceases to be an out-of-band device and becomes an in-band device with no realistic security oversight by the device owner. The first rule of security is that if it's going to be "in the loop", then it's through the human's eyes and fingers only, as any other "connection" is open to malware (as we are currently finding out, the USB stack is far from secure against state-level opponents, and the same is true for all radio interface stacks).

There are also further issues with electrical connections. The first is that connectors have an inherently short lifetime; many manufacturers only guarantee 50 operations, which generally means they will start to be flaky at around 5-10 times that, as most mobile phone users will verify (those very expensive manufacturer-branded cables get some of their high price from the connector quality, not just markup). Connectors are also fragile in other ways, and this device is going to be in pockets with fluff, grit and other contact-ruining detritus; then there's the likes of graphite dust from pencils, metal flakes from the plating on keys, and plastics, as well as all sorts of other conducting junk just waiting to get into the connector to short contacts and, at the very least, shorten the battery life directly (through contact) or indirectly due to "protection mechanisms". Then there are "in use" issues like "soft drinks" and the faff of locating cables etc. (why do you think "drop-in chargers" are so popular). All of which causes significant cost in terms of PCB, connectors, case and engineering/manufacturing.

Likewise, battery type and replacement are significant issues, and need to be considered carefully (think how many "TV remote control" style units have busted battery cover latches).

Having spent some time designing FMCE, I know that most designs fail the "designed to be manufactured" test. The cost of automated PCB population changes considerably with not just the number of component types but re-orientation of the tool heads and a whole load of other "unthought about" issues, which also end up affecting product reliability, and thus the returns rate, and can totally wipe out any profit, or the organisation...

Clive Robinson • December 8, 2013 9:02 PM

@ Figureitout,

With regards to the Z80 CPU: whilst it was very popular (due to its being an augmented 8080 that could run CP/M out of the box), it had issues...

Not really well known is that internally the ALU was 4-bit, and the odd addressing modes and non-orthogonal registers were down to this in many respects.

If you want an 8-bit CPU with bang for your clock rate, have a look at the Z8 or 6809, especially when you consider that page-zero RAM 8-bit pointers can knock off a whole hunk of program size. Whatever you do, avoid the 8051 or those earlier 8048 keyboard controller micros; the programming pain they will cause you is the modern-day equivalent of Medieval Religious Fervor (hair shirts loaded with lice, ticks & fleas, with boils, running sores, rotting teeth, stomach ulcers, malnutrition/scurvy and the "King's Pox" or scrofula, and piles and other issues, all before you begin the self-whipping, cold water plunges etc. to bring on fevers and delirium to get you that little bit closer to your ideal ;-)

You might also want to have a look at Forth; when it comes to resources it tends to have quite a few advantages (depending on threading type). I know @Nick P does not like it because it's not a "strongly typed" language, but I disagree: it's the programs that should be strongly typed, not the language. The price you pay in strongly typed languages is way too high, and all you really get is strong typing on simple data types. At the end of the day it's the programmer's responsibility to code all data types in a strongly typed way and not rely on the faux crutch of language strong typing and casting.

One modern version of Forth is 4th which can be found at,

http://thebeez.home.xs4all.nl/4tH/

And if you fancy cutting your own Forth you could start by having a look at,

http://www.bradrodriguez.com/papers/

I've cut my own multi-tasking, multi-user Forth for a Microchip PIC24 with four serial interfaces as a prototype platform for a number of hardware projects. And yes, you can multi-task/user on a Z80 with Forth if you really want to.

Nick PDecember 8, 2013 11:12 PM

@ Clive Robinson

"all you realy get is strong typing on simple data types. "

Depending on the language, you might get simple data types, complex data types, dynamic types, stack protection, heap protection, typesafe function pointers, typesafe concurrency, typesafe casts, typesafe linking, typesafe URL construction recently, and so on. The cost along with the benefits of type safety varies by language and implementation. For instance, GNAT Ada runs on 8-bit AVR microcontrollers and Java is on smartcards. The trick is they use lighter runtime configs for that.

"You might also want to have a look at Forth, when it comes to resources it tends to have quite a few advantages (depending on threading type)."

I'll admit that, despite my gripe, I do keep coming back to it when considering these things. Forth can be implemented easily, it's very cross platform, it's got a code/data flexibility a la LISP, there's tools/code out there, and there's dedicated chips. It just takes a certain mindset to do it right and I didn't think people who can code safely in Java/C#/Python can necessarily code safely in Forth. If you recall, the talent requirement was my biggest gripe.

One of the possibilities I mentioned was using Forth as the target language of a compiler or higher level language. Guess what? Seems someone did that in the govt's OASIS program (whose projects will be in my upcoming paper release too). The problem was making safe/trustworthy boot firmware. Their example was Open Firmware which has a Forth interpreter built into it. So, they made a compiler from a Java subset to that Forth interpreter with safety proofs for the compiler. Safety benefits of Java + efficiency/portability of Forth. Bam!

"(depending on threading type)"

What did you mean by that?

BryanDecember 9, 2013 1:29 AM

Looking at those Navy hysterics, I wonder if we can get them/DARPA to pay for a secure CPU design? What's a billion or two to them? I'm pretty sure they wouldn't want any backdoors in their systems. LOL. Do you think we could get them to donate a billion or two to an open hardware Kickstarter project?

I just wanna totally upset the system hardware model as well as the OS. :)

65535December 9, 2013 4:33 AM

I don't know if this has been posted:

"Dear Mr. President and Members of Congress,

We understand that governments have a duty to protect their citizens. But this summer’s revelations highlighted the urgent need to reform government surveillance practices worldwide. The balance in many countries has tipped too far in favor of the state and away from the rights of the individual — rights that are enshrined in our Constitution. This undermines the freedoms we all cherish. It’s time for a change..."

Signed by: Google, Microsoft, Yahoo and others.

http://reformgovernmentsurveillance.com/

I hope it actually comes to pass. I have my doubts.

PetrobrasDecember 9, 2013 4:44 AM

About auditing a physical processor: there is an interesting video https://www.youtube.com/watch?v=K5miMbqYB4E with table of contents made available at http://reverseengineering.stackexchange.com/a/3111/3259 ...

Clive Robinson: "@Nick P does not like it because it's not a "strongly typed" language, but I disagree It's the programs that should be strongly typed not the language."

I totally agree with @Clive.

All we need is speed, and an OS with very few lines of code.

I have nonetheless thought of another disadvantage of Forth: there is no separation between data and code, which means easier code injection. But OASIS is the answer to that; thank you very much @Nick P.

Clive Robinson: "Might be an ideal time for an enterprising group with a totally different focus (subversion resistance) to take over one or more of [these microprocessor plants]"

What would be the best way to crowdfund the needed millions? What country would you like this plant to be moved to?

kashmarekDecember 9, 2013 6:23 AM

@Toto:

The TIA (Total Information Awareness) logo reminds me of the "all seeing eye" from Lord of the Rings.

Nick PDecember 9, 2013 9:47 AM

@ Bryan, Petrobras

The government is designing secure processors and stacks under DARPA's Clean Slate program. They have top groups on it too. Two of them already have prototypes and many details in their papers. OASIS, on the other hand, was a program to come up with mechanisms to build intrusion-tolerant networks and a few other things.

Since there's so much interest, I think I'll do the paper release next Friday in the squid forum. It was just going to have the secure hardware papers I'm referencing and some other min-work/max-payoff tech. However, since Petrobras likes that OASIS paper, I'll throw in some of the other OASIS project deliverables as well.

Nick PDecember 9, 2013 10:06 AM

@ Petrobras

"Clive Robinson: "Might be an ideal time for an enterprising group with a totally different focus (subversion resistance) to take over one or more of [these microprocessor plants]"

What would be the best way to crowdfund the needed millions? What country would you like this plant to be moved to?"

That was actually my comment you were referring to. I'm not sure you *can* move a semiconductor plant. In any case, one would be extremely expensive to buy and costly to operate. It would probably be a continual loss for whoever acquired it. This means that, if acquiring for security, it would likely be a group of investors who thought that was worth it. RobertT previously suggested a bunch of smaller countries worried about spying getting together on a fab. That's definitely a possibility. Another is them and private groups like multinationals (or drug lords) funding one that's capable of producing mobile chips as well.

I posted the list of process-node tech because I wanted RobertT's opinion. Thing is, there are many types of chips of various sizes and functions. We can't have a fab for every one. So, how many fabs need to be built/bought, at what process node, to get the most bang for the buck and the lowest operating cost? I think this question must be answered before we see progress.

re Forth

Yes, code injection is one of my main worries about a typeless language. It's the main reason C was avoided for most security kernels back in the day. Now, as I write this, I wonder if it might be possible to design a Forth system that loads code at boot, validates it, and then allows no new code in the system. Might be doable.
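
One hedged way to sketch that load-validate-lock idea (all names here are invented, not from any real Forth implementation): definitions are accepted only during a boot phase, each is checked against a known-good SHA-256 manifest, and then the dictionary is sealed so no new code can enter the system.

```python
# Invented sketch of boot-time validation followed by sealing.
import hashlib

MANIFEST = {}     # word name -> expected SHA-256 of its source
dictionary = {}   # word name -> executable behaviour
sealed = False

def define(name, source, func):
    if sealed:
        raise PermissionError("dictionary sealed: no new code after boot")
    digest = hashlib.sha256(source.encode()).hexdigest()
    if MANIFEST.get(name) != digest:
        raise ValueError(f"word {name!r} failed validation")
    dictionary[name] = func

def seal():
    global sealed
    sealed = True

# Boot phase: the manifest would come from trusted storage; here we
# build it inline just for the demo.
src = "dup +"
MANIFEST["double"] = hashlib.sha256(src.encode()).hexdigest()
define("double", src, lambda s: s.append(s.pop() * 2))
seal()

s = [21]
dictionary["double"](s)
print(s[0])   # -> 42
```

After `seal()`, any further `define` raises rather than admitting new code, which is the "no new code in the system" property.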

RobertTDecember 9, 2013 10:38 AM

@Nick P
Re Buying an old Fab

There are plenty of old fabs available in the world; however, they are available because they lose money: basically their operating costs exceed their revenue. They are in this situation despite having run the fab for years and having developed product specifically for it over the years.


The basic problem is that functionality is actually getting cheaper with each new generation of fab, so a Z80 embedded somewhere on a 40nm chip is cheaper than a 3µm stand-alone Z80 core. Since most of the market pays for functionality, it means that most functions migrate out of the old fabs about 3 generations after the fab is created. Today 22nm is the leading node, so most material will be processed in 40nm, 65nm or 90nm fabs.

ALL product above that node either requires higher voltages OR simply does not require the advanced processing (functions like analog and sometimes RF). The problem with back-filling a fab with these functions is twofold: firstly, the market is small and owned by companies like TI, Analog Devices, ...; even if you have the equivalent product, nobody knows about it, nor do they trust you as a reliable source for these functions.

The minimum operating costs are probably around $5M/month even for a fab that is free, so you need to generate this revenue from whatever product you produce, just to keep the doors open.

Anyway bottom line is that there is no way to justify owning a fab just so you can be sure of the production flow.

BTW, ALL subversion of product at the fab stage is probably done by motivated, enterprising individuals without any cooperation from fab management. WHY would this be any different at a fab that you own?

kashmarekDecember 9, 2013 11:18 AM

With regards to the Amazon package delivering bots (via Slashdot):

eBay CEO: Amazon Drones Are Fantasy

Posted by samzenpus on Monday December 09, 2013 @08:52AM
from the pie-in-the-sky dept.

angry tapir writes "In the race to deliver online shopping purchases faster, drones don't impress eBay's CEO. 'We're not focusing on long-term fantasies, we're focusing on things we can do today,' John Donahue said in an interview. He was reacting to an interview Jeff Bezos, CEO of e-commerce rival Amazon, gave last weekend in which he said Amazon is investigating the use of drones for package delivery."

HolyCrapMarthaDecember 9, 2013 11:34 AM

NSA invades gaming world (from Slashdot)...

http://games.slashdot.org/story/13/12/09/1632242/nsa-collect-gamers-chats-and-deploy-real-life-agents-into-wow-and-second-life

My God, what's next? Will they start visiting brothels, bars, and gambling halls? Shucks, they have probably already been doing that. Well, maybe they will look into churches, televangelist get-togethers, and charities. But that has probably already been done as well.

So, what's left? Political organizations, Congress, the Federal Government....what am I saying...they are part of the Federal Government.

Ubiquitous, disease, cancer, something awful...that about covers it, except it is all paid for by your tax dollars. Can't get much better than that.

AnuraDecember 9, 2013 12:09 PM

"The Creature tells of evil gnomes, coming to destroy our homes, and trolls that come with gun and knife, to threaten our way of life."

BryanDecember 9, 2013 12:42 PM

"The basic problem is that functionality is actually getting cheaper with each new generation of fab, so a Z80 embedded somewhere on a 40nm chip is cheaper than a 3µm stand-alone Z80 core. Since most of the market pays for functionality it means that most functions migrate out of the old fabs about 3 generations after the fab is created. Today 22nm is the leading node so most material will be processed in 40nm, 65nm or 90nm fabs."
I'd kind of suspected this. My thoughts have been to make the CPU with current tech at multiple fabs. Just make the function blocks used in the design auditable, and audit their interactions.

I do like the idea of getting a few small-country governments and some corporations together to pay for a modern fab and keep it up to date. The other option is to have the chips made in multiple fabs. The question becomes: which country to site the fab in? I'd be looking for a very clean human-rights record.

Clive RobinsonDecember 9, 2013 12:43 PM

@ Nick P,

    What did you mean by that?

A few things are quite critically affected by the thread type.

Essentially, when you create a new dictionary word by the usual,

    : name xxxx yyyy zzzz ;

You end up with a dictionary entry of,

    [head][xxxx][yyyy][zzzz][end]

Where the head consists of the following,

    [len][name][link][code]

And the xxxx/yyyy/zzzz are entries from the parameter area, and 'end' provides the code for the return function, which leads into the next word in the program.

The head 'len' field gives the number of chars in the 'name' field, the 'link' field points to the next word in the dictionary, and the 'code' field provides a jump or call address for the code which handles this dictionary word.

The parameters can be "constant", which loads a value onto the stack; "variable", which loads the variable's address onto the stack; "user", which identifies an offset into the user memory area; or "code", which can be either a link to another word in the dictionary or a jump to a block of native code.

Your virtual machine can work with different types of fields in the head or parameter area, depending on various trade-offs.

For instance, if the number of dictionary words is small then the DOCOLON head can be stripped back to nothing by using a lookup table instead; likewise the word parameter entries could be reduced to just a byte token. Whilst saving program space, it does slow the VM interpreter down. At the other end of the scale the dictionary can be dispensed with by the VM running a compiled program where words are replaced with inline assembler macros; this gives the fastest execution speed but a very large compiled program without functions; in this respect it's similar to "loop unrolling". Originally Forth was optimised for minimum memory footprint; the later standard Forths such as FIG-Forth on 8- and 16-bit machines sat in the middle, but the more recent ones for PCs with effectively unlimited memory tend to go for the compiled option.
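
The dictionary layout and DOCOLON behaviour described above can be sketched concretely. This is a hedged, minimal illustration (invented names, not a real Forth): each `Word` mirrors the [len][name][link][code] head ('len' being implicit in a Python string), a colon definition carries parameter cells, and DOCOLON walks them in turn.

```python
# Minimal illustrative sketch of an indirect-threaded dictionary.
class Word:
    def __init__(self, name, link, code, body=()):
        self.name = name    # 'name' field
        self.link = link    # 'link' to the previous dictionary word
        self.code = code    # 'code': primitive handler or DOCOLON
        self.body = body    # parameter cells

stack = []

def do_lit(w):              # "constant" parameter: push its value
    stack.append(w.body[0])

def do_dup(w):
    stack.append(stack[-1])

def do_plus(w):
    b = stack.pop()
    stack.append(stack.pop() + b)

def do_colon(w):            # DOCOLON: execute each parameter cell,
    for cell in w.body:     # then fall through (the 'end'/return)
        cell.code(cell)

five = Word("5", None, do_lit, (5,))
dup  = Word("dup", five, do_dup)
plus = Word("+", dup, do_plus)
dbl  = Word("double5", plus, do_colon, (five, dup, plus))  # : double5 5 dup + ;

dbl.code(dbl)
print(stack.pop())   # 5 dup +  ->  10
```

The token-threaded variant Clive mentions would replace the cell pointers in `body` with byte tokens looked up in a table (smaller image, slower dispatch), and the fully compiled variant would inline each primitive's body at every use site.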

As for,

    Now, as I write this, I wonder if it might be possible to design a Forth system that loads code at boot, validates it, and then allows no new code in the system. Might be doable.

Fairly easily: if you think about Forth on Harvard-architecture CPUs, much of the work has been done, and modifying the outer interpreter loop is likewise fairly simple. The problem for the programmers is not defining dictionary-based data, just user-area data. Strictly speaking it would not be Forth but some other "Forth-like" language, of which quite a few exist.

PetrobrasDecember 9, 2013 3:59 PM

@RobertT: "WHY would this be any different at a fab that you own?"

This is why I insisted on how to audit a chip with a microscope.

If you own the fab, you may also change some details to make that audit cheaper. (What I have in mind: do not coat or paint the transistors on the processor, limit the number of transistors to a low number, use a die that is transparent to the relevant microscope, ...)

(Sorry @Nick P for the misattribution of your comment.)

Clive RobinsonDecember 9, 2013 4:15 PM

OFF Topic :

It would appear that the various US federal agencies still use floppies for data transfer, with only one or two using CD-ROMs. And if other sources are correct, Win XP and even Win 95 are common. Apparently the reason is in part the cost of their internal IT market being way, way too high (thus next to no one signs up, so costs don't come down...).

http://gawker.com/federal-agency-still-uses-floppy-disks-1478600670

Anyone care to think about the security implications of this...

65535December 9, 2013 4:41 PM

@RobertT

“…minimum operating costs are probably around $5M/month even for a fab that is free, so you need to generate this revenue from whatever product you produce, just to keep the doors open.”

I agree, and that is probably on the low side. Fabs require economies of scale and booked orders. The maintenance of old fab production machines is quite high.

But, there are R&D fabs that might do specialized jobs. The price per unit would be rather high.

“BTW ALL subversion of product at the Fab stage is probably done by motivated enterprising individuals without any cooperation from fab management. WHY would this be any different at a fab that you own?”

I would guess if you targeted a disaffected worker with enough money (NSA size pay-offs) you could persuade him to subvert the fab (or at least record details of the workings of the fab for later subversion).

@ HolyCrapMartha

Those are signs that the NSA has hit rock bottom. The NSA has run out of real terrorists to target and is moving into vice style crimes.

Terrorists probably use courier methods or communications provided by Russia and China. The term "National Security" is being stretched to mean almost anything.

BryanDecember 9, 2013 5:29 PM

"Those are signs that the NSA has hit rock bottom. The NSA has run out of real terrorists to target and is moving into vice style crimes."
Nah, they just wanna improve their own online gaming scores, and they can no longer find enough real targets to point their massive spy apparatus at. ;}

AdjuvantDecember 9, 2013 5:47 PM

@Clive, Nick P, Figureitout: Many thanks for the input. I'll play messenger later tonight with the qi-hardware list. Most of all, I agree with Clive's recommendation that any such project be developed as an out-of-band device in order to remain useful against a capable adversary. I agree that the threat model is fuzzily defined, and I'm hoping to build some useful bridges that might benefit all parties.

@Petrobras w.r.t limited transistor count and auditability of a proposed design, allow me to make another effort at cross-pollination here. With the understanding that this question comes from an essentially non-technical individual with some technical acculturation, has anyone encountered or formed an opinion on the CPU architecture put together by one Russell Fish III and his vehicle VenRay Technologies? The specifics are outlined in this article-qua-manifesto which I happened to stumble upon while trying to inform myself about the future prospects of Moore's Law: http://www.edn.com/design/systems-design/4368705/The-future-of-computers--Part-1-Multicore-and-the-Memory-Wall

It comes to my mind chiefly because it purports to achieve high performance with a low transistor count while utilizing common processes, specifically by integrating CPU cores onto DRAM dies. I'll note up front that certain commenters seem to consider the idea straight-up bonkers: http://hothardware.com/News/CPU-Startup-Combines-CPUDRAMAnd-A-Whole-Bunch-Of-Crazy/
But from a position of genuine ignorance, I wonder intuitively whether there might be anything useful in this idea for security or auditability purposes?

Respectfully submitted,
Adjuvant

AnuraDecember 9, 2013 6:58 PM

http://www.washingtonpost.com/politics/federal_government/big-tech-companies-lash-out-at-government-snooping/2013/12/09/752171a6-612e-11e3-a7b4-4a75ebc432ab_story.html

Twitter Inc., LinkedIn Corp. and AOL Inc. joined Google Inc., Apple Inc., Yahoo Inc., Facebook Inc. and Microsoft Corp. in the push for tighter controls over electronic espionage. The group is immersed in the lives of just about everyone who uses the Internet or a computing device.

I'm a bit cynical in that I feel the thought process behind this is "If the NSA keeps it up, people will stop giving us their information and we won't have full knowledge of what everyone likes, buys, reads, searches for, and where they go, who they know, etc."

Dirk PraetDecember 9, 2013 7:11 PM

@ HolyCrapMartha

Re. NSA invades gaming world

Sniffing out AQ terrorists in WoW and SecondLife ? Probably the most creative excuse ever to goof off at work. I mean, seriously ? Your tax dollars at work ...

Nick PDecember 9, 2013 7:22 PM

@ RobertT

"The basic problem is that functionality is actually getting cheaper with each new generation of fab, so a z80 embedded somewhere on a 40nm is chip is cheaper than a 3 um stand alone z80 core."

Interesting. Thanks for this and all the useful details.

"The minimum operating costs are probably around $5M/month even for a fab that is free, so you need to generate this revenue from what ever product you produce, just to keep the doors open.

Anyway bottom line is that there is no way to justify owning a fab just so you can be sure of the production flow."

I appreciate the estimate and opinion. I expected it to be over ten million a year, but nearly $60mil a year would make most rich fat cats spit their drinks across the bargaining table. It would definitely need to be subsidized if it were to happen at all.

"BTW ALL subversion of product at the Fab stage is probably done by motivated enterprising individuals without any cooperation from fab management. WHY would this be any different at a fab that you own?"

Because others and I have been dealing with *that* problem for a long time now. Thing is, unless I have a fab or am partnered with one I have no insight into the process *at all.* They can say whatever they want and I'm operating on blind faith. If I can manage personnel, operation, etc., then I take measures that have a shot at working against TLA's. That's a world of difference in assurance.

@ Wael

"I'll need some time..."

Well RobertT covered the price/performance issue pretty well. The remaining issues from all of our fab discussions are as follows:

1. Of existing fabs, which are trustworthy against who and why.

2. Of existing process node tech, which are cheapest to audit by buyer for potential subversion and what are best methods?

3. What processor types make most sense to do a knock off of to get most bang for investment buck.
3a. Servers.
3b. Desktops.
3c. Special purpose devices such as networking, storage, or thin clients.
3d. Embedded chips with fewer resources.

So, there's plenty of things to discuss far as hardware openness and subversion go. These issues need good answers before any investment is made.

@ Clive Robinson

"At the other end of the scale the dictionary can be dispenced with by the VM running a compiled program where words are replaced with inline assembler macros, this gives the fastest execution speed but a very large compiled program without functions in this respect it's similar to "loop unrolling"."

That's giving me ideas of how to do a Correct by Construction style of Forth. Not coherent enough to publish yet, though.

" Strictly speaking it would not be Forth but some other "Forth like" language of which quite a few exist."

I figured. Might still have the advantages, though.

@ Petrobras

"This is why I insisted on how to audit a chip with a microscope."

RobertT once linked to a company that specializes in reverse-engineering chips. It was very interesting reading on how they dealt with the problem layer by layer. The difficulty and cost involved made me conclude it wouldn't be an easy way to verify chips, as only the tiniest portion could be checked and you'd have to hope the subversion was visible.

So, my focus shifted to inspecting chips that weren't so tiny/complex, or securing the production process. There's also good work going on in academic circles on designing circuits that can be combined with potentially malicious ones on a SoC to reduce risk. All that is in its infancy, though, while dealing with technical and personnel issues is a bit better understood. So, I picked a hard option to look into, but it's because the alternatives seem as hard or harder.

@ Adjuvant

I'm glad you shared this link because I think it might tell me something important. RobertT pointed out that the logic cells are important in the process as they're often a black box supplied to reduce design time. I wondered how hard it would be to make some open, trustworthy cells. The link gives us an idea:

"Creating and characterizing a logic library is not difficult, but it was time consuming. We ultimately created just over 60 digital cells22 and 14 analog cells and supercells.23 The CPUs were built from these cells."

The rest of the CPU on cheaper memory fab process is also brilliant. There could be something to use here for... something. Still not sure as those basic questions I outlined earlier must be answered. I'm keeping this bookmarked though.

@ Dirk Praet

Best job at the NSA. Too bad I can't stand WoW or NSA. I'd probably sign up otherwise.

FigureitoutDecember 9, 2013 8:46 PM

Clive Robinson
all before you begin the self-whipping
--Oh yeah, I just love to spank myself. :) In all seriousness, I'm getting a book from the library on Forth w/ a foreword from Mr. Moore himself. Bruce's books were there too; did a quick flip through Cryptography Engineering. Also got an embedded C book, and guess which chip they're featuring: the 8051 (like I said, more spanking :). I've got a bunch of other things first, like lots of basic circuits I need to get familiar w/, so I've always got lots to do and then can be a little more productive on a secure PC. Need some years, but I don't know what the police state will be like then, so we need some non-political action *cough Nick P* now; and I'm not trusting someone saying they haven't flashed a virus or hidden commands in chips, and hell no bluetooth.

Bruce ClementDecember 9, 2013 9:45 PM

FreeBSD removes hardware random number generators from /dev/random

Not completely, but their output will be sanitized through Yarrow with the rest of their input "we are going to backtrack and remove RDRAND and Padlock backends and feed them into Yarrow instead of delivering their output directly to /dev/random" FreeBSD status report

RobertTDecember 9, 2013 11:24 PM

@petrobras
This is why I insisted on how to audit a chip with a

Have you ever looked at the surface of a 90nm chip with an optical inspection microscope?

I'm guessing the answer is NO, because the truth is you can't really see anything, just a blurry mess of golden metal. Now, you can use optical immersion microscopy techniques with specialty DUV optics and a deep-UV laser illumination source, but it is not an easy flow; OR you can inspect with a SEM (scanning electron microscope), maybe $400/hr, I don't know exactly, I haven't used one recently. This inspection does not really tell you anything about the electrical connectivity of each via/contact. (Vias connect metal layers, while "contacts" usually refers to Metal1 connections to poly or diffusion/well layers.)

Via "poisoning" has been a major issue for fabs for at least the last 15 years, so it is the obvious place to hide a fault that would cause subverted behavior. It's a big issue because the only way to discover it is by cross-sectioning the via (basically make multiple cuts across something that is less than 50nm wide) Typically this is done with a FIB (focused Ion beam) machine and takes about an hour or 2 hours per via. A modern processor chip might have over 500M vias on it, so checking everyone on one chip would only take 1B hours and cost about $1T USD. (around $1K/hr). This is why you need to know what you are looking for before you start down the path of verification by inspection.

If I were reverse-engineering a chip, I would not inspect every via; it is completely unnecessary. I just assume that what looks like a via is a via. (HOWEVER, sometimes when you want to confuse someone trying to steal a design by reverse-engineering it, you might intentionally add structures that look like they are connected with a via but where the via itself is missing. It is very difficult to discover this sort of thing, especially if the circuit will sort-of function when the connection is made but only works correctly when the via is absent.)

You don't seem to understand that 20nm is about 1/50th the size of the minimum structure that a typical optical inspection microscope can resolve; even at 130nm you're dealing with structures that are about 1/4 the wavelength of green light (about 500nm). You might be able to sort-of see 130nm cells, but you have no idea what the edges of the structure really look like; it's all just a diffracted blurry mess.

In a perverse way, the difficulty of just getting a 20nm function right is likely to be the main force that prevents the functionality from being subverted at the fab. Think of it this way: if the fab were capable of properly producing structures at 1nm, then they would already be doing it. Making intentional processing/lithography errors that have a known result but remain undetectable is probably an order of magnitude more difficult than just making the circuit they asked for. In technology, more difficult means more likely to go wrong, which means more likely to attract attention (which is always the path to discovery).

Bottom line: I don't hold out much hope for you closing the loop completely through optical inspection.

RobertTDecember 10, 2013 1:21 AM

@Nick P
If I understand correctly, you want to own the fab so that you can understand what the fab guys are doing?

That's some crazy logic. I don't own a fab, nor do I aspire to own one, yet I suspect I understand far more about this function than I would ever understand if I were just the owner. Owning a fab, in my experience, does not suddenly make you an expert in anything apart from possibly accountancy fraud. Most small-fab owners seem to acquire fraudulent accounting skills extremely quickly.

To understand what is possible with a modern fab you'll need to understand cutting-edge lithography, advanced directional etching, and organic chemistry and physics that's not nearly mature enough to be printed in any textbook. These skills are all combined to repeatedly create structures at 1/10th the wavelength of the light being used. Go back just 10 or 15 years and you'll find any number of real experts (with appropriate PhD qualifications) who were willing to publicly tell you just how impossible the task of creating 20nm structures was, yet here we are!

Not sure why you believe that owning the fab will suddenly give you these extremely rare technical skills. If you don't have the skills, and I mean really have the skills (really be someone who knows the subject and is capable of leading-edge innovation), then you must accept everything that your technologists tell you, even when they're intentionally lying. I can't see why this is any better than simply trusting someone else to properly run their fab and not intentionally subvert the chip-creation process.

In the end it all comes down to human and organizational trust.

I'd say the greatest revelation of the whole Snowden affair is the relative ease with which the NSA has perverted individuals, organizations, companies, law makers and the entire judicial process. What's more disturbing, at least for me, is that we're not even sure why they've done this, only one thing is certain: it is not for the publicly touted reasons.

PetrobrasDecember 10, 2013 3:15 AM

@RobertT: "If you dont have the skills? then you must accept everything that your technologists tell you, even when they're intentionally lying.;"

If you want uncoated processors with only two metal layers (for easier checks), your skills won't be enough, you also have to own the fab to ask for such requirements.

@Adjuvant: "has anyone encountered or formed an opinion on the CPU architecture put together by one Russell Fish III and his vehicle VenRay Technologies?"

Thank you for the link, this processor is a very good choice! Quoting your link and the next article http://www.edn.com/design/systems-design/4368858/Future-of-computers--Part-2-The-Power-Wall of the same series:

"The Borealis [...] 32-bit microprocessor cores [are] implemented in 22,000 transistors including multiplier and barrel shifter. The virtual memory controller adds another 3,500 transistors. The architecture could be called a modified RISC. [...] This was easily routable with 3 metal layers. The caches added an additional 393K per core. You can run your gcc benchmarks on TOMI Borealis at http://www.venraytechnology.com"

@RobertT: "It's a big issue because the only way to discover it is by cross-sectioning the via (basically make multiple cuts across something that is less than 50nm wide) Typically this is done with a FIB (focused Ion beam) machine and takes about an hour or 2 hours per via. A modern processor chip might have over 500M vias on it, maybe $400/hr dont know exactly haven't used one recently."

This is exactly why I was looking at the armv6l processor with only 35,000 transistors, and why the Borealis processor with only 22,000 transistors seems very nice. That costs $35M to check all via connections on *one* processor, so it becomes feasible.

And I guess that with 35,000 transistors, it becomes very difficult to subvert the processor without introducing visible bugs. So it may be enough to check the transistor layout: you'd need $100K to check one processor, assuming 10 seconds for each transistor and full documentation of the transistor layout. This becomes possible.
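
The two estimates above roughly multiply out, under assumptions of mine (not stated in the thread): about one via per transistor at ~1 hour of FIB time each, and layout inspection at 10 seconds per transistor, both billed at roughly $1K/hr:

```python
# Rough check of the $35M via-check and ~$100K layout-check figures.
transistors = 35_000
rate_usd_per_hour = 1_000

via_check_cost = transistors * 1 * rate_usd_per_hour   # ~$35M
layout_hours = transistors * 10 / 3600                 # ~97 hours
layout_check_cost = layout_hours * rate_usd_per_hour   # ~$97K, i.e. ~$100K
print(via_check_cost, round(layout_check_cost))
```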

@RobertT: "Go back just 10 or 15 years and you'll find any number of real experts (with appropriate Phd qualifications) that were willing to publicly tell you just how impossible the task of creating 20nm structures was, yet here we are!"

The ex-boyfriend of someone I know is a physicist who, in 2006, was five years ahead of the most recent processors on that subject. I don't know, though, whether he would be willing to help us just for the heck of it.

scripted lynx userDecember 10, 2013 3:19 AM

@Zaphod: "Anyone else experience the site being down around 21:15 GMT?"

Yeah, me too. If the moderator asks for it, I can produce detailed logs from lynx (see the link).

PetrobrasDecember 10, 2013 3:25 AM

typos:

s/it becomes very difficult to subvert the processor/it becomes very difficult to subvert some vias/

Z.Lozinski December 10, 2013 4:37 AM

@Zaphod: "Anyone else experience the site being down around 21:15 GMT?"

Yes. It looked like the site was unreachable, but I didn't check the logs to be certain.

Clive RobinsonDecember 10, 2013 5:29 AM

@ Figureitout,

With regard to real "books" on Forth, they are getting a little like "hen's teeth" to find these days.

That said, if you have a hunt around you can find PDFs of the books, often on the writers' home pages.

However, the link to 4th I gave above has a 500-page manual as a PDF. I'd have a look at it because, although 4th is not a "True Forth", it is certainly a good threaded language to study; being written in C makes it fairly easy to understand and to link into your own code.

If you can track down "Frank Sargent's Pigmy Forth", it was small, fast and, when I originally played with it, easily understood.

The trick to understanding threaded languages like Forth is to take your "current programmer's head" off and put your logic/hardware head on ;-)

In essence what you are doing is building a virtual CPU as an emulator on top of an existing CPU. The result is often way greater than the sum of the parts ;-)
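To make the "virtual CPU on top of a real CPU" point concrete, here is a toy sketch (in Python, purely for illustration) of the threaded-code idea: a "word" is either host code (a primitive) or a list of further words, and running a program is just walking the thread. A real Forth flattens this into an instruction pointer and a NEXT routine rather than using recursion, but the shape is the same.

```python
# A toy indirect-threaded inner interpreter, Forth-style:
# a "word" is either a Python function (a primitive) or a list of words.

def run(word, stack):
    if callable(word):                 # primitive: execute host code
        word(stack)
    else:                              # colon definition: a thread of words
        for w in word:
            run(w, stack)

# A few primitives operating on the data stack.
def dup(stack):
    stack.append(stack[-1])

def add(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def lit(n):                            # build a word that pushes a literal
    return lambda stack: stack.append(n)

# : DOUBLE DUP + ;    and    : QUAD DOUBLE DOUBLE ;
DOUBLE = [dup, add]
QUAD = [DOUBLE, DOUBLE]

if __name__ == "__main__":
    s = []
    run([lit(5), QUAD], s)
    print(s)                           # 5 QUAD leaves [20]
```

Note how QUAD is defined purely in terms of DOUBLE with no new host code at all: that composition of threads is where Forth's "greater than the sum of the parts" feeling comes from.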

Oh, and if you are thinking about "rolling your own", search out the "Jupiter Ace" documentation on the net. It was a 1980's Z80-based system, in many ways very similar to Sinclair's ZX80. At one time or another the Z80 assembler file, circuit diagram and PCB layout were all up on the net, and the user manual had a nice intro to Forth that young children found understandable. Yup, back in the 1980's there were eight-year-old Forth programmers; my neighbour's kid was one, much to my annoyance, as she was better at it than I was and won 1st prize at the local computer club for a game of "snakes" she wrote. I didn't even place with my "Space Defender" game, written in BASIC on the Apple ][ (which I still have).

RobertTDecember 10, 2013 6:31 AM

@Petrobras

I'm not looking for a long drawn out discussion but I think there is one fundamental concept that you need to understand.

Basically, if ANYONE on the team understands how to achieve resolution beyond the normal wavelength-related focus limits of typical optical processes, THEN you must assume that they have the ability to create structures which you cannot properly inspect. This skill means they can potentially subvert chips without making obvious alterations.

Because knowledge is never destroyed, we (society) simply can't ever go backwards even if we really want to. So a 2014 security processor made on a 1990 1um process (with just 35,000 transistors) sounds great until you realize that the die area required to implement the function could potentially contain other active structures/connections so tiny that they were invisible to your inspection methodology. It is possible because today's best technologists understand how to achieve this feat. That knowledge never dies, so once discovered it exists forever as a potential subversion method/threat. Ignoring the facts does not eliminate the attack vector!


PetrobrasDecember 10, 2013 6:55 AM

@RobertT: "That knowledge never dies so once discovered it exists forever as a potential subversion method/threat."

But it cannot subvert processors that were produced previously.

So you may now:

- buy armv6/Borealis chips in five-year-old computers, physically audit them, and use them right away.

- produce some armv6/Borealis chips at 130nm (ideally uncoated and with only two metal layers), and store them unused for 5 years before physically auditing them.

(Thank you all for your patience)

Nick PDecember 10, 2013 9:41 AM

@ RobertT

Your post seems to assume that my strategy stops with ownership of the fab. It certainly doesn't. The threat is a combination of technical, physical and personnel issues, and controls would be in place for each of them. I've put the most mental effort in my solution into the trusted-personnel problem, which is neither easy nor guaranteed, but would fail entirely if it were publicly known. That's why I haven't mentioned it in these comments.

The problem isn't as hard as you make it out to be, though. You're so brilliant and knowledgeable about the technical aspects of it that this is what you're focusing on. The problem isn't outthinking every single thing the technicians might do. The problem is ensuring they wouldn't try. That's a human-loyalty problem, and it has been successfully handled many times in history. A hundred times as many failures, for sure. Focusing on that problem, on what tilts the odds in the defence's favor, and on doing what little auditing of results is possible will be how success happens. Any other route will just put up an illusion of security that results in massive compromise.

re new tech vs old

"So a 2014 security processor made on a 1990 1um process (with just 35000 transistors) sounds great until you realize that the die area required to implement the function could potentially contain other active structures / connections so tiny that they were invisible with your inspection methodology."

Very well said. I'll try to remember this myself.

re 20nm

You keep mentioning it's so hard to create structures on this level and even harder to do devious things. Does this mean that you think the inherent nature of this process node makes it safer at the moment from compromise? In other words, the middle parts between specifying the hardware and fabbing it are where the only real risk is? If so, then going straight to 20nm with existing fabs in Asia seems to be the best solution to that part of the problem.

And maybe keep the cost down by fabbing an FPGA or CPLD instead of a specific processor design, at least for the embedded chip. Then the fab would constantly crank out the same thing, but the actual uses would be versatile. The most powerful chip, for server or desktop use, would probably be a dedicated ASIC, of which there are open starting points. 20nm gives us many possibilities the other fabs I looked at didn't.

@ Petrobras

Knowing one person you might trust won't help; fabs are huge operations. The buying-older-chips solution might help and is what I recommended previously on this blog. The problem is that it will eventually result in a situation similar to the pre-1986 machine-gun market, where the numbers are constantly declining due to wear and the supply/demand difference drives prices up. Unlike machine guns, we can't replace individual worn parts of VLSI circuits, so the tens of thousands that exist on the market will disappear quite quickly.

A solution to the problem that allows use of newly produced chips is inevitably necessary and might be necessary sooner than we want.

Re old processors

I'd go back farther than that in general. The NSA had less authority, or will, to take risks on subverting the chips themselves as you push back toward the 90's. My range is 2003-2004 or older. There's still plenty of use to be gotten out of top chips made in that era. One piece of equipment I have is from 2003, which seems safe b/c it was a minority player outside of NSA's target demographic. It still has a 1GHz processor, vector extensions, and plenty of RAM. :)

ZaphodDecember 10, 2013 9:41 AM

Thanks scripted and Z.L.

Do we (collectively) need to be worried about the downtime?

I guess, if we do, then no one (apart from me) will see this post - and *I* may see some bogus replies.

@Clive - maybe we should all reveal our own most obscure implementations of Defender :-)

Z

BryanDecember 10, 2013 1:03 PM

@Nick P

If so, then going straight to 20nm with existing fabs in Asia seems to be the best solution to that part of the problem.
I came around to this stance when I thought about how to check a billion-plus-transistor chip. All checking would have to be automated, and even then it would be slow. The other thing is, I also figured that to be successful the project would need to produce a competitive product. That also drove home the need to be current with the fab tech. It's also much harder to deliberately bugger up something in a specific exploitable way when the process is barely understood than when the fab methods are well understood.

@Nick P

The problem isn't as hard as you make it out to be though. You're so brilliant and knowledgeable about the technical aspects of it that this is what you're focusing on. The problem isn't outthinking every single thing the technicians might do. The problem is ensuring they wouldn't try. That's a human loyalty problem and has been successfully handled many times in history. Hundred times as many failures, for sure. Focusing on that problem, what tilts odds in defence's favor, and doing what little auditing of results is possible will be how success happens. Any other route will just put an illusion of security up that results in massive compromise.
I also tend to be too focused on the tech aspects, but then I'm not a people person. I'm a thinker.

How about applying two-man coding methods to chip design? Two people work together on one task, both making inputs to the task as well as checking each other's work, plus spot-check auditing of the work along the way. I see areas like the instruction and data paths needing to be carefully checked to confirm they are clean.

On chip inspection: At each step of the fab process, wafers could be grabbed and checked against what they are expected to look like. The composite of those inspections could then be checked for matching the functional design.

There could also be careful design of the chip so it is easy to match up chip parts to the original design. Keep the major functional units separated by distinct border zones.

Here's a nice cross-section diagram showing the layers in an early-2000s CMOS chip and package. The actual transistors on a real chip would be much smaller than shown:
https://en.wikipedia.org/wiki/File:Cmos-chip_structure_in_2000s_%28en%29.svg

More for the uninitiated:
https://en.wikipedia.org/wiki/Standard_cell

I'm playing catch-up on fab stuff, and I'm not sure how deep I want to go in this direction. I'm more interested in the CPU/MMU/etc. features that could make high security easier to assure even in the presence of bad code, i.e. how to partition a system so that a breach in one area is contained to that area alone. Then, once I have that design, design an OS and development environment that makes use of the high security as easily and naturally as possible.

AdjuvantDecember 10, 2013 1:04 PM

Since my previous contributions seem to have been so well-received, allow me to mix my metaphors and throw one last curveball into this pot. In most high-level discussions I've encountered about variant CPU architectures, there always seems to be at least one fellow who intercedes vociferously on behalf of a particular lost cause that seems to inspire almost religious fervor among its adherents (who tend to be concentrated in the Northeast US). Specifically, I am referring to the long-lost Lisp Machine. Seeing as this pleading has failed to materialize here of its own accord, allow me once again to cross-pollinate with a link to a fine example of the genre from some fine Quijote who, lacking a better alternative, appears to be trying to instantiate a virtual Lisp machine on x86-64 hardware single-handedly, starting from scratch.

First, an excerpt that may pique some interest: "Educated persons who read Ken Thompson’s “Reflections on Trusting Trust” throw up their hands in stoic resignation, as if they were confronted with some grim and immutable law of nature. But where is the law of physics which tells us that any computation must be broken up into millions of human-unintelligible instructions before a machine can execute it? Not only is it possible to build a CPU which understands a high-level programming language directly, but such devices were in fact created – many years ago – and certainly could be produced again, if some great prince wished it. It is also eminently possible to build a computer which can be halted by pressing a switch, and made to reveal – in a manner comprehensible to an educated operator – exactly what it is doing and why it is doing it. Can you buy such a computer at your local electronics store? Of course not. The Market, that implacable Baal, Has Spoken! – it demands idiot boxes. And idiot boxes are what it will get."

For discussion, reflection, and (perhaps) assimilation: http://www.loper-os.org/?p=1299


Nick PDecember 10, 2013 1:20 PM

Re Forth

I think everyone considering Forth might find this an excellent read. It goes deep into the nature of Forth, praises it, criticizes it, and reflects on the author's personal experiences with it in combined software/hardware systems. Relevant to this discussion.


http://www.yosefk.com/blog/my-history-with-forth-stack-machines.html

Another takeaway for me was that LISP makes even more sense due to (a) having similar benefits wrt flexibility and (b) having had vastly more resources put into it that might benefit the next project. It also has verified implementations, chips, a security kernel, compilers to efficient C, etc. It bridges with mainstream platforms and security efforts more effectively, is I think what I'm trying to say.

CallMeLateForSupperDecember 10, 2013 1:25 PM

@ Clive
"Anyone care to think about the security implications of this..."

No thinking required, Clive. I noted Win-95 and Win-XP and reflex-gagged. The fact that those creaking malware magnets are still used by gummit agencies means that they have a much larger problem than figuring out how to migrate from floppy disks. Just four months from now, XP will be a crewless hulk, guns spiked, totally defenseless, ripe for the big pwn. "I didn't have the funds to upgrade" will not pass muster (IMHO).

CallMeLateForSupperDecember 10, 2013 2:05 PM

Regarding the recent open letter to Obama, from Google et al, I was struck by how seamlessly it blends concern for citizen rights with concern for the health of the authors' respective businesses. Presumably any Congress critters who are not moved to action by the former will be moved by the latter. We can only hope that *something* will break the constipation.

This part of the letter particularly irks me: "...this summer’s revelations highlighted the urgent need to reform government surveillance practices..." Guys and gal, summer was *months* ago; did it really take this long to draft a letter?

AdjuvantDecember 10, 2013 5:32 PM

@Petrobras Thanks, I didn't catch up with the relevant portion of that thread. If I'm going to be "moving in" here, I guess I'd better be more thorough in keeping up. I see that Stanislav Datskovskiy himself had already linked to his blog post, but I think it does stand the reposting!

In any event, tangential to that thread, there's another thought that's been gnawing at me for some time but that I don't have the technical acumen to scratch the itch on. This is with respect to practical security of x86 and x86-64 BIOS, which has been a concern in many quarters for some time.

There have been some pro-active mitigation efforts in the industry, such as HP's BIOSphere work (the soundness of which I am entirely unqualified to assess):
http://www.hp.com/hpinfo/newsroom/press_kits/2013/HPEnterpriseSecurity2013/CMIT_Security_Release.pdf

I note also Wael's suggestion in the linked thread to flash a CoreBoot BIOS image, then "disable BIOS upgrades" and then air-gap.

But intuitively, these solutions seem less than satisfactory to me. It is my understanding (via Wikipedia et al.) that in the past BIOS chips were produced via mask-programmed ROM that was not alterable after manufacture. Applying KISS, wouldn't the simplest and most fail-safe mitigation be a return to true ROM -- perhaps with socketed chips that could be replaced in order to "upgrade"? Is there some major technical or economic hurdle to this idea, or is it simply a design choice that has become outmoded for reasons of convenience and remains so because of industry inertia?

I have noticed in my Internet wanderings that certain very high-end gaming-oriented motherboards do include socketed BIOS chips to allow for custom BIOS mods for extreme overclocking (though I'm quite unclear on the details). Perhaps that might be a place to begin research for practical COTS projects?

I am assuming, of course, that one isn't here concerned with subversion at the fabrication stage, or that the intention is to establish a trusted supply chain for BIOS chips.

Curious to hear people's thoughts on this, with the understanding that this is a bit of a tangent.

RobertTDecember 10, 2013 5:41 PM

@NickP
Re: 20nm fabs
It is incredibly difficult to get anything to function at 20nm, so yes, if you find a working recipe you're unlikely to fiddle with it in any way; you'll just follow the recipe. That said, there are definitely exploits unique to the deep-submicron process realm that are only evident to practitioners skilled in the art of creating these structures.

I've mentioned OPC (optical proximity correction) before, and at 20nm there remain ways to control chip connectivity through manipulation of the OPC process. Typically these structures are automatically generated, VERY secret and definitely proprietary information. This means that even if you inspect the masks, you don't really know why any particular OPC structure is done exactly as it is (why focus bars instead of hammer-heads; you don't even know the nature of the problem the OPC algorithm is trying to correct for). You basically have to trust the OPC algorithm; there is no other way.

I've developed several other subversion methods specifically targeted at deep-submicron processes and their unique flow methodologies, but I'm not ready to reveal these methods because I don't want to educate my potential adversaries, nor do I want them aware of the structures that would attract my attention.


Clive RobinsonDecember 10, 2013 7:54 PM

@ RobertT, et al,

    That knowledge never dies so once discovered it exists forever as a potential subversion method/threat. Ignoring the facts does not eliminate the attack vector!

Some time ago I assumed that at some point we would have to secure systems built with technology we could not check and therefore could not trust.

So whilst trying to build trusted technology is a laudable endeavour, I haven't thought for quite some time that it's either cost effective or even practical in any meaningful definition of the word.

Which is why I started looking at ways to mitigate subverted technology and thus still be able to build secure systems from it.

The question is how. I've indicated some ways in the past; perhaps it's time others think of ways to do so.

RobertTDecember 10, 2013 11:00 PM

@Clive Robinson
We've talked about massively parallel processing architectures in the past, and I still think this is the best solution. Imagine a chip with 1000 independent CPUs all working on a problem.

It is conceivable that each core could use very simple steps to encrypt its local and global memory, ensuring other cores could make no sense of memory reserved for another. Doing just this one step simplifies the job of preventing rogue applications from discovering key information stored in common memory or in another core's local memory; it is a kind of hardware sandboxing strategy. A very simple encryption built around soft memory-address offsets and simple XORs with a changing RNG secret key is sufficient to eliminate most security concerns about processes accessing memory intended for others.
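A toy model of that scheme follows. This is my sketch of the idea, not RobertT's actual design: real hardware would use much wider, periodically rekeyed values rather than a single byte, but the principle that each core's key/offset pair renders its memory gibberish to any other core is the same.

```python
import os

class CoreMemory:
    """Toy model of one core's view of shared memory: each core holds a
    secret key and a soft address offset; stores and loads XOR the data
    and displace the address, so another core's raw view is scrambled."""

    def __init__(self, ram, offset):
        self.ram = ram                       # shared bytearray
        self.key = os.urandom(1)[0]          # per-core RNG secret key (toy: 1 byte)
        self.offset = offset                 # soft address offset for this core

    def store(self, addr, byte):
        self.ram[(addr + self.offset) % len(self.ram)] = byte ^ self.key

    def load(self, addr):
        return self.ram[(addr + self.offset) % len(self.ram)] ^ self.key

if __name__ == "__main__":
    ram = bytearray(256)
    core_a = CoreMemory(ram, 17)
    core_b = CoreMemory(ram, 90)
    core_a.store(0, 0x42)
    print(hex(core_a.load(0)))    # core A reads its own byte back correctly
    # core B, with a different key and offset, sees only unrelated bytes
```

As RobertT says, this isn't cryptography; it just denies a rogue core any straightforward reading of its neighbours' memory.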

I'd add a hypervisor say every 16 cores with the selection of which core is the hypervisor being a soft decision driven by a TRNG at powerup.

None of this is cryptographically secure in any real sense; it's just so unbelievably complex that it'll take another 10 or 20 years to figure out how to subvert the process, at which time you'll need a process 10 times more complex to get another 10 years of security.

Unfortunately, as we have seen in the past, most programmers are terrible at understanding parallelism and inherently code for a single core. They just don't think in terms of tasks that can be done in parallel and of dividing those tasks across an array of cores. Even CPUs with very advanced interrupt structures often get used in RTOSes with simple round-robin task allocation. When you ask why, you get gibberish answers that basically come down to... serial task execution is easier!

Part of the reason I believe it might be time to revisit this architecture is that modern cores are so tiny that localized heating is becoming an issue; CPUs have entered a thermal-management realm that was once the exclusive domain of power electronics. Modern sub-45nm processes actually exhibit positive thermal feedback: in effect they get so hot locally that the parasitic bipolar devices in the MOS structure (vertical PNPs, I believe) can enter thermal runaway. In a distributed-processing core we get better thermal management by controlling the processing load of each core, meaning the code-execution efficiency of each core is no longer an issue. Indirectly this means that real security-critical tasks can be allocated to their own core without any real loss of system performance.

The massive parallelism makes it difficult to create communication side channels, because you simply don't know where or when to look for the required information.

FigureitoutDecember 10, 2013 11:31 PM

Clive Robinson
in many ways very similar to Sinclairs ZX80
--I'll just write it off as another very weird coincidence, but again randomly my dad shows me an old plane vacuum-tube radio from like the 1950s tonight and it has etched in the metal around it "Sinclair 44623". Led me to this guy who's also named "Clive" lol. My dad said someone in his ham club said it's a battery or something (wasn't really paying attention). Uses something pretty cool called VOR. Anyway, I'll just chalk it to yet another "funny coincidence" where you either know too much or raise some of my "red flags". :)

Yeah, I prefer my information to be in paper format. During this past semester, one of the little Maple computer projects in a PDF was displaying characters wrong on my hilariously hopeless malware-infested computer and I couldn't see the math problems. So of course it could just as easily alter information I'm relying on. (I also found that 'Deep C Secrets' book; probably going to read it too and compare.)

Clive RobinsonDecember 11, 2013 3:26 AM

OFF Topic :

More spying-on-citizens news from the UK; this time it's on children and their hopes and dreams, by interception of their correspondence...

Apparently the Royal Mail has analysed this year's "Santa mail" and the top wish is for Lego, whilst number two is dreams of "One Direction" related merchandise...

Clive RobinsonDecember 11, 2013 4:12 AM

@ Figureitout,

Yup life abounds with coincidences (including the number of people who share my name...).

One word of warning about Sir Clive Sinclair: he's considered by some to be a "visionary", but not much positive has been heard about him since he developed a battery-powered tricycle-like vehicle back in the mid 80's and called it the "Sinclair C5". Supposedly the name was a shortened version of "Sinclair electric car version 5", which implies there was a C1, C2, etc. However, as I'm sure you are aware, the name "C4" is more frequently associated with a product that produces a lot more bang for your buck, and jokes were made in that vein at the time. It did not sell well, and one poor bloke who won one in a raffle one New Year's Eve was arrested early on New Year's Day for being "drunk in charge of a vehicle" when he tried to get it home. That said, the C5 has been modified in various ways you might approve of: one won the land speed record for electric vehicles, and another got fitted with a jet engine. And yes, there were more recent press stories of a C6 in the works. (http://news.bbc.co.uk/1/hi/magazine/3125341.stm )

Again, VOR is a coincidence: if you think back to conversations on this blog between Wael, RobertT and myself about Near Field Communications and maintaining radio secrecy, I mentioned aero navigation systems that used mixed modulation methods (of which VOR is one) to provide directional information.

Clive RobinsonDecember 11, 2013 5:59 AM

@ RobertT,

    Unfortunately as we have seen in the past most programmers are terrible at understanding parallelism and inherently code for a single core. They just don't think in terms of tasks that can be done in parallel and dividing these tasks across an array of cores. Even CPUs with very advanced interrupt structures often get used in RTOSes with simple round-robin task allocation.

I very much see programmers, or as I prefer "artisan code cutters", as the major security problem, and it's why I said we need "strongly typed programs not languages". If they cannot properly secure the data types they create themselves, then what hope do we have...

The solution I thought of was not to allow the run-of-the-mill code cutters to get at "sharp tools" they would try to run with, and instead to give them pedestrian high-level scripting which they shouldn't trip with. The very few capable of writing secure code would write the applets, and the rest would script them together as required into applications fairly rapidly. The applets would be written in a way that produces signatures a hypervisor could monitor and check for abnormal behaviour.
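The monitoring half of that idea can be sketched very simply. This is my own toy illustration, and the applet names and operation sets are entirely hypothetical: each vetted applet ships with a behavioural signature (here reduced to just its allowed operation set), and a hypervisor-like monitor flags anything observed outside it.

```python
# Toy sketch of applet behavioural signatures checked by a monitor.
# Applet names and their allowed operations are hypothetical examples.
SIGNATURES = {
    "fetch_report": {"read_db", "format"},
    "mail_report":  {"format", "send_mail"},
}

def monitor(applet, observed_ops):
    """Return the operations the applet performed outside its signature.
    An empty list means the behaviour looks normal."""
    allowed = SIGNATURES[applet]
    return sorted(set(observed_ops) - allowed)

if __name__ == "__main__":
    print(monitor("fetch_report", ["read_db", "format"]))     # normal: []
    print(monitor("fetch_report", ["read_db", "send_mail"]))  # flagged op
```

A real hypervisor would of course check richer signatures (sequences, rates, arguments) rather than bare operation sets, but the shape of the check is the same.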

The thing is, "scripting" is about as "high level" in programming as you can get and still do useful work, and it usually produces rapid development. Just about every study carried out shows that the number of bugs is related more to the number of lines of code than to the functionality being built. Thus the higher the level, the fewer lines of code to do any given job, so (hopefully) the fewer bugs. Studies using Lisp suggest that it currently beats Java, C++, Visual Basic and other .NET languages hands down, not just for speed of development but on the fewer-bugs and ease-of-maintenance side as well. Now, I appreciate that as there are so few Lisp programmers it might reflect more on their abilities than on Lisp itself, but...

Also, few new code cutters appear to know that Rapid Application Development (RAD), Agile, Patterns and refactoring all trace back to "Unix scripting", which in turn picked up on methods (rapid incremental testing) used by NASA in its Mercury Project, which in turn was based on work in the 50's by the New York Telephone Co. (which also later spawned RAD).

So the idea has a reasonable pedigree behind it. The downside is it supposedly produces an inefficient and slow runtime that then has to be optimised to be more efficient. However, history shows that any attempt at optimisation generally fails because of the lengthy time involved, and because the power of hardware effectively doubles in a relatively short time and end-user needs change in a comparably short time. A release cycle might have a six-month or less "useful life", which means that to be worthwhile any optimisation needs to take no more than a couple of weeks.

As many sysadmins will confirm, they can write a reasonable script and test it in about 15% of the time it would take them just to type in the text of a C program of equivalent functionality. Which is why I suspect the real sell to management is "time to market" and "programmer productivity", both of which should improve considerably for competent practitioners.

However, whilst this addresses the run-of-the-mill single-threaded design process, multithreaded and parallel programming require different thought processes and also open up the possibility of increased security problems.

As far as I can see, in many respects this is very much an open academic research area currently. But if the applets represent minimal execution threads which run effectively on a single CPU of a multi-CPU chip, the problem moves more towards memory usage and communications, which is currently a better understood area.

Clive RobinsonDecember 11, 2013 7:23 AM

OFF Topic :

It would appear that the FreeBSD developers have decided not to use the raw output of Intel's and VIA's on-chip TRNGs, and instead to feed it through Yarrow before making it available.

http://arstechnica.com/security/2013/12/we-cannot-trust-intel-and-vias-chip-based-crypto-freebsd-developers-say/

Personally I think this is a sensible thing to do, and I have said so several times in the past.

The last time on this blog was in connection with Linus "throwing his toys out of the pram" over people suggesting something similar for Linux.

I wonder what Linus's comments will be now if somebody raises it again ;-)
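The principle behind the FreeBSD decision can be sketched like this. To be clear, the sketch below is not Yarrow itself (which maintains multiple pools with reseed control); it only illustrates the stance: never hand consumers raw hardware RNG bytes, treat them as one entropy input, and only emit output through a hash-based pool.

```python
import hashlib
import os

class WhitenedRNG:
    """Sketch of 'don't trust the raw TRNG': hardware output is only an
    *input* to a hash pool, and consumers only ever see hashed output.
    This illustrates the principle, not the actual Yarrow design."""

    def __init__(self):
        self.pool = hashlib.sha256()

    def feed(self, raw):
        """Mix raw bytes (e.g. from an on-chip TRNG) into the pool."""
        self.pool.update(raw)

    def read(self, n):
        """Emit n bytes derived from the pool, never the raw input."""
        out = b""
        while len(out) < n:
            digest = self.pool.digest()
            out += digest
            self.pool.update(digest)   # ratchet the pool state forward
        return out[:n]

if __name__ == "__main__":
    rng = WhitenedRNG()
    rng.feed(os.urandom(32))           # stand-in for the on-chip TRNG
    print(rng.read(16).hex())
```

Even if the hardware source were subtly biased or backdoored, its output never reaches a consumer directly; an attacker would have to predict the whole pool state, and other entropy sources can be mixed in alongside it.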

Clive RobinsonDecember 11, 2013 7:40 AM

OFF Topic :

Are you sitting comfortably ?

Probably not... it's been reasoned for quite some time now that "sitting upright like the boss wants" is neither comfortable nor healthy. Past medical advice was to get up and move around frequently, but for many that would cause "boss freakout", with accusations of not working but "sloping off".

Well, recent research suggests that reclining at around 135 degrees (about the same as a reclined seat on an aircraft) is probably best for your back.

http://news.bbc.co.uk/1/hi/health/6187080.stm

So, having solved that problem, we now need a solution to the stress caused by a "boss freakout"...

Nick PDecember 11, 2013 11:49 AM

@ RobertT

re parallel architectures & security

"We've talked about massively parallel processing architectures in the past and I still think this is the best solution. Imagine a chip with 1000 independent CPU's all working on a problem."

Reminds me of Thinking Machines Corp.'s CM-1, which had 65,000+ independent processing units (each quite simple). The Massively Parallel Processing architecture was used in scientific fields like genetic programming. It saw much less use than machines with a small number of high-performance, fully featured cores. I think that despite multicore, the market would choose against such a design today too. Plus, having done MPP programming, I can assure you it's a challenge for the software developer even *without* security requirements on top. ;)

However, don't let the mainstream fool you: there are people working on stuff like that (parallel, though not 65,000 cores). The recent chip discussion had me digging through papers and commercial product specs, and there were quite a few designs for multicore, "manycore," etc. There have also been programming languages such as Cilk, X10 and the brilliant ParaSail that let programmers transition easily to such architectures. Finally, there's software diversity and obfuscation research similar to your obfuscations. I could easily see some of these combined at the compiler level, where a parallel-language program is split up across cores in a complex, yet efficient, way.

While fun to think about, I think it's the wrong route for most systems. Our current problems stem from the design of chips making it easy to steal data, inject code or shoot oneself in the foot when coding. Small tweaks to the existing architecture can greatly improve the resulting reliability, security, etc.; radically different designs improve it even more. A few are in my upcoming Friday paper release, which targets bottom-up secure or safer architectures. Older designs that attempted this include the capability architectures, Burroughs' tagged processor plus type-safe Algol for systems programming, Intel's i432 (which the market killed), and the original AS/400 object architecture.

So, it's been done before without massive processors or complexity. As I say of language design, doing the good stuff should be incredibly easy and doing the bad stuff incredibly hard. Such choices incentivize developers to act right [most of the time]. Current architectures provide, let's use Bruce's term, "perverse incentives." Change the architectures to change the incentives to change the behavior.

BryanDecember 11, 2013 1:54 PM

@Adjuvant on Lisp.
Code is data, data is code. No can do. BTW, I'm thinking of using a language more like OCaml. I'll be getting together with some of my more computer-language-oriented friends to discuss that bit. The UI idea has received good reviews from the couple of people I've shown it to. It's possibly a full step or two beyond what's out there now. I really need to hand my concepts off to a few UI gurus and have them run with them. I'll admit I'm really coming in from left field on this issue. My UI really requires a high degree of shared data, and security requires a high degree of protection for that shared data. They can be done. I've seen OS models that would work.

@CallMeLateForSupper

This part of the letter particularly irks me: "...this summer’s revelations highlighted the urgent need to reform government surveillance practices..." Guys and gal, summer was *months* ago; did it really take this long to draft a letter?

It took that long for the accountants to say this is really hurting the bottom lines.

@Adjuvant

But intuitively, these solutions seem less than satisfactory to me. It is my understanding (via Wikipedia et al.) that in the past BIOS chips were produced via mask-programmed ROM that was not alterable after manufacture. Applying KISS, wouldn't the simplest and most fail-safe mitigation be a return to true ROM -- perhaps with socketed chips that could be replaced in order to "upgrade"? Is there some major technical or economic hurdle to this idea, or is it simply a design choice that has become outmoded for reasons of convenience and remains so because of industry inertia?
Time and effort to upgrade the BIOS. Also, do you want static-laden fingers probing around on motherboards?

@RobertT

Indirectly this means that real security critical tasks can be allocated to their own core without any real loss of system performance.
It is my plan to have the security processes all on their own cores, each with only the permissions it needs to operate. A process drawing graphics primitives can't read the keyboard IO data. The MMU won't allow it. The Achilles' heel is unfortunately those supervisory processes, and I haven't seen a way around that yet. They need to be highly audited, and only loadable from a trusted source. The parts of the chip that handle data and code also need to be audited for easter eggs that allow for privilege escalation.
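The per-core permission idea above can be sketched as a toy model (all names are mine, not from any real MMU): a supervisor-owned table maps each process to the memory regions and operations it is allowed, and the graphics process simply has no entry for the keyboard IO region.

```python
# Toy sketch of per-core permissions: a supervisor-held table plays the
# role of the MMU; a process can only touch regions it was granted.

class MMUTable:
    def __init__(self):
        self._perms = {}  # (process, region) -> set of allowed operations

    def grant(self, process, region, ops):
        self._perms[(process, region)] = set(ops)

    def check(self, process, region, op):
        return op in self._perms.get((process, region), set())

mmu = MMUTable()
mmu.grant("gui", "framebuffer", {"read", "write"})
mmu.grant("input", "keyboard_io", {"read"})

# The GUI core can draw, but any attempt to read keyboard IO is denied.
assert mmu.check("gui", "framebuffer", "write")
assert not mmu.check("gui", "keyboard_io", "read")
```

In hardware the table would live in the MMU and be writable only by the supervisory process, which is exactly the part that then needs heavy auditing.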

@Clive Robinson

The solution I thought of was not to allow the run-of-the-mill code cutters to get at "sharp tools" they would try to run with, and instead give them pedestrian high-level scripting which they should not trip with, where those very few capable of writing secure code would write the applets that the rest would script together as required into applications fairly rapidly. The applets would be written in a way where they would produce signatures that a hypervisor could monitor and check for abnormal behaviour.
How about a full system made to make this the default and easy? ;)

@Clive Robinson

Also, few new code cutters appear to know that Rapid Application Development (RAD), Agile, Patterns and refactoring all trace back to "Unix Scripting", which in turn picked up on methods (rapid incremental testing) used by NASA in its Mercury Project, which in turn was based on work in the '50s by the New York Telephone Co. (which also later spawned RAD).
I need to read up more on current RAD systems. I myself have done a lot of code-generation-type development over the years and make heavy use of scripting. My plan has been to make the sharing of data between processes working together on a shared goal easier, but also make it secure too. Over the years I've had many ideas, but they all banged up against OS and hardware limitations.

@Clive Robinson

As far as I can see, in many respects this is very much an open academic research area currently. But if the applets represent minimal execution threads which run effectively on a single CPU on a multi-CPU chip, the problem moves more towards memory usage and communications, which is currently a better understood area.
:) Yeah, I'm not sure how to solve the parallelism issue of the core logic, but I do know that there are lots of points in the code path where work can be handed off to another process, and processing on the core path continues while the other process handles some sub-task like displaying a bit of the data. In PCs, a lot of the GUI drawing is already handed off to a graphics processor. I can see doing the hand-off at a much higher level, and there even being multiple levels of hand-off. Those can be embedded in the objects used to write the apps and servers.

On massively parallel. Right now I'm thinking of maybe 100 to 500 cores on a chip. Each core has a CPU, MMU, code cache, and local data cache. I'm thinking the local data cache is only for data in memory spaces that aren't shared. That means there is no need for cache coherency on it. Shared memory spaces will get stored in the lower-level caches. To help keep the cores simple, a CPU/MMU/cache core is only able to handle one process at a time. There is no instruction restart. If a process blocks on a memory fault, like a page swapped out, it is halted until the supervisory process loads the memory, updates the MMU tables and unhalts the stuck process. For a supervisory process, a page fault is a different issue and handled differently. To keep the execution speed up, some pipelining will need to be done. That makes instruction restart much harder, so I ditched it. There may be a way to save a minimal state so that instruction restart is feasible. That would allow for fewer cores, and lower current draw in the drivers in the MMU and cache system.
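The halt-on-fault scheme above can be modelled in a few lines (my own construction, just to show the control flow): a core that faults simply stops, and the supervisory process fixes the mapping and unhalts it, so the core itself never needs instruction-restart machinery.

```python
# Toy model of halt-on-fault: no instruction restart inside the core;
# the supervisor swaps the page in, fixes the "MMU", and unhalts.

class Core:
    def __init__(self, mapped_pages):
        self.mapped = set(mapped_pages)
        self.halted = False
        self.pending = None  # faulting page awaiting the supervisor

    def access(self, page):
        if page not in self.mapped:
            self.halted = True   # core just stops; no saved pipeline state
            self.pending = page
            return None
        return f"data@{page}"

class Supervisor:
    def service(self, core):
        if core.halted:
            core.mapped.add(core.pending)  # "swap in" the page
            core.pending = None
            core.halted = False            # re-issue from the same point

core = Core(mapped_pages={0, 1})
assert core.access(7) is None and core.halted   # fault: core halts
Supervisor().service(core)
assert core.access(7) == "data@7"               # retried access succeeds
```

The trade-off the comment describes is visible here: the halted core is idle until serviced, which is acceptable only because there are hundreds of cheap cores.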

I need to read up on interprocess communication methods for hardware. I've been toying with actual hardware support for communication queues.

Nick PDecember 11, 2013 4:57 PM

@ Bryan

"Code is data, data is code. No can do."

That's true for a PC too, though. Otherwise data couldn't become code through a buffer overflow, could it? ;)

Anyway, secure confinement was already done in LISP. So, it's definitely a possibility. Especially if the core is verified (as in VLISP) or runs on a processor made for LISP (esp. w/ built-in GC & memory management). Plus, LISP's turning data into code can always be restricted by the environment: there's no reason to use a fully flexible LISP implementation for a highly secure use case. That's kind of crazy, actually.

"It is my plan to have the security processes all on their own cores and also having only the permissions they need to operate. A process drawing graphics primitives, can't read the keyboard IO data. The MMU won't allow it. "

Capability architectures. You've mentioned many problems that they solved before the '90s. Enforcing types or POLA at the lowest levels of the system is their main strength. Starting with that might make whatever you're doing easier (or help in another project). OS examples were KeyKOS (esp. the Factory pattern) and EROS. It was applied to UIs in HP's Polaris and Combex work IIRC. CHERI at Cambridge already has a prototype for their capability processor supporting isolation and legacy-software use cases.

"On massively parallel. Right now I'm thinking of maybe 100 to 500 cores on a chip. Each core has a CPU, MMU, code cache, and local data cache."

Agarwal at MIT went in that direction with the RAW Workstation project. Topped out at around 64 cores with 100 on the way. Tilera's chips have been in commercial use for a while now. Their interconnect scheme might be just what you need (or their chips). There are also the so-called "network on a chip" designs that might help, which you can find with Google, although the papers might be paywalled.

I told RobertT a while back that I think part of our solution is in Massively Parallel Processing designs, and I can tell his mind is brewing with stuff. Yours too. I like that we're all going in different directions, because such diversity leads to the best results.

One warning I'll give you about this stuff: execution units on separate chips with separate components are almost always more secure than those on the same chip with shared components. The reason is covert channels. The more is shared and integrated, the more there are. This is why when I brought it up to RobertT I focused on MPP or similar designs where there's a shared interconnect or memory of some sort, but there are also independent execution units, components and memories. Physical isolation needs to be built in to some degree.

My current MPP-based security scheme design

1. Address space, tagged memory or both at hardware level.

2. MPP architecture of many (dozens to thousands) nodes connected over high speed interconnect and distributed shared memory.

3. Each node has one or more execution units.

4. Interconnect interface is standardized to the point that each node can be whatever type of chip is necessary (RISC processor, FPGA, Java chip, etc.).

5. Each node's interconnect chip has builtin MMU that restricts what shared memory the chip can read or write. The MMU is controlled by the master controller of the system. The compute node can't override it or even know what its rules say. This is similar to MAC networking in Orange Book days & Clive's prison concept.

6. Nodes include general purpose COTS, general purpose secure (eg. tagged), single purpose COTS, and single purpose secure. The COTS chips will tend to be faster/cheaper.

7. Each node must include a trusted boot feature that can't be overwritten by software and gets the system into a known state where it can receive software/commands/data.

8. System uses recovery based architecture whereby a service node regularly restarts, tests, etc. compute nodes and tries to spot problems before they become a security risk.
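Point 5 of the list above is the security-critical one, and can be sketched as follows (my own toy construction, not from any existing system): the interconnect MMU holds rules only the master controller can set, and a compute node can neither change nor even read them.

```python
# Sketch of an interconnect-level MMU: rules are set only with the master
# controller's token; nodes just issue reads/writes and get mediated.

class InterconnectMMU:
    def __init__(self, master_token):
        self._rules = {}          # node -> set of writable regions
        self._master = master_token

    def set_rule(self, token, node, regions):
        if token is not self._master:
            raise PermissionError("only the master controller sets rules")
        self._rules[node] = set(regions)

    def write(self, node, region, value, memory):
        if region not in self._rules.get(node, set()):
            raise PermissionError(f"{node} may not write {region}")
        memory[region] = value

MASTER = object()                 # capability-like token held by the master
mmu = InterconnectMMU(MASTER)
mmu.set_rule(MASTER, "nodeA", {"buf0"})

shared = {}
mmu.write("nodeA", "buf0", b"ok", shared)       # allowed
try:
    mmu.write("nodeA", "secret", b"x", shared)  # denied, node never sees why
except PermissionError:
    pass
```

Because the rules live outside the node, a compromised node can probe but never widen its own access, which is the MAC-networking / prison property the comment refers to.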

There are quite a few ways to use a system with that architecture and primitives:

1. VM's running on different isolated nodes with mediated, high speed communication between them and availability.

2. A different chip per function in a single image software system. (I've even considered a dedicated chip for secure IO and driver handling.)

3. A combination of VM's and isolated applications constituting a larger system.

4. Intrusion detection is just a node with read access to certain parts of memory.

5. Integrity is easier to achieve if a separate node with built-in integrity protection handles file systems or database storage.

6. IO can be on dedicated nodes with mediated access to shared memory. The speed of the shared memory will make mediation bearable.

7. Smart compilers for a language with distributed computation in mind (eg E) can automatically split an app over many cores and nodes with POLA applied to each.

8. Low cost, low power chips can be used for functions that don't need so much. Having fewer heavyweight chips running will reduce operating costs and possibly risk.

9. As it has integrity protection of bootware, the system can be used to run untrusted COTS software for HPC applications without threat of permanent harm to system state. Any critical stuff could be unplugged (or MMU restricted) in such an event.

10. The TCB of the system can be adjusted for various tradeoffs. At a minimum, the shared memory enforcement and trusted boot must work correctly. From there, you secure just what you need.

Lot of possibilities. My architecture is in the alpha state, obviously, with plenty of room for problems or improvements. I'm using this paper to see what other systems did with interconnects and nodes. Most of my focus right now, though, is on simple modifications to ISAs to boost security and the Friday paper release. Then, I'll have time to work on my combination of MPP, tagged architecture and assured shared memory.

Btw, is there anything any of you like about MPP for POLA concept or use of managed NUMA memories? I've looked and can't see anyone in industry attempting this approach to INFOSEC. Seems original so far as I can tell.

Dirk PraetDecember 11, 2013 6:27 PM

@ Clive

Which is why I started looking at ways to mitigate subverted technology and thus still be able to build secure systems from it.

I second that emotion. Although I have been carefully following the secure hardware threads discussed by yourself, Nick P., RobertT, Wael and others - learning quite a bit in the process - I am not sure to what extent it is feasible to build such systems absent serious venture capital. And even if we could get there, it would need to be built outside the US, because in the current legal context anything with a label "made in the US" equals "insecure by law".

In a scenario where we get to a blueprint everyone agrees on, the only people able to build it would be individuals with a decent background in hardware and electronics, or loosely-knit groups with a particular interest in such systems like, say, the CCC in Hannover, Germany. Not to mention IC communities.

That's why I think mitigating subverted technology (proven or suspected) makes sense, as in the FreeBSD guys throwing out raw Intel/VIA TRNG output, and I would be very interested in hearing from the subject matter experts to which extent it would be possible to do the same with other hardware components. As to Linus, he's a stubborn character and will probably call the FreeBSD developers a bunch of clueless morons, but not being an idiot, I suppose he too will catch up in due time.

Nick PDecember 11, 2013 7:35 PM

@ Dirk

Custom hardware certainly might be prohibitive as you pointed out. It's why my efforts are trying to minimize changes or expensive developments. It's also why I'm pushing *very* strongly for people to use what's already been done rather than explore new solutions to the same [solved] problem. It will reduce work to a minimum.

However, Wirth's Oberon project and Moore's Forth show that a ground up redesign with simplicity, maintainability, performance, AND LOW COST are possible. I mean, the Wirth projects (esp Lilith) designed a type safe systems language, a processor supporting it, a workstation using it, a compiler, an OS, useful tools, UI, and so on usually with a small number of people over several years. We have better tools today, esp the FPGA's and IDE's. The main difference is that it will take more people and require knowledgeable security engineers on each project.

" the only people able to build it would be individuals with a decent background in hardware and electronics, or loosely-knit groups with a particular interest in such systems like, say, the CCC in Hannover, Germany. Not to mention IC communities."

Mostly true. But, as I pointed out above, don't forget the academics. The college kids and industry researchers are coming up with more solid work in security tech than about any other group. Almost every awesome piece of work I've listed comes from these types of people. Hopefully the Snowden leaks will only make them more motivated.

FigureitoutDecember 11, 2013 8:02 PM

This is really cool...Hope it turns out to be somewhat of a success...If anything, I want a verifiable baseline of trust like the TCB so I can cling to it.
Nick P
although might be paywalled.
--If anyone needs papers, tell me what to search and I can put my free academic access to some papers to use. Surely others out there can chip in too (I love reading blog posts here, Bruce says the article is behind a paywall and maybe 3-5 posts down someone says, "Article for free here").

Dirk PraetDecember 11, 2013 9:03 PM

@ Nick P

However, Wirth's Oberon project and Moore's Forth show that a ground up redesign with simplicity, maintainability, performance, AND LOW COST are possible.

I really should take some time to dig into this kind of stuff to learn more about it, but I'm currently spending most of my spare CPU cycles on learning Japanese, which - even with my linguistic skills - is a bit of a challenge, especially the different writing systems 8-) .

name.withheld.for.obvious.reasonsDecember 11, 2013 11:43 PM

Senator Whitehouse (MD) said that since we were comfortable with commercial big data--this is reason enough to form the basis for what the NSA is doing. How out of touch are these morons???

PetrobrasDecember 12, 2013 4:13 AM

@Adjuvant: "If I'm going to be "moving in" here, I guess I'd better be more thorough in keeping up."

The design of Bruce's blog does not make it easy to keep up. But it makes it difficult for third parties to tamper with it.

I very much wish it had an interface like the flrn news reader (think of precompiled web pages being available; when someone posts a comment, it would trigger a refresh of all ascendant messages, and the browser history would nicely change the color of already-visited links).

It would, though, need better structuring of quote-sourcing, else it would need to guess a lot, like http://meta.stackoverflow.com/questions/82454/is-there-a-new-comment-notification-system does to notify of new comments; what's more, nicknames often get shortened or misspelled.

Without Bruce's cooperation, but with some time available, it is still possible to set up a separate website that interprets the comment feed as I detailed just above.

@Bryan: "A process drawing graphics primitives, can't read the keyboard IO data. The MMU won't allow it."

Except if you disconnect the keyboard from the ISA/PCI bus and instead connect it to the single processor core which is in charge of the keyboard. I prefer this type of physical isolation to isolation enforced by a kernel, as detailed in the link http://homes.cs.washington.edu/~levy/capabook/ from @Nick P.

https://www.schneier.com/blog/archives/2012/11/friday_squid_bl_353.html#c1023604

@Nick P: "Parasail that let programmers transition easily to such architectures."

(1) Parasail is GPLv3+ according to its sources parasail_sources_5_1.zip (md5sum c21a0443477d14abebf68dc10c6b59fb sha512sum 98875f6032973333967d8ce6cab33c82b27b07d9a9abaf4db813f9553eb0ebf28d6b82d0d8070fde520a39a54f0f55004fba69d54f79b42101a3f39e00b44ec2; link requiring javascript: https://docs.google.com/file/d/0B6Vq5QaY4U7uQU5MZjFXZklmSUk) but they oddly released proprietary binaries https://groups.google.com/forum/#!topic/parasail-programming-language/8KJgfq3P7cQ (binaries hosted at https://docs.google.com/file/d/0B6Vq5QaY4U7uXy1fZzVvVmgyVVk md5sum 1158f3e183d0aaab3131659ee269c945 sha512sum 88e8a2573eb0dee73b40d3c89025ebd575dfe995d49b4e126e05932d0d0d3c56daa7db74138a6e62684979b707e69298b3acf65f4c166e3eff1722690fe10a51).

(2) I wish the parallel directive between instructions (||) would be implicit in Parasail.

(3) I wish Parasail would be available without a Google'account and without Javascript.

Clive RobinsonDecember 12, 2013 7:54 AM

@ Nick P,

    5. Each node's interconnect chip has builtin MMU that restricts what shared memory the chip can read or write. The MMU is controlled by the master controller of the system. The compute node can't override it or even know what its rules say.

This needs a bit of amplification because of a number of issues.

Firstly, there is the distance/speed-of-propagation issue.

A signal is bound by two "light cones", one at the point of transmission and one at the point of reception. The effect causes a "dead zone", the size of which is proportionate to the distance and propagation speed, and which limits the meaningful speed of processing. One major issue @RobertT can answer better than I can is the reduction in propagation speed in chip traces. In the macro world, transmission line speed ranges from around 0.7C to 0.1C depending on the dielectric effect and impedance geometry (coax tending to be faster than twisted pair).

The consequence of this is that the smaller a functional unit is, the faster it can operate; thus the simplicity gained by functional decomposition not only helps security by reducing complexity, it also allows an increase in operating speed. The downside is the "dead zones", which can open up new side channels (delay channels, which are arguably the inverse of timing channels not just mathematically but functionally as well). In our macro world we have seen "dead zone" exploitation by the likes of the NSA, by simply being closer and thus responding faster than a more distant resource, and by stock market trading, where trading systems are as closely coupled as possible.

One such resource that should be closely coupled to the CPU is memory; we see this in the use not just of registers but also of multi-level caches. However, caches are self-limiting, in that the more complex the cache algorithm, the larger and slower the cache logic becomes. Thus it's faster to have local registers/stacks/memory when the number is small than it is to have equivalent-size caches; hence we actually see a grading of dual-port registers, single-port registers, stacks/memory files, L1 cache, L2 cache... local SRAM, local DRAM... out eventually to distant network storage. There is a corresponding grading of access size: bit/byte/word in registers, word/cell in stacks and memory files, out to blocks of memory of increasing size the more distant the memory is from the CPU. Likewise, we see a greater increase in access-algorithm cost with distance. Each of these has a "sweet spot" at which point the overall cost is minimised for a given memory type, and which moves with the technology involved.

The same is true for memory access as well: dedicated registers are less expensive than general-purpose registers and will often be faster. This is because they are only connected to one bus at input or output, and thus have not just less loading but usually less signal trace distance. But more importantly, they also reduce microcode logic, as they have very much less RTL logic and don't give rise to bus-locking issues.

This spreads out with distance, and one problem is the choice between segregation, dedicated paging, segmentation, MMU and memory function tagging. This generally gives an inverse relationship between how fine-grained the control is and how distant from the CPU the control logic is. For instance, the difference between Harvard and von Neumann architectures is an implicit function of the CPU architecture, and thus is segregation inside the basic CPU logic; it has many speed benefits, which is why these days "von Neumann is cobbled onto Harvard" almost at the chip pins.

Dedicated paging is an extension of this, where you break the memory down not just into data and code but other types such as I/O and mapping for the likes of interrupt vectors; it's a matter of architecture design as to whether they have their own individual dedicated page memory maps or appear within a more general segmented memory map. Intel chose to do both, with I/O having one memory map and vector mapping appearing at a fixed point within the main memory map, which was also broken down by segmentation into code, data, heap, and stack areas. Dedicated paging is similar to segregation and forms part of the internal CPU logic, operating at that speed. Segmentation, however, is like a "poor man's" MMU: whilst it is often part of the CPU, like the von Neumann side it tends to be cobbled on at the periphery and is thus considerably slower.

A full MMU represents a considerable investment in logic and can involve more gates than the CPU; it is thus a bottleneck on the external bus of the CPU and is slow. However, it's not as slow as tagging, which in effect is a form of extension to a memory word, where each memory cell would consist of, say, a 32-bit word and a 4-bit tag which are colocated at whatever distance they are from the CPU.

After an ongoing thought process, I concluded that an optimal design for a chip with a significant number of cores would be one where the CPU is strongly segregated and, aside from the general-purpose dual-port registers, has four stacks/files/caches of 64 bits each internal to the CPU, plus local memory of around 32K that would be segmented, with access to general/shared memory and I/O streams via a simplified MMU. Both the segmentation registers and the MMU would be under the control of the hypervisor, and any memory fault would halt the core for the hypervisor to investigate. I decided that all I/O should be abstracted via mediated "letter box" buffers similar to "streams", but rather than being limited to "char I/O" they would allow for "block I/O", where the block could hold either arrays of data or data structures of simplified objects.
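The mediated "letter box" buffer described above can be sketched like this (all names are my own, purely illustrative): a producer may only deposit whole blocks, a consumer may only collect them, and blocks are immutable in transit, so the hypervisor retains full mediation of the channel.

```python
# Sketch of a mediated "letter box": block-granular, one-way hand-off,
# with blocks frozen in transit so neither side can mutate shared state.

from collections import deque

class LetterBox:
    def __init__(self, capacity=4):
        self._blocks = deque()
        self._capacity = capacity

    def deposit(self, block):           # invoked on behalf of the producer
        if len(self._blocks) >= self._capacity:
            return False                # back-pressure; hypervisor decides what next
        self._blocks.append(tuple(block))  # immutable copy in transit
        return True

    def collect(self):                  # invoked on behalf of the consumer
        return self._blocks.popleft() if self._blocks else None

box = LetterBox()
assert box.deposit([1, 2, 3])
assert box.collect() == (1, 2, 3)
assert box.collect() is None            # empty box: consumer just waits
```

In the real design the deposit/collect calls would be the only I/O primitives a task CPU sees, which is what makes the choke-point monitoring mentioned later possible.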

This brings up the issue of strong typing and casting of data types/objects, and is a thorny subject. If it is to be done at the CPU level, then the metadata of the object would be transferred via the stream control channel, allowing the data to be strongly typed by a mechanism in the CPU logic [1] that is set up by the hypervisor. The issue of whether typing and casting object data from one type/object to another should be a policy enforced within the CPU logic or elsewhere is one I'm still giving thought to. Whilst I want both the typing and casting to be runtime-enforced by the CPU logic, thus enabling faults/exceptions to be trapped, the CPU halted and the hypervisor to investigate, I'm quite aware of the significant effect it can/will have on the number of gates and the attendant issues in the CPU.

Mediating memory / I/O streams puts a lot of work onto the hypervisor; as I've previously indicated, it needs to be a fully defined limited state machine, not a Turing engine. However, it needs only serial connections to the "task CPU" to set up segment registers and I/O metadata, and to examine the CPU when halted. The task-CPU hypervisor is effectively a semi-autonomous head to a backend hypervisor system which controls the MMU and other aspects, such as investigating faults and exceptions, which means the backend of the hypervisor needs to be a CPU in its own right. Thus, whilst the head can be tightly integrated with the task CPU, in effect forming part of it (think JTAG on steroids ;-), the backend needs to be part of a hierarchy that also controls device-level I/O.

An I/O CPU would be effectively identical to a task CPU except for the addition of the connection to the I/O device switched matrix. Whilst the I/O CPU setup and matrix routing would be done by the hypervisor, the I/O CPU would run like a task CPU in an unprivileged manner, and thus would embody the design ideas behind "user mode I/O". However, as the I/O data is mediated by the hypervisor, it can also be monitored as a "choke point", thus allowing additional segregation benefits as found in extended TEMPEST design rules.

I will for now leave out the issue of DMA; from my point of view it is a method long, long past its "best before date", as modern chip technology should be fast enough that it is no longer really required. However, if it is required, it should really be into "controlled data" not "general purpose" memory, to aid in preventing "data as code" issues.

[1] One thing I've not mentioned is "register tagging", as it is still very much a "thought in progress". Whilst general/shared memory tagging is a non-starter for various reasons, a different set of rules applies to the general-purpose CPU registers. As they are general purpose they need to be tagged, but in a more complex way than is normally portrayed. This is because they are --or should be-- the only place where data can be cast from one type/object to another, and the place where strong type enforcement should be carried out.
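One way to read the register-tagging footnote is as follows (my own thought experiment, not the author's design): each general-purpose register carries a type tag, ordinary moves must preserve the tag, and an explicit cast is the only sanctioned place where a tag may change, so any implicit type confusion traps.

```python
# Thought-experiment model of tagged registers: moves enforce tags,
# and only an explicit cast instruction may change one.

class TaggedRegister:
    def __init__(self):
        self.value, self.tag = 0, "untyped"

    def load(self, value, tag):
        self.value, self.tag = value, tag

    def move_from(self, other):
        if self.tag not in ("untyped", other.tag):
            raise TypeError(f"trap: {other.tag} into {self.tag} register")
        self.value, self.tag = other.value, other.tag

    def cast(self, new_tag):            # the one sanctioned cast point
        self.tag = new_tag

r0, r1 = TaggedRegister(), TaggedRegister()
r0.load(0x2000, "pointer")
r1.load(42, "integer")
try:
    r1.move_from(r0)                    # pointer into integer register: trap
except TypeError:
    pass
r0.cast("integer")                      # explicit, auditable cast
r1.move_from(r0)                        # now permitted
assert r1.value == 0x2000
```

The appeal is that the trap becomes a hypervisor-visible event, matching the halt-and-investigate model described earlier in the comment.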

Clive RobinsonDecember 12, 2013 8:47 AM

OFF Topic :

I occasionally mention that I read the UK satirical magazine "Private Eye"; well, the latest issue is 1355, and its first item will probably be of interest to readers of this blog.

It involves a "hidden backhander" to the UK Conservative Party, currently in power as part of the "coalition", from the Chinese communications manufacturer Huawei, which has been effectively banned in the US, Australia and several other countries, but not in the UK.

I hope the "editor" will not object to me typing in the article (they don't put them up online at http://www.private-eye.co.uk).

While David Cameron was in China having his headline-grabbing love-in, the Eye uncovered evidence of an under-the-radar payment to his party from one of China's most controversial companies.

The technology firm Huawei --banned in Australia and the US amid security concerns over its perceived closeness to China's People's Liberation Army-- made an unrecorded payment of £8,600 to the Tories to attend a networking dinner at the party conference in October.

The payment --which entitled Huawei to rub shoulders "with senior members of the frontbench teams"-- was listed with the Electoral Commission not under its own name but under that of its lobbying company, MHP Communications. The lobbyists conceded to the Eye that this was an "administrative error" and said they would ask the Commission to amend the record.

In Australia, Huawei is banned from working on the broadband network because of "advice from the national security agencies", and in the US last year the House Intelligence Committee said the company should be kept away from mergers with major US firms because it was a "security risk".

Huawei denies this, and will have been reassured by Cameron's on-message performance in China --that Britain is open for business and up for sale-- as it is investing $2bn in the UK over five years.

Nick PDecember 12, 2013 11:34 AM

@ Dirk Praet

"I really should take some time to dig into this kind of stuff to learn more about it,"

You can get a quick overview of what they accomplished here if you just read the intro, Lilith, and Modula sections.

This lengthier tribute to Wirth tells a lot about his style, accomplishments, and the effects it had. I think his design approach combined with modern tools/tech & security engineering will be the best method to solve our current problems. He's already demonstrated it works on hardware, low-level software and high level software.

@ Petrobras

"Except if you disconnect the keyboard from the ISA/PCI bus and instead connect it to the only processor core which is in charge of keyboard. "

I'm guessing you didn't read the link. Many of the capability architectures had dedicated processors or even separate machines to do IO. The main chip would interface with it in a certain way. There are enough advantages to that approach that certain secure systems of mine offloaded filesystems/networking to other machines, too.

Even in the integrated designs, the capabilities were enforced in hardware on the chip. Also, the A1-class SCOMP system used a dedicated IO processor that could impose access restrictions (a la IOMMU in 1985).

Connecting it directly to the processor has some benefit, too, though. It's not a bad idea for the keyboard as long as you eliminate the timing channel it creates.

Nick PDecember 12, 2013 11:00 PM

@ Dirk Praet

I forgot to mention good luck on the Japanese. Someone I know spent some time learning it and thought it was a rewarding experience. He said it forces a person to think differently. And it was fun when he got the hang of it.

Plus you'll be able to make better use of those Japanese chips and computers that NSA hopefully has no backdoor in. ;)

BryanDecember 12, 2013 11:15 PM

On Parasail... I'm not sure it would work for my thoughts on the programming system. Its parallelism is found at the compilation unit, which is fine and could be used in an application or even parts of the system libraries. I'm looking more at a whole system where tasks are broken down into function areas: network, DB, disk, GUI, task core logic, etc. Each of these function areas can be assigned a core to run on, and when the task core logic wants to display a text entry field, it sends a message to the GUI process, which draws the field[1] and opens up an editor widget[3] to handle the editing of the contents of the field. When the "close" button is clicked, each widget sends a message back with its field's contents. Some widgets may be continuously reporting, like a volume slider. This makes for a totally different type of parallelism than Parasail is designed for. I'm looking at languages more like Standard ML, OCaml, and BitC as better candidates for the main system programming language. BitC, having a polymorphic type system but also being designed for systems work, looks like a good candidate. I need to look at it more. I don't think objects are needed, but I do think a polymorphic type system is a requirement, as otherwise chasing details in the GUI or a message stream set[2] as they change may be a nightmare. When a data element in a message stream set is altered, the other process is alerted to the change. That can be on an individual element, or a group of them. I envision a primitive like a transaction block for handling group updates, so there isn't an inconsistent state seen by the other process. I originally came up with my flex programming language as a means to explore language constructs that could better enable programming of the system.

[1] In my system, all program interfaces, even GUI windows, are message stream sets[2] that are incoming, outgoing, or bidirectional. A message stream set that is normally expected to be displayed to a user has a GUI layout/skin defined, but that isn't necessary: it can be laid out by default by the user's window manager, or even given a new layout/skin by the user. It can even be linked up to other programs, and controlled by the program or passed in whole or in part through to the GUI.[4] Part of a program's data is the default layout/skin for each user-facing message stream set.

[2] A message stream set is a collection of related data elements stored in a shared memory segment. For a music player you would have buttons for play, stop, forward, backward, skip to next song, etc. You could also have a track listing window with columns for title, genre, artist, album, time, rating, etc. Another listing could be just for artists, and selecting entries in it limits the data presented in the track listing window to only tracks by that artist or selection of artists. Those data lists, ordered tuples, are defined by the task, and the GUI or another program can only read/select them, not edit them, unless of course there is an interface that allows editing ;). The shared memory space has two areas: read-only (to the GUI/other process) and read/write. Data elements that are not allowed to be altered by the client side go in the read-only area. A selection list could be provided in the read-only area, with the selection result being returned in the read/write area. The types stored with a message stream set are really primitives, groupings, and structure definitions. They can have type names, descriptions, etc., but that isn't needed. The polymorphic type system looks at the type primitives and structures and makes sure they are used correctly by a calling program. At runtime link time, they are checked only at the primitive level; actually, I am planning on an auto-generated ID, and if it matches, the checks are skipped. An interface could potentially be very complex.

[3] An editor widget could even be an instance of Emacs or vi, just confined to a text window. It would need to adhere to data type and size limits, but what the heck: a user could have their personal editor of choice for text entry.

[4] I'm planning on a pass-through primitive to allow part or the whole of a message stream set to be passed through unchanged. The program in charge of starting up the program then makes the links directly, or assembles stubs for the passed-through parts.

I actually first envisioned this GUI system back in the late '80s, but didn't have the time or resources to pursue it back then. I should have, but I was more interested in AIs then.

Nick PDecember 12, 2013 11:24 PM

@ Bryan

My ParaSail reference was for someone else. This was the one I showed you, as it involved many cores and on-chip customization.

As far as your described system goes, it reminds me of the Functional Reactive Programming (e.g. Elm) and dataflow paradigms. I bet those languages would work well on such an architecture. Other aspects I'd have to think on, as the design's security implications are complex.

Re BitC

It's a work in progress that is nowhere near production. You might want to look more at the other languages you mentioned, as they have finished compilers and tools.

Clive RobinsonDecember 14, 2013 9:20 AM

@ Vas Pup,

The article is light on technical details, and distinctly worrying in one respect.

I'm assuming, from having done similar work (on electronic wallets and pocket gambling machines back in the 1980s), that they intend to use the microwave EM carrier to get through ventilation slots and past cheap decoupling-cap filters to carry a "fault injection signature" into the on-board computers and cause them to fault in some way. They will however run into problems if they try to patent the idea: firstly, I hold 'prior art', as do one or two others (various researchers, including Ross J. Anderson). It was also discussed in depth in public forums when the car carrying Princess Diana crashed in a tunnel in Paris, leading to her early demise (though the usual UK tabloids did their usual shoddy journalistic job on the technical details). Secondly, the man who claims (incorrectly) to have invented Differential Power Analysis will almost certainly try to patent-troll with his own invalid patents, as they don't appear to be earning him much money currently.

However, in the NS article there is a dangerous assumption put forward about pulling the computers into repeated "reset". The reason it's dangerous lies in the assumptions that: 1) the brown-out circuit (if actually installed and working) won't be affected by the EM signal and fault signature (which I find dubious at best); and 2) the computer circuit will "fail safe" when attacked by the EM signal and fault signature (which I find to be a ludicrous assumption). Also, I would not want to have any medical electronics implanted or worn within several hundred feet of such a device; I've seen them go wrong in the past when subjected to a high-field-strength signal without modulation.

I can see somebody such as a bystander getting injured by the use of one of these units and suing the operator (LEA), joining the manufacturer in the action; the manufacturer, when pressed for design documentation, will refuse, or fail to provide evidence of sufficient testing, and thus a big fat chunk of money will be awarded to the injured party or their heirs/dependants. I suspect that it is this "fear of being sued" that is causing the stinger to be deployed less and less.

The real issue that lawmakers need to get to grips with is the consequences of the laws of physics with respect to mass and momentum, and what happens when the control system is impaired/disrupted/immobilised in an unsafe manner, which is what all these "technical stopping solutions" do. Essentially, by removing the currently controlling person's ability you are also removing all liability from them from that point on and putting it onto the shoulders of the person operating, or ordering the operation of, such equipment. Arguing "the greater good" might remove criminal liability from the LEA, but in many jurisdictions it won't remove the civil liability to those hurt.

WaelDecember 24, 2013 1:35 AM

@ Nick P

"I'll need some time..."
Well RobertT covered the price/performance issue pretty well. The remaining issues from all of our fab discussions are as follows:
I totally agree with RobertT's statements...


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.