scotch December 27, 2013 5:52 PM

US Military Commissions Sock Puppet Program

What’s old is new again

“The Guardian and The Telegraph are reporting that US based Ntrepid Corporation has been awarded a $2.76 million contract to develop software aimed at manipulating social media. The project aims to enable military personnel to control multiple ‘sock puppets’ located at a range of geographically diverse IP addresses, with the aim of spreading pro-US propaganda. The project will not target English speaking web sites (yet) but will be limited to foreign languages, including Arabic, Farsi, Urdu and Pashto. The project will be funded as part of the $200 million Operation Earnest Voice program run by US Central Command.”

Trogdor December 27, 2013 6:00 PM

Richard Fernandez discusses how Cass Sunstein’s role and recent laws “adding oversight” to the NSA (et al) really accomplish nothing while appearing to accomplish something. Call it “security theater” directed at those who are concerned about the security apparatus itself.

He comes to a pretty good question to consider:

The easiest way to judge a system is to imagine someone we don’t like in charge of it. Let’s imagine Cass Sunstein in charge of overseeing the NSA. Or let the left imagine say, Rush Limbaugh running the show because in the nature of things someone like Sunstein might actually be appointed to an oversight position. And who knows, maybe some right wing Tea Party guy might get the job next time. Can we live with that system? With what it empowers?

Good post.

Buck December 27, 2013 6:13 PM

I would love to see some young, gutsy, enterprising lawyers/journalists/security researchers pen-test the inherent vulnerability imbued in NSLs. One would not even need to forge such a document, considering very few even know what they look like… Thanks to the attached gag order, how would recipients verify authenticity without considerable risk to their own freedom? Who knows… It may even end up being far more effective than a FOIA request! 😉

Perhaps eventually the ‘letters’ would no longer suffice, and the feds would have to start sending out the men in black (and be sure to have their friends in the media make this completely clear to the general public). Impersonating a federal officer carries stiff penalties, and I would absolutely NOT recommend anyone attempt this! Although, that certainly wouldn’t be a problem for criminal organizations if they felt the reward was worth the risk.

Daniel December 27, 2013 8:20 PM


The problem with that logic is that it fails to take into account the inherent nature of ambitious men. It would be easy enough, for example, to imagine a constitutional provision that dictated that no one person could either earn a salary or accumulate wealth greater than a ratio of 1:10 over the lowest-paid person in the country. It would never happen.

The reason is because when we imagine power we imagine it egocentrically. We imagine not our worst enemy in power but ourselves. So no ambitious man wants to crimp the power of the state because in doing so he limits not only the other guy’s potential but his own.

The most obvious cultural example of this truth is America’s plethora of lotteries. Americans do not want to soak the rich; they want to be rich themselves. Who goes out and plays the lottery imagining the other guy as the winner? No one. If you did, you wouldn’t play. Lotteries only work because people imagine power egocentrically. “Somebody’s got to win, why not you?” as one lottery’s slogan went.

“The easiest way to judge a system is to imagine someone we don’t like in charge of it.” I personally believe asking Americans to do that is like asking pigs to fly and the moon to turn to cream cheese. It will never happen, not in my lifetime.

kashmarek December 27, 2013 8:39 PM

The entire NSA scenario just begs for a new edition of the movie-plot contest. Only, this time, it is not about the plot, but how it ends.

So, conjure up your thoughts on everything up to the end of Dec 2013, about the NSA that we know (or think we know) or what Mr. Snowden might know (though he might not actually know everything that he has acquired and placed somewhere to be revealed). How will this all end? How would you like to see it end? Is there an end?

This is a suggestion for Bruce’s next round in movie plots.

Godel December 27, 2013 8:56 PM

“The new system, dubbed “WiTrack”, uses radio signals to track a person through walls and obstructions, pinpointing her 3-D location to within 10 to 20 centimeters — about the width of an adult hand.”

Originally found on

The security implications are obvious.

Bryan December 28, 2013 12:55 AM

Trust me, this has relevance…
Beats Antique – Full Performance (Live on KEXP) [youtube]

From last week…

Nick P

The foreign companies racing away from American software/hardware are entirely justified.

They better be careful, though, that they don’t inadvertently run right back into America’s arms via one of its foreign subversions.

Or into China’s, Russia’s, or even somebody else’s.


I do agree that routers are target of choice for the NSA (and good hackers).

Me too. It’s curious that the day after I typed up a design and plan for logging the inputs and outputs on both the internal and external interfaces of the fiber-to-the-home router at my house, the route to my email and web server was busted again. It’s also very curious that I can still get a login prompt on a server just 10 IP addresses away on the same subnet and running on the same machine… Just like last time… It’s managed to baffle a couple of levels of tech support at the local ISP.[1] Last time nobody would tell me what was broken. I expect that’ll happen again.


There has been an insane amount of discussion about making secure hardware in the comments of this blog (not that I have a problem with it; it’s a very interesting subject), but it seems to me that 99.99% of the problems we have today are backdoors in firmware/software, exploits in code, lack of government oversight, excessive secrecy (at both the government and corporate levels), and a disconnect between the goals of private enterprise, government, and the people, as far as internet and communications are concerned.

The main reason is the state of current consumer computers. Once you get through one backdoor that allows a privilege escalation, the system is usually fully compromised. Hence the discussion on how to make a computer that is secure from the ground up. As for the firmware/software side, I have great faith in the quality of programmers: they will, by design or by accident, insert backdoors into systems. Best to design the system so backdoors only lead to tiny rooms with simple walls with vacuum message tube ports in them.
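A minimal sketch of that “tiny room” idea, assuming nothing beyond the Python standard library: the untrusted work runs in a separate process, and the only channel in or out is stdin/stdout — the “vacuum message tube ports” of the analogy.

```python
import subprocess
import sys

# The "tiny room": a separate process with its own address space.
# Its only connection to the parent is stdin/stdout -- the message tubes.
CHILD = "import sys; sys.stdout.write(sys.stdin.read().upper())"

def run_in_room(data: str) -> str:
    """Push data through the tube, collect the result from the tube.
    A compromise of the child exposes nothing but this narrow channel."""
    result = subprocess.run(
        [sys.executable, "-c", CHILD],
        input=data, capture_output=True, text=True, check=True,
    )
    return result.stdout

print(run_in_room("hello"))  # prints "HELLO"
```

Real compartmentalization would also drop privileges and sandbox the child; the point of the sketch is only how narrow the interface can be made.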

@Clive Robinson

One problem we have is we talk about “foundations of stone” or “shifting sands” as though the quality of the underlying ground is the primary requirement to build on… guess what, it’s not. It’s actually cheaper to build a barge of the same square footage than it is a house, and it will be just as secure resting on rock, sand (shifting or otherwise), or even water.

And then the barge develops a crack, the ground rumbles, etc.… As I said, I have great faith… We really need to have the hardware. We can now work on securing the upper layers, but we also must build that proper foundation. Yes, we are 10 years behind, but better late than never. That reminds me: another project from 10+ years ago needs reviving. It may even be more important than secure computing hardware.

If someone can implement a real-time direct-democracy system; rendering representatives pretty much useless and getting what we actually vote for, that’s progress.

If they did, there would be tyranny of the masses. You really really don’t want that if you value freedom. Just look at how easy it has been to get people to jump on the terrorism bandwagon and flush their liberty down the sewer.

Change, though, will require powerful private or public interests putting the squeeze on Congress and the President to put the squeeze on the NSA. That’s about the only way to stop them. Snowden failed to have an impact there. That’s where power interests are winning. That’s where almost all effort needs to be focused.

A good scandal or few are needed where the data is used to viciously harm somebody who is rich, famous, or politically well connected and liked. Only once that happens will we see any real change.

[1]Yep, I realize I have a couple compromised systems. I also have a few that aren’t.

Aspie December 28, 2013 2:26 AM

It seems the old ways are still the best.

If you want your kids to avoid something like the plague – insist that they use it. That’ll make it “uncool” and quickly put an end to its appeal.

Clive Robinson December 28, 2013 6:07 AM

@ Aspie,

Looking through the article the turning point is when a parent sends a “friend request”.

In effect saying “Do you want to be in my gang?”[1], which “creeped them out”, and they ran without even screaming NOooo. Which I guess is progress in some respects.

Having never seen the point of FacePuke, and having been spammed almost from day one by ConnedIn due to others giving them my details –forcing me to dump various email addresses–, I’ve a fairly dim view of social networking and those who are behind it[2].

[1] For those either not old enough or from outside of the UK, it’s a line from a –once popular– song by Gary Glitter back in the Glam Rock days, and it turns out he was a lot, lot more than creepy.

[2] Somebody I know was told by their –then– employers to join ConnedIn, as that was how the company was going to “reach out to customers and professionals”. He referred to it as like being told to “kiss your granny”. And I must admit that those I know who are keen social networking users have the same sort of annoying personality fault that makes them so annoying at Xmas and other social gatherings, where, after a rather enjoyable –and belt-loosening– meal, they insist everyone get up and play physically active party games till all but they are green in the face, at which point they make one of those statement-questions of “Aren’t we all having fun?”, to which few would dare answer honestly…

name.withheld.for.obvious.reasons December 28, 2013 6:19 AM

After this week, with two judges (is the NY judge serving on a district court?) returning different opinions, I think the federal court(s) are engaged in legal trolling. I am rehashing MY opinion based on the FISC rulings… it’s still relevant.

From what I can gather, New York Judge William Pauley is the victim of a crime. Evidently the university at which the FISA judges received their degree(s) lacked the proper accreditation to issue lawful diplomas for a degree in law. Quite possibly the unnamed universities (can’t reveal sources and methods) failed to properly tenure professors in law. Or the university course curriculum was designed by mathematicians and conducted wholly by two elves and a reindeer –that’s my suspicion– and the law it taught is an ass. I hope the judge didn’t take out a student loan to pay for such an insufficient education. Maybe there’s a chance at getting a refund; it’s clear that the judge has suffered from the experience –both intellectually and financially.

The “Court’s” rendition of the facts in its argument for the constitutionality of the government orders for bulk phone records, section 215 of the Patriot Act, is as idiotic as they come…

  1. In using Smith v. Maryland regarding expectation of privacy:
    1. it was a pen register case, and
    2. it concerned an individual (there is no corollary to the parties in this case–I don’t see how it is relevant); and,
  2. as Smith v. Maryland is the way in, all other arguments supporting the “Court’s” claim are vacated.
  3. Three is not necessary, since the judgment in my opinion would be vacated.

  4. Where is the authority for the “Court” to render constitutional opinions with respect to any “government” request? The 4th Amendment does not apply to the government–as such the government has no standing. Therefore, the judge could not possibly rule on a case where there is
    1. only a plaintiff or defendant; and,
    2. due to the scope of the order, it would be impossible to contravene the supposed defendant as an individual.
    3. As this issue is between the government and, as in the Smith case, an individual injured by the government, it precludes the government’s “expectation of propriety”.

    I leave this with you to ponder…

Clive Robinson December 28, 2013 6:20 AM

ON Topic,

I guess the photo of Kim Jong-un standing there should be titled something like,

    Kim Jong-un looks beneficently on all his friends

Oh, and for those wondering why the Squid Factory has had such a bumper harvest of squid, it might just be down to the fact that, due to overfishing of other sea creatures, the natural balance has been altered in the squids’ favour. They are now decimating other fish stocks and have reached the point where, in some areas, they are now the dominant species. And in those areas it won’t be long before the squid are so numerous they will have destroyed their own food supply, and the areas will become not just economically but environmentally barren.

BlackAngel December 28, 2013 8:44 AM


“A good scandal or few are needed where the data is used to viciously harm somebody who is rich, famous, or politically well connected and liked. Only once that happens will we see any real change.”

This is what I was thinking, especially after seeing this NY judge’s report. So far, the surveillance is said to be bland, useless. His report was so ass-kissing, I was not sure if he had so many skeletons in his closet that he was scared to have an independent thought –or if he was blackmailed already.

Then, I realized he was probably just without any capability for independent thinking. Occam’s razor.

Which means, in order to see the surveillance system come down and freedom be restored and buttressed properly for the ever challenging future… there has to be some manner of major message sent.

Fiction can send that message OK, but not strongly enough to really ensure no one ever abuses this sort of technology in the future. After all, what else actually threatens free nations but those inside them? Small groups of terrorists? Small nations? Quite the reverse, as is clear from the past centuries of scapegoating minorities and the weak to prop up totalitarianism.

Those inside with power threaten freedom. The power hungry.

Extortionists, however, can be difficult to catch and expose. Surveillance, after all, primarily leads to extortion, as a means of power. Secondarily, it opens avenues for manipulation: find out what people’s preferences are by surveillance and you can control them, by controlling what they secretly want the most and what they fear the most. Even the dullest among us have the instinctive understanding to hide their deepest desires and fears from the public.

So, for Weiner, it was as simple as finding certain types of women who would allure him, making him feel as though he was attractive. Which he clearly was not. So they were paid a pretty penny. For the head of the CIA, he needed an affair with an intelligent woman who could add a spark of danger and secrecy to it all… so his mind could be numbed. (And as for the dummy FBI agents who followed the breadcrumbs that led to his destruction, they wanted their names in the paper. Fame.)

(And so on and so on. What did Obama want and fear? What has Alexander’s dog collar and spike been?)

But… I remain skeptical such things could ever be exposed… as people never believe the sophisticated and complicated even when they are told it. It means believing someone else is smarter than they are, and this offends their ego, so they do not want to believe it.

Aspie December 28, 2013 12:57 PM

@Clive Robinson

As a fellow Brit I know you’ll understand why the take-up of “SocNet” is as popular as it shouldn’t be amongst the older generation.

The reason so many of us “ear-trumpets” have flocked to SocNet (present company excepted of course) is to see what our little darlings are up to. Now they’re getting scared that we’ll be “down” with the lingo and viddying them whilst “givin’ it large” with the “hep cats” and “biggin’ up” their mates trying to coalesce a “scene” with the Haight-Ashbury crowd’s grandchildren.

I might have got some of the terminology out of kilter (what with not actually giving a flying fck what some spotty little ingrates are doing with their inheritance) but I hope to have caught the flavour, which is this … get the hell outside and do something in meatspace for chrstsakes! Oh – and watch out for them batons and pepper spray.

History repeating.

Zaphod December 28, 2013 3:31 PM


Not really – once the squid ‘destroy their own food supply’ and dwindle in numbers, the very same food supply will start to recover, and so the merry dance perpetuates.


r2d2 December 28, 2013 6:47 PM

“The Guardian and The Telegraph are reporting that US based Ntrepid Corporation has been awarded a $2.76 million contract to develop software aimed at manipulating social media. The project aims to enable military personnel to control multiple ‘sock puppets’ located at a range of geographically diverse IP addresses, with the aim of spreading pro-US propaganda.

Will it also spread pro-Google statements, or are those handled by fanbois and employees (the latter being dependent on the good stock-market valuation of that advertising company)…

I mean, with that recent purchase of the war-robot company, Google is probably becoming more a part of the government than ever before. So they might as well take care of each other.

Clive Robinson December 28, 2013 8:10 PM

@ Zaphod,

Something can only recover if there is a sufficiency of breeding stock and a food supply for it.

Some squid are known to wipe out entire food chains, especially when there is also an inverse dependency (i.e. a lower level of the chain is dependent to some extent on a higher level).

When such a prolific predator does this, other creatures have to move in from outside the area. Often they are different species, and whilst this is happening the environment changes, so the original aquaculture is lost entirely.

Also, the squid may not “die back”; they may likewise move to other areas, and this can leave a real vacuum behind them. This is being seen with the “red devil”, which is moving out of its normal areas…

I guess “time will tell” but one thing is certain is that it will have an effect on land dwellers such as man.

r2d2 December 28, 2013 9:33 PM

If it is, as Mike the Goat and some others have said on this blog, that Ed Snowden did not accomplish much because (for one reason) those in the Internet business already knew that the governments spy on people…

…then most likely Google knew too. So why didn’t they block the spying, at least from their side?

What a “friend” you have.

Me December 28, 2013 10:15 PM


I disagree. I think Snowden accomplished a lot because he brought awareness to the problem. The best remark that I have read about this whole affair is this: “before Snowden I thought I was too paranoid of the American government. Now I know I wasn’t paranoid enough.”

Then there is this recent blog post:

When even law enforcement types are beginning to question who they are really working for and what is really going on, that has an impact. And it is the Snowden revelations that are the cause.

Did or will the revelations by Snowden work a miracle? No. But he has shifted the debate and put the NSA and others on the defensive. That alone is progress. Now it’s up to everyone else to make sure his efforts were not in vain.

65535 December 29, 2013 12:28 AM

@ Nick P


Routers do have firmware, and it can be manipulated. And I have seen very odd things happen to routers, including sudden resets to default values (admin passwords) and changes to the default gateway and default route.

I do think routers are the target of choice. I have found that most SOHO routers claim to accept “admin” passwords of 64 to 128 characters – yet actually truncate them to ten characters.

For example, Belkin routers truncate long passwords to 10 characters – and that is breakable (I have ‘Googled’ around the web, and other people say the same of other well-known routers).
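To put a number on what silent truncation costs (an illustration, not a measurement of any particular vendor’s firmware): the entropy of a random password grows linearly with its length, so chopping 64 characters down to 10 discards most of the keyspace the user thought they had.

```python
import math

def keyspace_bits(length: int, alphabet_size: int = 94) -> float:
    """Entropy in bits of a random password of `length` symbols drawn
    from the 94 printable ASCII characters."""
    return length * math.log2(alphabet_size)

advertised = keyspace_bits(64)  # what the admin UI appears to accept
effective = keyspace_bits(10)   # what the firmware actually checks

print(f"advertised: {advertised:.0f} bits")  # about 419 bits
print(f"effective:  {effective:.0f} bits")   # about 66 bits
```

66 bits of effective strength is far weaker than what the user intended, and well within the realm an attacker might attempt offline; the headline length is meaningless if the firmware silently discards most of it.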

Cisco enterprise routers are subject to CALEA – the loophole for the NSA. Further, I think the backplane management passwords are probably truncated as with consumer routers, and SSH has probably been weakened (I have no proof other than hearsay).

Bruce has separated from BT. Now we have a speculative report from Cryptome indicating a full backdoor placed in the firmware of BT’s China-made modem/routers (called BT Agent). That is to say, BT supplied the hacked firmware; the hardware was supplied by Chinese maker Huawei.

The Cryptome report also indicates that major Certificate Authorities simply send a copy of SSL keys to the NSA when a customer purchases a certificate. If true it is very troubling.

Next is the fingerprinting of TOR packets. Routers are close to the end-point (although there are other ways of unmasking TOR).

Also, the use of the 30.x.x.x/8 network, supposedly for increasing the size of “non-routable” IPv4 address space, seems odd. (As far as I can tell there is no shortage of non-routable IP addresses, given the use of NAT.) Why do it?

That address space is used by the NSA. Why would you ask the NSA for permission to use their routable IPv4 space for non-routable use? Couldn’t the NSA just mirror your use of said IPv4 space and pipe it into their databases? You would, after all, be using valid IPv4 address space.
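The standard library makes the oddity easy to check: 30.0.0.0/8 is not in any of the RFC 1918 private ranges, so hosts numbered from it are sitting on globally routable space.

```python
import ipaddress

# RFC 1918 private ranges vs. the 30.x.x.x block discussed above
for addr in ["10.0.0.1", "172.16.0.1", "192.168.1.1", "30.0.0.1"]:
    ip = ipaddress.ip_address(addr)
    status = "private (RFC 1918)" if ip.is_private else "globally routable"
    print(f"{addr:<12} {status}")
```

Only the first three are genuinely non-routable; traffic addressed into 30.x.x.x that leaks past a misconfigured border will be routed toward the block’s registered holder.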

Routers rarely ever get updated; the companies just build new models. What is to stop the NSA from hacking the firmware of the router/modem?

Bob S. December 29, 2013 6:58 AM


re: “The project will not target English speaking web sites (yet)..”

Maybe not that one, but I am very sure there is some kind of contract in place for American media. There was a seismic shift in commentary soon after the Snowden revelations, in that a lot of the Pollyanna-America commentary simply disappeared.

I figure it was stopped before it could be revealed.

Sometimes even now, if you engage a hard-core pro-Orwell poster in dialogue, you find out he works or worked in the industry. I suppose admitting it makes it a little better.

Nick P December 29, 2013 11:25 AM

Nice link, Bruce. A previous discussion here came up with the ideas of chips/radios in USB and subverted cabling. Then I see these in action. Seems Bruce’s blog attracts more innovation in one thread than NSA TAO has pulled off since 2008. Gotta wonder if they’ve been reading the blog and then suggesting stuff to the project managers. Can the NSA be liable for copyright infringement? We’ll sue them for… (Dr Evil gesture) “1 BIILLLIOON dollars!!!” Or issue a DMCA takedown notice on the TAO office.

Expect them to do more with emanations, ultrasound, old/obscure systems, firmware, and chips now that it’s been discussed here. I expect them to produce one remote zero-day in each major old OS and firmware on its native platform. They know we’ll be using them, so they’ll start doing badBIOS-like tricks on those. They weren’t built for security, so it will be easy. Anyone depending on those needs to really air-gap them, with carefully managed serial ports or one-way cables. Any ability to send a signal back might be an exploit.

A group really needs to get together to fund work on the defense side by people with the right background. That’s the main problem. Even if we had a unifying vision of where to go, the money supply to get there, even for software and a few basic chips, is missing. So many CARE about this NSA problem, but few want to PAY for a solution.

Nick P December 29, 2013 1:21 PM

Nice list of high security devices from many countries

We’ve been doing several things here:

  1. Trying to design highly private communications and computing schemes using both old school and new approaches.
  2. Trying to deal with the “subverted by country X” problem.

Well, the easy solution to No. 2 is buying from a competing country unlikely to subvert the product. Still some risk there. I’ve always said that for No. 1 we should look at how the NSA and such protect their biggest secrets, then copy such tech and approaches. Same applies to NATO. So, here’s the list of products NATO endorses for various classification levels and purposes.

In emulating things, it’s best to emulate devices doing COSMIC TOP SECRET, with NATO SECRET a second best. Notice how different the designs and operational strategies are between those products and the ones certified at RESTRICTED/CONFIDENTIAL. It’s like they come from entirely different design schools. Hint: they do, and only one approach actually works against strong attackers.

Each solution on the list also names the country that contributed it. Composing solutions from different countries seems to be the best approach. For instance, I might use a secure terminal from one country, a crypto chip from another, and connect via a link encryptor from yet another. OSS-based security (e.g. OpenBSD) on diverse hardware can be injected in different spots for extra prevention or intrusion detection.

Bonus: This link is one of their devices for key generation. Looks like something the Ghostbusters would carry with them. Notice they use dedicated, highly assured devices to generate their keys and then other devices to move them. Most COSMIC TOP SECRET equipment has a “FILL” port where key-fill devices connect. Nowadays, they also support over-the-air rekeying with their FIREFLY protocol (based on Photuris). This leads to a requirement: “Key is managed by dedicated devices, and extra engineering is put into safely moving key to devices that need it.”

Bonus: This device reminds me of a very cheap word processor or electronic organizer. Again similar to my own trusted path design except very bulky, likely due to EMSEC & MILSPEC. This is a device that we might emulate for secure messages, commands, key generation, key storage, etc. The screen is too tiny, though, hence my preference for the old electronic organizer type devices whose screen was as large as a cell phone.

Those were just two I found looking into a tiny portion of the site. I’m sure there are more fun products for the rest of you to find. Another thing: I got a memory jolt that many high-security products are available from Kongsberg, located in Norway. If you trust Norway, and Kongsberg is loyal to them, then they might be a good company to do business with. They have existing products and the expertise to make new ones with military-grade security properties.

@ regulars in subversion discussion

What do you all think of Kongsberg? Any bets on other companies on the list? Remember that knowing which could be trusted on a future project is more important than which have products we need now.

Final, totally unrelated note: Looking at Groupe Bull mainframes, I noticed they have their own secure cell phone made entirely in France. I’d get the simple one instead of the smartphone, as the latter is highly vulnerable due to complexity and Android support. I put their offering in the same category as Cryptophone: secure against many attackers, but the endpoint is probably weak against the NSA. People wanting that kind of safety (French made) should contact Thales about the phone they made for the French govt. Notice how simple it was. Those who aren’t worried about NSA spying at all, but fear other countries, should go straight for the GD Sectera line.

Coyne Tibbets December 29, 2013 2:07 PM

There are two highly interesting articles in Volume 27 issue 66 of the Risks Digest.

The first of the two articles, Security versus Countersecurity, by Donald Mills, discusses security and cyber-warfare: how the two are incompatible with each other, and what that inevitably means for security of all types when a government is preparing to pursue both.

The second article, Re: RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis, by Henry Baker, relates to last week’s article on acoustic cryptanalysis right here on this blog. It discusses how LED light bulbs could be modified with a microphone and a modulator for the light output, to communicate that acoustic computer information to anyone with a telescope that can see a window. Unmentioned, of course, is the obvious possibility of equipping a house with such bulbs (or even simply equipping every LED bulb with this technology) to allow listening in on ordinary conversations within homes…
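The modulation half of such a bulb is alarmingly simple. A sketch of naive on-off keying (purely illustrative, not taken from the Risks Digest article) shows how little it takes to put bits onto a light output that a photodiode behind a telescope could recover:

```python
def encode(data: bytes, bright: float = 1.0, dim: float = 0.9) -> list:
    """Map each bit (MSB first) to a brightness sample; a ~10% flicker
    is invisible to the eye but trivial for a photodiode to resolve."""
    samples = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            samples.append(bright if bit else dim)
    return samples

def decode(samples: list, threshold: float = 0.95) -> bytes:
    """Recover the bytes from the observed brightness samples."""
    out = bytearray()
    for i in range(0, len(samples), 8):
        byte = 0
        for s in samples[i:i + 8]:
            byte = (byte << 1) | (1 if s > threshold else 0)
        out.append(byte)
    return bytes(out)

leaked = b"RSA key bits"
assert decode(encode(leaked)) == leaked
```

A real implementation would add framing, timing, and error correction, but the channel itself is just this: brightness as a bitstream.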

In a sense, the articles even dovetail, since the government desire for security and cyber-warfare has led them, apparently, to subvert a national standard. How would we know if they have decided that security and cyber-warfare requires that all bulbs be equipped with listening capability?

Skeptical December 29, 2013 2:54 PM

The recent opinion from the Southern District of New York in ACLU v. Clapper is required reading for anyone curious as to the legality of the NSA telephone metadata program.

See especially pages 37-44 if you’re interested in the 4th Amendment questions raised by the program.

In brief, with respect to the 4th Amendment, the Court finds:

  • as per Smith v. Maryland, the telephone metadata collected is not protected by the 4th Amendment because such information is voluntarily disclosed to the telecommunications provider and forms part of that telecommunications provider’s business records.
  • while we may use cell phones for many purposes other than making telephone calls, the telephone metadata program collects only information that pertains to telephone calls, namely outgoing and incoming call numbers, duration, and so forth. The changing nature of cell phones, therefore, does not affect whether this program falls under the reasoning given in Smith, as some have argued.

The opinion is a crisp and clear explanation of the law as it stands today, and includes some useful history as to the role of secrecy in government institutions (noting that secrecy in some respects is antithetical to democracy, but also noting that some secrecy has long been held vital to effective government even in a democracy, citing sources ranging from the US Constitution (providing that Congress may withhold some records of proceedings as secret) to the Federalist Papers to current US law).

I have to say that viewing the opinion from a legal vantage, there is nothing remotely surprising or controversial within it. If I wanted to point someone to a good guide to current law (not law as we wish it might be, but law as it actually is), this would be it.

The opinion is lengthy, addressing many legal questions, but one can fruitfully focus on the sections of particular interest without reading the opinion in its entirety.

I’d note that the judge who issued the memorandum and order was appointed by a Democrat, President Clinton. This judge will be the 16th judge (15 others from the FISA courts) to have found the program legal. There is a decision from a district court in the DC Circuit, finding that the program is “likely” unconstitutional, but the opinion is frankly weak in several respects, and I don’t expect it to survive an appeal.

BulkyAtoms December 29, 2013 3:50 PM

The courts seem to recognize that the business records of lawyers deserve confidentiality. What if your phone company, ISP, and/or secure email provider hired a lawyer as CEO and declared itself a law firm and confidential legal-communications provider? Would that bring the metadata they collect some protection? It would probably still be subject to court-approved subpoenas, but at least it might be exempt from bulk warrantless collection.

Clive Robinson December 29, 2013 5:30 PM

@ Bruce,

If you go take a careful look at the bottom of page 2 of the first link you provide, there is a distinct oddity with regard to GCHQ…

If the NSA TAO are so clever etc etc etc why can they not do what GCHQ can?

I think people will want to tread carefully and think very carefully about that, and as a first step drop assumptions about who is in the driving seat and who is riding shotgun…

Clive Robinson December 29, 2013 5:57 PM

@ Nick P,

Of the equipment list the one I have most knowledge of is the BID 950. And the “fill port” on that is that lump of silvery looking metal on the front panel which is an optical tape reader….

In the blurb it mentions the BID 610, which was a horrendous beast of interconnected units. Internally it was designed using 4000-series CMOS chips, and it did not play well with rough terrain and being bounced in the back of 4-ton trucks. Suffice it to say I always took a tool kit with me when in the field with that one, and had to dig into its guts a number of times.

As a general note, over on the old-world side of the Atlantic, “stream ciphers” were and I believe still are favoured for low-through-high-level circuits. Over on the new-world side, however, “block ciphers” are favoured for low-level circuits and some data-at-rest storage. Which might give you something to think about…

Iain Moffat December 29, 2013 6:04 PM

@ Nick: The Norwegians actually supplied the US with some NATO crypto in the 1970s (RACE, aka KL-51), so perhaps competent but not so independent? The machine is compatible with the Kongsberg PACE. Kongsberg’s defence crypto business (or at least one component of it) started in Norway as Lemkuhl, and was for a time a Thales subsidiary, before coming back under Norwegian ownership.

I do agree that much can be learned from military and government crypto – I think the challenge is how to find an acceptable balance between the convenience of PKI in today’s web and the key security inherent in the various forms of key distribution practiced by state level operators.

Where most private use and state or corporate use differs is that state or corporate users are hierarchical and can work with a single root of trust. For most private users a need for an external CA-like root of trust is at best an overhead and at worst a risk of key compromise.

Most real-world web users want something that “just works” and displays a padlock in their browser when they need the reassurance to part with financial data. They don’t want to be consciously involved in making it work. State users are able to pay people to operate complex processes correctly most of the time (history teaches not all of the time). If the secure way is to be adopted, we need to make it the easiest way, without requiring either trust in root certificates from a CA or a physical exchange of key material at least once for each pair of users (or user and service). Self-signed certificates are not user friendly, and most people won’t wait for physical key media now. I think SSH (with at least the ability to check that subsequent connections are to the same remote party) is about the best that I have seen.
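That SSH behaviour is essentially trust-on-first-use (TOFU) key pinning: remember the peer's key fingerprint on first contact, and refuse later connections if it changes. A minimal sketch of the idea in Python (all names here are illustrative; this is not a real SSH implementation):

```python
import base64
import hashlib

# Toy trust-on-first-use (TOFU) store, in the spirit of SSH's
# known_hosts file.
known_hosts = {}  # hostname -> fingerprint

def fingerprint(pubkey_bytes):
    """SHA-256 fingerprint, base64-encoded as modern OpenSSH displays it."""
    digest = hashlib.sha256(pubkey_bytes).digest()
    return base64.b64encode(digest).decode().rstrip("=")

def check_peer(host, pubkey_bytes):
    fp = fingerprint(pubkey_bytes)
    if host not in known_hosts:
        known_hosts[host] = fp        # first use: trust and remember
        return "new"
    if known_hosts[host] == fp:
        return "ok"                   # same key as before
    return "CHANGED"                  # possible man-in-the-middle

# First contact pins the key; a different key later is flagged.
assert check_peer("example.net", b"key-A") == "new"
assert check_peer("example.net", b"key-A") == "ok"
assert check_peer("example.net", b"key-B") == "CHANGED"
```

The weakness, of course, is the first contact: TOFU cannot detect an attacker who is already in the middle when the key is first pinned, which is why some out-of-band check of the fingerprint is still desirable.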

In the non military world a hierarchy of secure key distribution and fill devices reaching from the root of trust to all the end user devices in a community is probably unfeasible – I suspect most people concerned about secure personal communication need a decentralised system so they can choose who to trust without having to turn to a third party (so a peer to peer approach to distribution) and the keys need to be distributed either electronically (which raises the same issue as OTAR key encryption keys in military radio) or physically (in which case removable media rather than a fill device are easier to post). Bar codes on paper in the post perhaps?

I think a realistically achievable approach to a secure web/internet for end users is to

  • Develop an alternative peer to peer key infrastructure to SSL and certificate authorities and get it supported by mainstream browsers and e-mail clients so it gets critical mass. This should ideally support both “over the network” and “out of band” key distribution in reasonably user friendly ways and not be tied to any specific session encryption or key format.
  • Develop an “open source” home router to fit between an ISP “home gateway” and the trusted “home” network that is fully open and auditable at both hardware and software levels, including firewall and IDS functions, using as many hardware security precautions as possible to mitigate external compromise either over the network or through bad code. The KA9Q NOS on which I learned IP around 1990 might be a model (probably too old to be a starting point?) for this, as it needs to be based on something simpler than Linux to have a chance of being auditable and running on hardware that can be assembled by an end user. The hardware should be simple enough for people to build from a reference design with multi-sourced components if they want to minimise trust, but include at least hardware for data execution prevention and program write protection separate from the CPU.
  • There remains the problem of compromise of data and keys through end user machines running commercial operating systems in the trusted network. Ideally we would develop a new PC and OS architecture optimised for security rather than backwards compatibility but I suspect that the resources required for such a project are now such as to be beyond an individual or a small group with mutual trust. I do however think there is a potential open source hardware and software project for a crypto device that operates as a peripheral to the PC with a key entry path independent of the PC. That is I think the only way to be free of key loggers and other malware getting access to keys. I would see such a thing as a USB or serial device receiving plaintext from the host, returning ciphertext to the host, and provided with its own keypad and calculator display (or removable storage?) for key management. Such a thing ought to be possible with a simple microprocessor and a code base small enough to be decompiled and understood by a competent person in a few days using a well understood non-agency-sourced algorithm (e.g. Twofish) that is robust against a large known-plaintext attack inherent in the host PC having access to both sides of the device. A development of this device would be to provide two host ports so it can act as the secure bridge between a “trusted plaintext” and an “untrusted internet” computer – the challenge being to present it as an easy to use operation to move data through the link as seen by an end user without requiring a complex software implementation in the device itself !
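The third item above can be made concrete with a toy model of the host/device boundary (hypothetical names throughout): the key lives only inside the device, entered on its own keypad, and the host only ever sees plaintext in and ciphertext out. The keystream here is an ad hoc SHA-256 construction purely to keep the sketch self-contained; a real device would use a vetted cipher such as the Twofish mentioned above.

```python
import hashlib

class CryptoPeripheral:
    """Toy model of the proposed external crypto device. This sketches
    the interface (key isolation from the host), not the cryptography."""

    def __init__(self):
        self._key = None  # never exposed to the host side

    def keypad_enter_key(self, key: bytes):
        self._key = key   # models key entry on the device's own keypad

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        # Stand-in keystream: SHA-256 over key || nonce || counter.
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                self._key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def process(self, nonce: bytes, data: bytes) -> bytes:
        """Host-facing call: XOR with the keystream (encrypt and
        decrypt are the same operation for a stream cipher)."""
        ks = self._keystream(nonce, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

device = CryptoPeripheral()
device.keypad_enter_key(b"entered-on-device-keypad")
ct = device.process(b"nonce-1", b"attack at dawn")
assert ct != b"attack at dawn"
assert device.process(b"nonce-1", ct) == b"attack at dawn"  # round-trips
```

The design point the sketch illustrates is that a keylogger on the host captures nothing of the key, only plaintext the host already had: the key material never crosses the host-facing interface.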

I am a former hardware engineer, not a cryptographer, so I leave the implementation of these suggestions as challenges to those better qualified than me, but I believe that these three steps would be sufficient to get to the point where knowledgeable users could be in control of their own security to a sufficient extent for most lawful purposes again.


Skeptical December 29, 2013 6:07 PM

Well, this will vary somewhat by jurisdiction, but actually the billing records of an attorney that don’t reveal the substance of legal advice either sought or given, or the legal judgments of an attorney or an agent thereof in connection with legal work, are discoverable (i.e. can be subject to a request for information by an opposing party during litigation or to a subpoena from a grand jury or other similarly empowered entity). Communications with your attorney for the purpose of soliciting or receiving legal advice may be protected by attorney-client privilege, but the mere involvement of your attorney in a conversation or service does not thereby confer a protection of privilege upon it.

So in short US law does recognize special privileges for certain kinds of information which we share with others and protects them from additional disclosure, such as that subject to attorney-client privilege. These aren’t 4th Amendment protections, however. And telephone billing records have never fallen into that category, and indeed have explicitly been held not to fall within that category.

We can argue about whether they should. We can argue about whether bulk accumulation of telephone metadata into a hybrid court-controlled/executive-controlled database is a good idea. But that’s a policy question that should really be decided by legislatures. As the law stands currently, they’re not protected by that kind of privilege. And this isn’t a new development, really; the power of the grand jury extends well back into English common law, and that power is the historical antecedent for what you see in the Section 215 business records provision.

Clive Robinson December 29, 2013 6:28 PM

@ Skeptical,

    This judge will be the 16th judge (15 others from the FISA courts) to have found the program legal.

It actually does not matter how many judges agree when basically they all base their reasoning on Smith v. Maryland, which was a bad decision when made and which time has only made worse.

The reasoning is almost always as you quote,

    – as per Smith v. Maryland, the telephone metadata collected is not protected by the 4th Amendment because such information is voluntarily disclosed to the telecommunications provider and forms part of that telecommunications provider’s business records

The problem is “such information is voluntarily disclosed”: if people think about it, there is actually nothing “voluntary” about the disclosure, nor is there anything voluntary about this information “forming part of… …business records”.

When this nonsense was spouted the phone service was a Government licenced monopoly with no alternative method of real time two way communication over distance available because the Government wanted it that way. The only choice you had was to communicate over the phone or travel to where the other party was and no doubt be followed in the process.

Back when the founding fathers drew up their various documents, only streets and public areas in towns during social hours were subject to public sight. If two people wanted to communicate privately, they walked away from the “public areas” of town and streets and met in any other place that was de facto private.

Since that time the government has done its best to render every place, including the most private in your home, “public” by one means or another, and likewise every communication. It is long past time for people to say “enough of this nonsense and pretense” and take up the Herculean task of cleansing the filth from the state stables.

name.withheld.for.obvious.reasons December 29, 2013 6:56 PM

@ Iain Moffat

Another area of concern is the use of self-signed certs on embedded systems (AKA SOHO routers and industrial devices). One thing that has been bothering me is “at rest” encryption used by applications such as zip, Adobe, and MS Office. With the scarfing up of all encrypted data, using that data to find collisions, or exploiting the known weak RNGs, constitutes an unacceptable risk.

milkshaken December 29, 2013 7:36 PM

In view of the TAO/NSA story from SPIEGEL International: how likely do you think it is that the demonic badBIOS could be real, after all?

Iain Moffat December 29, 2013 8:04 PM

@Name Witheld:

I think the badness of self-signed certificates depends largely on the threat that is of most concern. If you want or need SSL but have other means of ensuring that the far end is who and what you think it is (e.g. you control and built both ends but don’t trust the network between), then the potential for key leakage by involving a CA is possibly greater than the risk of being unable to validate the remote end and revoke certificates. I know certificates can be pre-shared, but it is not a user-friendly operation with current browsers!

I agree with you that many (most?) of the various at-rest encryptions used in modern home and small office computing are open to doubt, either because we don’t know enough about them or because they come from places which have been mentioned in recent news stories (or both).

The problem is not helped by most proprietary and industry-standard data file formats having a substantial amount of structured metadata, i.e. known plaintext, inherent in their design, which even if not put there for the purpose must greatly help cryptanalysis unless the data is compressed first to disturb the structure. Having said that, I don’t think encryption of data at rest is a complete cure, as any attempt to use the data in a normal PC inevitably results in plaintext in memory that can be exfiltrated.
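The compress-first point can be shown in a couple of lines: a format's fixed header is known plaintext at a known offset, and compression at least displaces it. (As the caveat in the code notes, the compressor contributes a small fixed header of its own, so this disturbs, rather than eliminates, known plaintext.)

```python
import zlib

# A document in a structured format carries predictable bytes at a
# fixed offset (the format header): classic known plaintext.
doc = b'<?xml version="1.0" encoding="UTF-8"?><note>meet at noon</note>'
assert doc.startswith(b"<?xml version=")

# Compressing before encrypting disturbs that fixed structure: the
# header no longer sits as recognisable bytes at a known offset.
packed = zlib.compress(doc)
assert not packed.startswith(b"<?xml")

# Caveat: zlib adds its own small fixed header (first byte 0x78 at the
# default settings), so some known plaintext remains, just much less.
assert packed[0] == 0x78

# The transformation is lossless, so it costs nothing in fidelity.
assert zlib.decompress(packed) == doc
```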

These issues can’t really be addressed in a machine that uses the same CPU, memory and OS instance for everything and runs the plaintext application and the encrypted storage and networking sides under the same user at the same privilege level. I think if anyone does design a better PC architecture it needs to address this by separating these roles on different hardware, more like a mainframe and its front-end processor.


Bauke Jan Douma December 29, 2013 9:48 PM

Friday Squid’s come and gone. Let’s pretend it’s Monday, ‘kay?

Here’s the deal — for all you experts:
Ordered a USB-RS232 device about a month ago. Got it delivered after three weeks. No delays — really, the wrapper envelope looks ‘real’ (tm?). China and all. Neat little thingy too, screams “plug me”.

Should I plug it?
Would you?

Ref (but not limited to):

BulkyAtoms December 29, 2013 10:54 PM

@Skeptical – If your telecommunications company was a law firm, I still wouldn’t expect the records to be protected by the nearly absolute attorney client privilege, which goes far beyond normal 4th Amendment protection. But at least the reasonable expectation of privacy might require a subpoena like is currently required for content.

The point of having your telecommunications company run by an attorney is actually mainly just to make it clear to judges that, like in a law firm, just because your data is a business record and accessed by secretaries, computer technicians, and other employees, doesn’t mean you don’t have an expectation of privacy.

Of course if your communications company tells you in its contract that it sells your data on the open market, then you obviously don’t have a reasonable expectation of privacy. Telecoms might be reluctant to give up that revenue.

Nick P December 29, 2013 11:04 PM

Thanks for the link on laptop intercepts. There were people here worried about this when I posted my recommendation to buy old hardware. The names seem to indicate it’s new stuff they’re intercepting but I assume they could get old stuff too. There are two solutions to this:

  1. Drive to an area that has the hardware present and buy it there. Make sure you anonymize both the browsing and the call to the seller to hold it for you.
  2. Pay someone to buy it for you. Make sure they have a legitimizing cover, such as working at a college comp-sci dept, computer repair shop, or integrator.

“The report indicates that the NSA can even exploit error reports from Microsoft’s Windows operating system.”

Good that I always hit “Don’t Send.” Wasn’t sure about specifics at the time but had a feeling it might be a problem at some point. I felt this way even more after seeing the extremely clever academic paper on automatically producing exploits from Microsoft patches. Made me rush my patches too. (shudders)

@ Bauke Jan Douma

I doubt you’ve been targeted by a hardware intercept. No offense but you don’t seem that important to them. I’d expect them to send it to people with good intel or who cause them big problems. If I were you, I’d be more worried about vanilla USB threats and software-based attacks (by hackers or NSA). Remember most of their attacks are via insecure routers, fake certificates, MITM injections, and so on.

Dark Reality

My way of looking at it is assuming they’ve compromised every aspect of the computer I’m using to connect here. I’ve assumed they are building a profile of me and everything I do online, along with capturing digital secrets on the machine. They are probably doing this for Schneier and everyone who has supported him here. They know someone will challenge them eventually, so they’re keeping an eye out across the board, collecting dirt to use in court and other hit lists. Without extreme and expensive measures, there are few protections their engineers can’t bypass.

So the best advice I can give you is to use the Web as if you’re already compromised by nation states. Make the protections make sense. Try to stop the rest of the attackers. Restore the PC from a clean slate on non-writable media occasionally, update it, and back that up to similar media. Occasionally replace the PC. Past that, though, there’s no reason to assume your defenses worked against the NSA and others’ million-dollar-plus attack programs. PCs plus the Web is a dangerous combination.

As far as an air-gapped machine goes, it needs to have been purchased for that purpose and never connected to your Internet. Disable things like its wireless in the BIOS and lock the BIOS. If you’re moving data over a serial port, you might be fine as long as you carefully set up the software doing the transfers to have minimal access. Non-x86 and non-Windows can reduce the effect of vanilla malware too.
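What "minimal access" transfer software might look like on the receiving side can be sketched briefly (purely illustrative: the serial link is simulated with an in-memory stream, and the size cap and quarantine filename are made-up parameters): a hard size limit, writing only into a quarantine directory, and nothing ever marked executable or run.

```python
import io
import os
import tempfile

MAX_BYTES = 1 << 20  # illustrative hard cap on a single transfer

def receive(link, quarantine_dir):
    """Read one transfer from the link into a quarantine directory.
    Refuses oversized transfers and never makes the result executable."""
    data = link.read(MAX_BYTES + 1)
    if len(data) > MAX_BYTES:
        raise ValueError("transfer exceeds size cap")
    path = os.path.join(quarantine_dir, "incoming.bin")
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, 0o600)  # owner read/write only, never executable
    return path

# Simulate the serial link with an in-memory byte stream.
link = io.BytesIO(b"hello from the air gap")
with tempfile.TemporaryDirectory() as quarantine:
    saved = receive(link, quarantine)
    with open(saved, "rb") as f:
        assert f.read() == b"hello from the air gap"
```

The receiver never interprets or executes what it stores; inspection of the quarantined file is a separate, deliberate step.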

Daniel December 30, 2013 1:10 AM

In my view the hardware-intercepts angle is overplayed. Frankly, as the article notes, if the NSA wants you, they have no problem going into your home and installing spyware after you have got the computer. Doing the intercept makes their life easier, to be sure, but it hardly means much given their resources.

Fundamentally, I agree with Bruce’s comments from some months ago that if the NSA wants in, they are in. It is not realistic to expect one person to shut all the doors and since one open door is as good as all open doors better to save one’s energy and money for a good lawyer or a plane ticket out of the country.

In my view taking on the US Government these days is a game of whack-a-mole which the individual is sure to lose. In its own way it is security theater. People feel like they must do something, so they do random shit that makes them feel better but doesn’t actually accomplish anything. Worrying about hardware intercepts is one such pointless exercise.

BulkyAtoms December 30, 2013 2:47 AM

The NSA may be able to crack virtually anyone, but that doesn’t necessarily mean they can crack virtually everyone at the same time. They can’t send a black bag team into every house. When an advanced team was sent into an apartment in Germany, they succeeded in disabling three alarms but missed the fourth. They can’t even use zero day vulnerabilities on everyone, because the attacks would be logged and the vulnerabilities exposed and patched. The NSA “wants in” – to everything – but even with today’s pathetic security, they can’t get in – to everything.

The more secure the public’s systems are in general, the smaller portion of the public they can compromise on a regular basis. That would help keep them focused on who they should be focused on or help minimize the damage they can do focusing on the wrong people.

If a lot of people used encrypted email, TOR, Truecrypt, smartcards, and secure hardware, there would be a crowd to get lost in. If the public would demand clear laws and strong protections for whistle-blowers, it might help a lot. The situation is not hopeless.

Jimi December 30, 2013 2:49 AM

– Target’s Close Relationship to Government Needs to Be Watched
Target’s Forensic Services is who the FBI, Secret Service, BATF and others have turned to for help for two decades

– New algorithm finds you, even in untagged photos

– The World In An Eye

Clive Robinson December 30, 2013 6:56 AM

OFF Topic :

Some of the readers on this blog will remember that Bruce had some of his work “passed off” by others to get academic credit…

Well there may be a reason this is going on in India and other related places…

It would appear that some education establishments have a “two published papers to graduate” rule. That is, students have to get two of their papers published at an international conference…

Well to do this they are having to pay conferences quite high registration fees etc.

As the author of this article points out often it’s not just the registration fees either.

Basically it looks like this “two paper rule” has been exploited to create what is at best a faux market for conferences that claim to be of international status. Others would no doubt see this as a fraudulent practice to fleece students.

Clive Robinson December 30, 2013 7:21 AM

@ Nick P, et al,

It would appear that Microsoft has a new “container OS” in development, using a strongly type-safe extension to C# called M#.

Apparently the idea behind M# is that it will be the “lowest level language you will need” and thus OK for OS etc development…

Reading between the lines, whilst M# might be strongly type-safe, it will still have all the other security weaknesses that such low-level languages have (i.e. a lot).

As for the “library OS” idea, whilst it has merits it has a lot of issues. Anyway, it appears MS are serious about becoming a “devices & services” organisation, and this may well become a fundamental building block of the move.

name.withheld.for.obvious.reasons December 30, 2013 9:25 AM

@ Iain Moffat

Liked your response: a cogent set of useful assessments of “our” dilemma. I have a few things to add…

The issues with embedded systems have their basis or root cause in the following:

  1. Embedded OS cert initialization feature/function
  2. Level of entropy; base key(s) are shared (i.e. come from the same embedded image or binary)
  3. Selection of hash and key methods tends to favor performance over key strength or length
  4. Protocol implementation; embedded devices often use some high-level platform that leaves any “security implementation” vulnerable. I’ve seen a few 3G/4G devices that use Ajax as a front end… it’s not difficult to craft injections that produce unwanted results.
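Item 2 is worth dwelling on: when many devices generate keys from the same embedded image with little entropy, two of them can end up sharing an RSA prime, and a single GCD then factors both public moduli (this is the basis of the well-known 2012 “Mining your Ps and Qs” result). A toy illustration with small numbers:

```python
from math import gcd

# Toy illustration of why shared entropy across embedded images is
# fatal for RSA: if two devices ever pick the same prime, anyone can
# factor both moduli with a single GCD. (These values are tiny toys;
# real keys are hundreds of digits, but the GCD attack stays cheap.)
shared_p = 104717               # prime both devices happened to pick
q1, q2 = 104723, 104729         # device-specific primes
n1, n2 = shared_p * q1, shared_p * q2   # the two public moduli

g = gcd(n1, n2)
assert g == shared_p                     # shared factor falls out at once
assert n1 // g == q1 and n2 // g == q2   # both keys now fully factored
```

At scale the same check is run pairwise (or via a product tree) over every modulus harvested from the Internet, which is exactly what made the weak-entropy embedded devices stand out.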

On the “at rest” issue, the primary strength of at-rest methods is that not all data will be captured/trapped/sniffed/exfiltrated.

Your observation is both astute and timely when it comes to “security engineering”. Many on this blog have noted the lack of an end-to-end, or what I might term system-of-systems, approach to computational systems integrity, and how woefully inadequate the current state is, regardless of the threat.

When companies want to push their blameless EULAs instead of a reliable product, one has to wonder who is being served.

Add the NSA’s current subversion and downright criminal behavior, and it is clear that we have met the enemy, and he is “US”, as in the United States.

Anon-UV-Squirrel December 30, 2013 9:39 AM

Question: What is a good Ethernet logger? I’d like something much like tcpdump, but for the lower Ethernet layer. This is for making a transparent firewall/logger machine. EWAN devices need to be heard.

Iain Moffat December 30, 2013 1:46 PM

@ Name Witheld:

re. issue 2: I have certainly seen certs copied to avoid the pain of generating them (and not just on embedded systems). I hope any replacement for PKI makes it easier to do the right thing than the wrong thing (or at least no more difficult); to me the current tools are not quite fit for purpose in that respect. Embedded systems with real-world sensors ought (if they wait long enough) to have the advantage over a PC in finding external entropy, but I guess the need to work immediately, without an interactive bootstrap using a non-web user interface, is seen as a barrier. I do wonder if military-style key distribution via a fill device is more appropriate in some embedded environments where there is a single administrator (I am thinking of industrial or building management systems here, not home automation).

re. issue 4: I don’t think it’s just embedded systems where use of ready-made frameworks brings ready-made weaknesses. I have had bad experiences with WebLogic in particular in my earlier career (in that case for performance reasons, although the project died before it got near penetration testing, and I am sure a whole new set of problems would have surfaced). If I have to have a web UI in any of my personal projects, I usually try to write it as CGI code directly in C or Perl, so that any bugs beyond the host webserver’s are my own and an adversary at least has to start finding them from scratch…

I think the data-at-rest problem depends on where the data is at rest and the threat you are concerned with. In the case of an adversary who has access to the media without interference, more material is likely to make decryption easier. In the case of a remote attacker, I agree that there is some safety in volume, unless they have a lot of time and bandwidth to exfiltrate data or are able to run an effective known-plaintext attack locally on the target machine against specific files. I had the former scenario (“compromised cloud storage”) in mind, as the worst thing that could happen to a naive or trusting end user, when I wrote.



anonymous December 30, 2013 4:18 PM

Sorry, have to repeat it:

Lots of new details about specific programs, infected hardware, complicity(?) of the (usually American) businesses involved (Apple, Dell, many more); badBIOS is mentioned, hardware that implements RC6 and magically sends out UDP packets…

“NSA claims in their QUANTUMTHEORY documents that EVERY attempt to implant iOS will ALWAYS succeed.”


Gary December 30, 2013 4:26 PM

Shopping for Spy Gear: Catalog Advertises NSA Toolbox

When it comes to modern firewalls for corporate computer networks, the world’s second largest network equipment manufacturer doesn’t skimp on praising its own work. According to Juniper Networks’ online PR copy, the company’s products are “ideal” for protecting large companies and computing centers from unwanted access from outside. They claim the performance of the company’s special computers is “unmatched” and their firewalls are the “best-in-class.” Despite these assurances, though, there is one attacker none of these products can fend off — the United States’ National Security Agency.

anonymous December 30, 2013 4:53 PM

Makes me wonder if the NSA had their hands in electronic voting systems, rigging elections in the US….

Anura December 30, 2013 5:11 PM


They don’t need to rig elections, no matter who gets into office, their power will remain unchecked. You’ve got two choices, and they both support the current system. Sure, the Republicans might be against it now, but had Romney won the Republicans would be supporting it and the Democrats opposing it. Politicians have loyalties not convictions.

anonymous December 30, 2013 6:39 PM

One thing is certain, after all these revelations I’m certainly going to keep my antique cellphone and other obsolete hardware.

Anura, I certainly know that the US is not a real democracy and is governed by one party with two only slightly different factions. That’s why I have refrained from stepping foot on US soil for well over 15 years, and hope I never have to visit that cesspool again. Fortunately my current job doesn’t require me to travel there anymore.
They probably would deport me anyway. Don’t need that.

Unfortunately I need to travel to Tahiti/Polynesia in the foreseeable future; still trying to make up my mind regarding the exact location and duration. Three weeks minimum, more would be better. Have to make the Missus happy and want to spend a shitload of my money there instead of supporting other economies that continuously try to piss in my cornflakes (as soon as her last spoiled brat has left the house; certainly won’t buy a ticket for that braindead Facebook addict), but nearly all the flights go over US territory and have a stopover in L.A. I guess I have to search for flights via Tokyo. Haven’t been to Japan yet, or are there any flights via Beijing? Need to investigate.

Figureitout December 30, 2013 9:22 PM

If they did, there would be tyranny of the masses.
–Btw, that was me, not Clive, that said that. And I will gladly take “tyranny by the masses” over “tyranny by a plutocracy” any day. In fact, you could say “tyranny by the masses” is… democracy. I don’t agree w/ the “tyranny by the majority” argument James Madison made; that’s democracy and its imperfections. Thus, everyone (like even me, who tried to get involved in public affairs) will have no excuses when (not if) it fails and there’s more worthless bickering and doing nothing, then trying to take credit for doing something. So the decisions get left to psychos who then elevate themselves… screw it, I’m focusing on tech until it collapses, which may happen tomorrow for all I know…

Clive Robinson RE: M#
–I have little interest in the product, but a little joke popped up in my head. The “#” symbol is known as the “pound” sign in the US. So, pronouncing the language name, M# (M-pound, or Impound) could be referring to what that product will do to you lol.

–Nice to see you participating in the Squid threads, now twice since a while. Getting a little social now, aren’t we? 🙂

Nick P
A group really needs to get together to fund work for the defense side by people with the right background. That’s the main problem.
–Don’t get caught up in other things (I’m still looking for a “second opinion” on fabbing a secure chip, but it’s going to take me a while b/c I have other things I need/want to do now) and let this problem “float away”. This needs to be done by people who really care and want it. Also, you mention people w/ “the right background”; tell me a project you’ve worked on where all the people had perfect backgrounds, got along perfectly, etc. I say finding trustworthy people who won’t intentionally subvert the project is the main and biggest problem. How to verify that w/o resorting to TLA-cavity/background searches..?

So, use the Web as if you’re already compromised by nation states is the best advice I can give you.
–Dude (no offense, I realize how bad it is), that advice f*cking sucks. It amazes me just how bad it has gotten so quickly; then to compound it, we are all being forced to use chips w/ bluetooth stacks that are a backdoor to your computer.

Aunt Fred December 31, 2013 12:29 AM

“Makes me wonder if the NSA had their hands in electronic voting systems, rigging elections in the US….”

If your hope comes from a TV screen, you’re still stuck in the Matrix.

there will be no revolt
there will be no resistance
they are moving us to a future where
implanted chips will be the norm
they will read and record our thoughts
and perhaps they will physically move us, too
and since they’re working on removing memories
we won’t remember what happened when they ‘moved’ us.

even the bible says there will come a time when people
will seek death but won’t be able to find it …
because THEY won’t let you.

It’s all downhill, folks, they want your brain
without enlisting in any force
and they will take us by force
yesterday the chip in the head people were crazies
now we have the reality, they just have to introduce it

they will seduce us into this electronic tattoo, pill swallow and monitor health, implantable chip and even stronger, more hideous technology in the name of many things, safety, health, entertainment, g00g1e gl4ss is the beginning. Soon they will say, “WHY AREN’T YOU WEARING ONE?” and you’ll be forced to wear one like good old Wesley Crusher was.

freedom – it was good while it lasted.

Figureitout December 31, 2013 12:46 AM

Bruce on the motherboard:

You give the best hackers the best budget and you get these sorts of programs.

–Disagree, the best hackers make do w/ diddly squat. They grow out of the ground like the weeds we see all around us. No matter how much poison or how many times we pull them out, they always grow back in more inventive and annoying ways than we can believe.

MicroSD card hacking, goddamit…

Flash memory is really cheap. So cheap, in fact, that it’s too good to be true….The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale.

–This is stupid, we are giving up control of technology for the sake of “looking cool”. Idiots go surf your iphones that get bricked and have no clue; we need verifiable technology. The Open Hardware movement needs to continue further…
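The “sophisticated error correction” bunnie describes can be illustrated with a toy Hamming(7,4) code, the simplest single-error-correcting scheme and a miniature relative of what flash controllers actually do (real controllers use BCH or LDPC codes over much larger blocks; this sketch is purely illustrative):

```python
# Toy Hamming(7,4): encodes 4 data bits into 7 and corrects any single
# bit flip. Illustrative only; real flash ECC uses BCH/LDPC over large blocks.

def encode(d):  # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def decode(c):  # c: 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 = clean, else 1-based error position
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1              # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]       # recover d1, d2, d3, d4

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                              # simulate a bit error in "flash"
assert decode(word) == data               # single error corrected
```

Scale the same idea up to kilobyte pages with multi-bit correction and you have the arms race the quote describes: each process shrink raises the raw error rate, and the controller answers with a stronger code.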

Figureitout December 31, 2013 12:52 AM

I say “goddamit” RE: the MicroSD cards b/c that is what you use to boot up a Beaglebone Blk!! TI’s own tutorial!! Grrr!!!

Figureitout December 31, 2013 1:14 AM

Aunt Fred
–To look at it from a terribly “optimistic” perspective, there won’t be brains nor willing people to harvest anything from. I see a future of massive physical problems that require hard work, but everyone will be “too good” to roll up their sleeves and fix them b/c of “social pressures”; b/c if you do that then you’re immediately disqualified for better jobs b/c you’re a worker, while everyone else just sits on their ass and hopes for someone… Thus massive systems like sewage and electricity will shut down, and that’s when the fun will begin. I’ve already prepared my mind for such a hell.

Anura December 31, 2013 1:31 AM


I disagree; we won’t have a future of hard work, we will have a future where all menial jobs are automated with minimal human support required. We will become extremely efficient at production, but won’t be able to consume enough to keep people employed, and we will watch politicians sit around doing nothing as income declines, until the people revolt and we enter a new era of post-capitalism.

Figureitout December 31, 2013 1:40 AM

–It’s ok if we disagree (besides politicians doing nothing, that’s a given). What happens if the robots fail, though? Do robots fix the robots? Then the future is even worse: there is no use for a lot of people and they will die off from starvation. There won’t be revolt so long as police and the military impose tanks and night-vision against .22 rifles, shotguns, and pistols. So we all become helpless weaklings that can’t solve anything. One foreboding experience I had was with a worker at my house: he couldn’t figure out a problem with a light we had, and even though we paid him to find the problem he actually said to me, “Tell your dad to look at it, he’s handy.” Worthless. My dad and I looked at the problem he couldn’t figure out; it was a simple sh*t solder job on a transformer.

Anura December 31, 2013 2:51 AM


Robots fixing robots will probably happen to some extent, but it’s all solvable problems. The thing is, if we can produce, then in theory everyone should be able to eat; the problem is that the current system requires everyone to be employed, which means we must increase consumption along with efficiency. This brings us to an unfortunate fact: modern capitalism requires consumerism, as a drop in consumption means a rise in unemployment (which tends to cause a vicious cycle). However, over-consumption has environmental consequences, and there is also an income level in which people stop consuming and start saving, which means that it becomes harder and harder to keep growing the economy to keep jobs.

My main concern is that when AI hits the point where it can replace assembly line workers, taxi/truck drivers, clothing sewers, warehouse workers, shelf stockers, and cart pushers, then we will see rapidly increasing efficiency, beyond our ability to grow income levels (resulting in unemployment and deflation, which usually means recession or depression) and possibly even beyond our desire to consume. There are three main solutions to this problem that I see, with varying degrees of socialism:

1) Increase public schooling years, reduce retirement age, improve pensions. If you required 21 years of school instead of 13, you would produce a society with much higher skill sets, while reducing the retirement age and increasing public pension/social security would improve the quality of life of retirees and keep unemployment low. This is the closest to the modern system; consumerism is still required to some extent to keep jobs, but you can adjust unemployment by lowering the retirement age.

2) Unconditional basic income. If there are not enough jobs for everyone due to an increase in efficiency, instead of requiring an increase in consumption, allow people to choose between free time and income. If you don’t want to work, you don’t have to work and you still get a fairly livable wage, if you want to work, you will get a higher income. Note that under this system everyone would get the same check from the government, whether they worked or not, with no requirement to look for work. Those who do work get paid on top of their basic income, not instead of it. I suspect people in general would work fewer hours, with long periods off between jobs.

This is still a capitalist system, but without consumerism, as people don’t need a job to survive. Furthermore, it’s somewhat self-regulating if you fix it to a percent of GDP; if too few people work the payment would start to decline, encouraging more people to work. If it gets too bad, the percentage can be decreased. Conversely, if we are over-producing because of an increase in efficiency, you can increase the percentage to encourage people to work less and reduce production.
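The self-regulating mechanism above, fixing the payment to a percent of GDP, can be shown with a toy calculation (all numbers here are hypothetical, chosen only to illustrate the feedback loop):

```python
# Toy model of a GDP-linked basic income: the payment is a fixed share
# of GDP, so when fewer people work and GDP falls, the payment falls,
# nudging people back into work. All figures are hypothetical.

def basic_income(gdp, population, share=0.20):
    """Per-person annual payment as a fixed share of GDP."""
    return share * gdp / population

population = 300e6
print(basic_income(gdp=16e12, population=population))  # roughly $10,667/yr at full output
print(basic_income(gdp=12e12, population=population))  # roughly $8,000/yr if output shrinks
```

The adjustment knob is `share`: raise it to encourage less work when production outruns demand, lower it when too few people are working.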

I like this system the best, because I think it exerts the least amount of control over people (you still have taxation, but that will exist either way, and I don’t think it’s worse than telling people when they can retire, how much they need to consume, and how long to go to school). I’m very much anti-judgmentalism, and if there is no societal need for people to work, I don’t see a reason why we should force an ideology of hard work upon them, or call them lazy parasites or whatnot. Besides, you would all be freeloaders anyway, living primarily off the fruit of the robots’ labor; if you are one of the people with a job, the idea that you worked hard to earn it is merely something you tell yourself to make yourself feel special.

3) A variant of socialism where people are paid by their ability to work, not their actual work. Basically, if there aren’t enough jobs for programmers, but you could get hired as a programmer, you would be paid a programmer’s salary; if a position opened up (even one beneath your ability) and you were told to take it, you either take it or lose your salary. This works if the government owns the means of production.

You would not have to completely ban capitalism, people can still run businesses, but most jobs would, indeed, be for the government.

Eventually, society will hit a point of true post-scarcity. All three of those systems handle the transition; in scenario 1 retirement age approaches public school age, in scenarios 2 and 3 people are paid regardless of whether they work, so jobs would simply approach zero with minimal socioeconomic problems.

In a true post-scarcity society, nanotechnology would assemble anything you want, from food to complex electronics (even nerve gas!). At this point the only commodity is energy, and if cold fusion ever gets off the ground, we can have enough energy for everything, with minimal environmental damage. At that point, I think the only political/economic system that would make sense is pure anarcho-communism, especially if we are a spacefaring race by then. Without scarcity, money itself becomes obsolete, along with most jobs outside of research and creative fields (including software/electrical/civil engineering, as well as the arts), but I suspect that without jobs or a need for money, people would do them out of sheer boredom (people already voluntarily write free software; it’s not much of a stretch).

an observer December 31, 2013 3:56 AM

Just so you know; your footer still reminds us that your views aren’t those of the entity formerly known as British Telecom. 🙂

Nick P December 31, 2013 1:38 PM

@ Clive Robinson

Thanks for the link on M#. I like keeping up with MS Research tools because they’re often useful at some point. There are several projects that come to mind that might hint where they’re going.

  1. Verve OS. Much of the OS written in C#. Used typed assembly and theorem provers for the riskiest code. Gives us an idea of how they might do verification.
  2. COSMOS. OS written in C# and X# (a high level assembler). Gives us an idea of how a C# OS might work.
  3. Sanos. I found this one recently. Originally it targeted including just enough OS to run a Java application server; he now targets C. I could see it inspiring others to make a library-type OS that does just enough.
  4. MirageOS. The Ocaml container that runs on Xen. It’s a “Just enough OS” project that puts as much code in safe Ocaml as possible. Then the apps use Ocaml to leverage that. Result is a deployable, self-contained image of an OS + application.

So, much of what Microsoft wants to achieve was done in various ways. Now, they just have to put it together in a way that would have people ditch C++. That’s quite a challenge. I’ll be interested in seeing how they accomplish this.

@ Bryan

“For M#, look up BitC.”

BitC was interesting, but it’s mostly a dead project. It had plenty of activity when Shapiro ran it as part of his COYOTOS project. When he went to work for Microsoft, both projects stopped making much progress. I doubt BitC has a trustworthy or practical compiler yet, given how little support it has. (Last news post was 2010.)

Importance of layered solution to language problem

Personally, I think such projects try to do too much at once and then fail, quite like new OS developers. My advice for someone attempting a similar goal is to decompose the problem into different levels. The lowest level might be something like Typed Assembly, Cyclone, zero-runtime Ada, H-layer, etc. It’s for implementing the lowest-level operations behind interfaces that provide some safety. The next layer is a typesafe language optimized for performance, control of data structure memory layout, etc., with optional garbage collection. Ada, Modula, and certain ML/LISP languages with a low-level focus come to mind. It should be easy to build high-performance algorithms, state machines, etc. in this language, so much of the OS or platform libraries can use it. The next layer is a language that focuses on productivity and other modern concerns. Java, C#, Go, Python, Common LISP, Haskell, Ocaml, etc. all come to mind. They’re all good enough with correct coding practices.

The reason I advocate this is that it might work. If each layer is developed in sync with the others and developed well, then they will be easy to implement by volunteers, easy to inspect for issues, and easy to use to build real systems. Additionally, by limiting each layer to a certain aspect of the problem, one can focus the tools (e.g. static analysis, formal verification) that work best on that layer. Proving high-level user code and proving low-level kernel code that processes interrupts are so different that it’s a huge strain to reuse tools across these problems. However, there are specific tools for each job, and strategies for safe composition of them via good interfaces/linking. So, why is everyone trying the hardest approach (and mostly failing) when we can just decompose this problem like so many others?

We won’t get anywhere if people keep trying to build the perfect one-size-fits-all solution. The next group of industrious individuals needs to look at a solution, break it into pieces, find the best assurance activities for each piece, use them, and then supplement that with whole-system assurance activities. Focusing on ease of use, repeatability, and tool availability will also help. Then we will actually have something to use rather than develop further, maybe.

@ anonymous

“I’m certainly going to keep my antique cellphone”

Throw it away. Cellphones are insecure by design and there were catalog items in old days for compromising them. No cell phone or mobile device is trustworthy from targeted attack if it includes wireless functionality.

@ Figureitout

“Don’t get caught up in other things (I’m still looking for a “second opinion” on fabbing a secure chip but it’s going to take me a while b/c I have other things I need/want to do now) and let this problem “float away”. This needs to be done by people who really care and want it. Also, you mention people w/ “the right background”, tell me a project you’ve worked on that had all people w/ perfect backgrounds, got along perfectly, etc…I say trustworthy people that won’t intentionally subvert the project, is the main and biggest problem. How to verify that w/o resorting to TLA-cavity/background searches..?”

I’m still working on my part of it; lack of support and funding are the main obstacles, so meanwhile I continue my security investigations. Stopping all this is actually simple: put the NSA in their place from above and around them rather than fighting their tactics. Their tactics will always win as they’re smarter, bigger and better funded. So, one must change the NSA itself (traits or incentives) to win. Any solution would be initiated by the public or wealthy private parties motivated to act. I see no opportunities on that front yet. So, I’m still sidelined and focusing on my area of expertise (high assurance INFOSEC).

re radio

My main interest was comms that don’t rely on centralized Internet infrastructure, since I expect them to do surveillance on or blocking of the comms. I was thinking radio might help with a few things for the most critical communications:

  1. Communicate when main links are down using just radio and batteries/generator.
  2. Dodge the massive amount of attackers depending on the Internet.

  3. Leverage all the cheap, easier-to-inspect hardware for transport stack that is harder to do with Internet protocols.

  4. Building relay networks similar to meshes or anonymous remailers supporting decentralized messaging (maybe key or directory exchange).

  5. Be more fun than installing a router. 😉

So, it’s not a hobby I’m about to jump into or a security solution against the NSA. It’s more like an area I think has potential to reduce certain risks and better manage others. Might even help against the NSA. Might not. (shrugs) Also thought someone might read it, enjoy the idea, and start on a fun project. The mere concept of hitchhiking a ride on meteor trails to get a message a thousand miles with cheap equipment is… amazing.

Nick P December 31, 2013 3:09 PM

@ Iain Moffat

“I think the challenge is how to find an acceptable balance between the convenience of PKI in today’s web and the key security inherent in the various forms of key distribution practiced by state level operators. ”

No doubt.

“Where most private use and state or corporate use differs is that state or corporate users are hierarchical and can work with a single root of trust. For most private users a need for an external CA-like root of trust is at best an overhead and at worst a risk of key compromise. ”

To be clear, I’m not advocating a CA model, as it does specific, nearly worthless things. The centralized architectures I’m advocating both manage security-critical operations and provide strong assurances that they do so correctly. That last part is quite critical. The implementation is flexible: the whole thing might be processes on a user’s machine or a network of separate systems. Depending on the design, the systems might also be decentralized in at least a verifying way. Bitcoin and CVS access to OSS software are examples here.

“Most real world web users want something that “just works” and displays a padlock in their browser when they need the reassurance to part with financial data. They don’t want to be consciously involved in making it work.”

Which is why they’ll be continually compromised. The majority’s wishes and what works in practice are as different as night and day. It’s simply impossible to avoid being compromised by high-grade attackers if one (a) takes no part in the security of their activities, (b) trusts black boxes and (c) uses insecure-by-design architectures for operation. There’s not even a theoretical approach for securing that across the system lifecycle. Tradeoffs have to be made. Hence, my own tradeoff is that I don’t develop solutions for those users if a domestic or foreign TLA is in the threat profile: they’ll simply let attackers do an end run around my hard work, so why waste the time. I focus on those willing to put at least minimal effort in. (Meme: no free lunch)

That said, my proposal to copy govt approach doesn’t mean we have to leave majority without benefit. The principles in govt approach can be used to provide better than existing security with tunable convenience. Here’s some examples:

  1. Having management commands sent encrypted under a symmetric key over UDP might be far safer than most interactive protocols. This might work for distributing keys, passwords or configuration data. The original master secret is hand-entered into the system. A home router might come with a CD that installs software for safe communication with the router, a simple interface for the user, proper setup of master keys, site-specific mods to reduce one-size-fits-all attacks, etc. Much can be automated.
  2. In COMSEC devices, use separate chips (or nodes) for internal network, external, crypto and/or processing with careful interactions between them. This can limit the various covert channels, DMA attacks, etc. One can’t attack what is physically impossible to access via software instruction. A hardware version of Micro-SINA VPN would be very easy to implement. Network Pump’s approach to memory separation is also instructive here. The chips are getting so cheap that the cost would still be in SOHO range, probably.
  3. Trusted path might be implemented by having dedicated IO chip for user interface to hardware. A command or a physical switch decides where its UI focus is. Untrusted chips have no access to UI processor’s internal state to stop secret stealing and authorization spoofing. It’s one extra, easy step protected at lowest level.
  4. A dedicated physical port (like serial) for administrative commands or initializing can make administration very easy.
  5. Write protect of firmware and/or system image helps. Must be in a specific mode to apply an update whose integrity is verified by symmetric or asymmetric process. Another version is hard drives with built-in support for a read-only system partition, a user data partition, and so on. User can have it blocking changes, accepting changes, or push button restore to last clean state. A physical interaction over trusted path is used here (eg button or switch with LED’s). A form of this exists in OEM recovery solutions: recovery partition is created during installation and can be activated with a command on boot.

  6. A secure messaging service that uses a dedicated, portable device for messaging (optionally plugged into a PC). Without going into detail, it’s possible to provide highly assured messaging by combining a decentralized security protocol, a centralized service to make it more convenient over the Internet (eg NAT issues), an isolated transport stack, a more robust client stack, & possibly CPU enhancements for trusted software protection. It could be easier to use than Hushmail while providing much better security. Which could improve over time transparent to the user, like most of my designs. (“incremental assurance”) Put it in a sleek case too. 😉

  7. Transaction & capability machine approaches taught me to handle untrusted comms interfaces by separating client-side IO from the system’s operation. The idea is that DMA, tightly coupled IO, a constant stream of interrupts, various firmwares, and insecure legacy protocols cause problems, so it’s best to just eliminate them from the main system. The success of mainframes shows batch processing & time sharing can still run all sorts of business processes. It’s also way easier to secure than real-time, Internet-facing systems. So, a system might periodically ask the IO processor for incoming data, disable IO, process all data in a safe fashion, enable IO, and then output to the IO processor. IO might be an OpenBSD gateway speaking TCP/IP or HTTP to clients with an owner-defined protection level. The use cases for such a design include asynchronous remailers, key management systems, notaries, user/configuration mgmt, SCM, auditing, various roots of trust, voting, etc. The delay for IO doesn’t need to be huge: it might cycle every fraction of a second to every minute depending on use case. The user’s end must be changed from “server returned work finished” to “server accepted work & will notify your client when finished.”
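Item 1 in the list above can be sketched concretely. This is a minimal illustration, not a full design: the command names are hypothetical, a real deployment would also encrypt the payload, and the pre-shared key stands in for the hand-entered master secret.

```python
# Sketch of item 1: management commands protected by a pre-shared symmetric
# key. Each packet carries a monotonically increasing counter for replay
# protection plus an HMAC-SHA256 tag; the receiver silently drops anything
# that fails verification. The resulting bytes would go in a UDP datagram.
import hmac, hashlib, struct

MASTER_KEY = b"hand-entered-at-initial-setup"  # placeholder for the master secret

def make_packet(counter, command):
    body = struct.pack(">Q", counter) + command.encode()
    tag = hmac.new(MASTER_KEY, body, hashlib.sha256).digest()
    return body + tag                       # payload for a UDP datagram

def verify_packet(packet, last_counter):
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(MASTER_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                         # forged or corrupted packet
    counter = struct.unpack(">Q", body[:8])[0]
    if counter <= last_counter:
        return None                         # replayed packet
    return counter, body[8:].decode()

pkt = make_packet(1, "set-dns 10.0.0.53")   # hypothetical router command
assert verify_packet(pkt, last_counter=0) == (1, "set-dns 10.0.0.53")
assert verify_packet(pkt, last_counter=1) is None            # replay rejected
assert verify_packet(bytes([pkt[0] ^ 1]) + pkt[1:], 0) is None  # tampering rejected
```

Because the receiver only parses fixed-size fields and one MAC check before accepting anything, the attack surface is far smaller than an interactive login protocol’s.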

So, for all products, it comes down to using the strongest techniques/components, eliminating/reducing the riskiest, safeguarding user interactions with the system, and ensuring a secure initial state, a secure update process, and a trustworthy recovery process. Following best practices for installing, configuring and maintaining certain components also helps, as does a feedback mechanism from users for bugs and usability enhancements. Those targeting convenience tune for that. Those targeting private individuals with bigger worries must give them a certain amount of insight into and control over system operation. Where central mgmt and compliance are feasible, the IT/IS staff might implement/manage something closer to NSA’s methods. Many tradeoffs are possible, but the basic principles have always been the same. They’ve just been ignored by a security community that as a whole prefers band-aids and complexity. (Largely a market-driven problem, though.)

Bryan December 31, 2013 4:22 PM

On trust.

First we need to be able to know and trust the press, or at least know how they are biased. The statement of ownership for all news organizations must be resolved to living persons. No hiding behind a corporation name. All owners with greater than a 0.5% ownership interest, or who have an editorial, or managerial control in the organization must be listed. This includes board members. If the statement of ownership means a bunch of people who are part of a trust or mutual fund are listed, so be it.

Iain Moffat January 1, 2014 2:28 PM


Re. ‘ “Most real world web users want something that “just works” and displays a padlock in their browser when they need the reassurance to part with financial data. They don’t want to be consciously involved in making it work.” Which is why they’ll be continually compromised.’

That is what I was hinting at; I agree. At best we can design the operating system so they don’t have to take more risks than necessary – I think a key switch to physically block unexpected writes to specific critical areas of persistent storage is probably the limit that can be accepted in that market! You and I are probably prepared to work for a higher degree of assurance.

I think your “mainframe and I/O” concept and my concept of a front end I/O and back end application processor communicating over some kind of internal message-based interface for a secure machine are actually quite close – the question is whether to build the front end as a simple system with a small attack surface or to use a trusted open OS for I/O and build more of the security into the back end. I tend to see the main remote risk being via I/O ports so see the front end I/O as the critical part that needs to be minimised to reduce its attack surface.

Your mention of Mikro-SINA (the TU Dresden one with a ‘k’ I presume, not the Iranian hardware product) led to some interesting reading. As I see it their solution based on Nizza ( ) is in effect a software implementation of the dual machine architecture using a microkernel as the data passing component between trusted and untrusted linux instances, and a provider of isolated internal and external network devices. I think what we are both discussing is a re-partitioning of that single processor architecture over multiple CPUs and hardware.

There are I think different partitionings over 2 or 3 processors based on whether the goal is to run a normal computing experience with strong protection from external threats or to achieve maximum security against internal and external threats without regard to backward compatibility.

  1. A two processor solution collapsing front end and security on one processor, with a fairly normal PC back end. This is what I had in mind before reading the Nizza paper. It is optimised for protection from external threats and assumes that the front end is better placed to identify signs of intrusion in traffic from the back end if it can see the plaintext.
  2. A two processor solution with a minimised front end managing storage and networking for a secure back end which passes only encrypted data out to the front end. I think this is what you had in mind ? For me this is optimised for protection of data originating in the back end.
  3. A three processor implementation of the Nizza concept mapping the untrusted (front end), security, and trusted (back end) roles to 3 separate processors. This makes the security processor much simpler, as it doesn’t need to support the external I/O device drivers for the untrusted OS. There is the potential for the security processor to run something much smaller than Linux or *BSD, entirely from real ROM with only enough persistent storage for keys, since it only really needs to encrypt user data passing between the front and back ends and act as a reference monitor independent of either the front or back end processors. Having separate user input and output for the security processor is necessary to ensure that it does not depend on a (possibly compromised) front end or back end to report suspicious traffic or receive keys. The back end computer would boot operating software from local secure media and access encrypted persistent storage via the secure processor and front end.

Options 1 and 2 could be prototyped using two PCs communicating using ethernet over a crossover cable or else SLIP or PPP over a fast serial link – the latter is possibly harder for a 3rd party to subvert without tailored code. The real difference is which PC runs the crypto functions. Option 3 could be prototyped using an industrial single board computer with its own (textmode) LCD and keypad for crypto configuration and security alerts connected between the two PCs.

I don’t have the time spare to work on (3) but I think I may have a go at (2) using a CD-booted laptop (Knoppix) as the secure back end and some old PC with two network ports and a big disk as the front end. I have verified that I can use encfs on top of sshfs in Knoppix to mount physical storage on the front end which is encrypted at the back end (strictly I could use any network file system instead of sshfs but it is convenient and securely authenticated). Similarly I can readily use ssh -D to proxy web server traffic via an SSH connection over the front end. What remains is to prove a secure messaging / email solution and provide a secure persistent key and configuration store (clearly not an SD card based on your last post …) for the “back end” to use …

Regarding the radio sub-topic that’s where I come from (a short wave listener since age 9 and a radio amateur since 1986) and to me the Internet has now become much like HF radio (worldwide coverage with the properties that everyone can listen, anyone may be listening, and someone will be listening).

The positive side of radio is that you don’t depend on any infrastructure beside the transmitter and receiver and they really are buildable by an end user (although you might have to learn glass blowing – see ) – although that is going to an extreme! In practice there are plenty of malware-free radios from the 1960s and 1970s to be had by those with amateur radio licenses ( e.g. ).

So for me the Internet today is no more hazardous a medium than a radio link 30 years ago, even knowing all that has emerged during 2013. Where internet based communication today fails is that the end user terminals are vulnerable to remote exploitation in a way that a radio and morse key never were. Some of this is inherent in the multi-functional nature of modern PCs with the same device being used for data creation, transmission and viewing, some of this is an avoidable consequence of the PC hardware and software architecture which nearly everyone uses. The fact that personal computing is so standardised in the form of PC, iOS and Android platforms makes large scale exploitation by remote parties even easier.

Best wishes for 2014


Figureitout January 1, 2014 2:44 PM

–You’re really thinking far out, what a different world that would be; at least more positive than mine…

Nick P
–Yeah, you can’t fight their tactics using their tactics b/c they give themselves legal immunity and it’s their job to wait for a target to go to work or leave an area physically unsecured. Also, they can have all the embarrassing blackmail they want on me, but framing me, hell no. Want to talk about embarrassing? How about not even knowing how many documents one of your own employees walked out with? Then successful social engineering… now all their employees can look forward to not being trusted and being subject to interrogations, fun!

Anyway onto better topics…Glad to hear you haven’t completely given up. If this project happens, we need someone like you who’s methodical and organized. Maybe it’s just useful brainstorming and collaborative research. Hopefully I can eventually get a job w/ a large company like TI/Microchip/ST and get some access to the fabs and look for possible cheap runs if we get that far. Hopefully the market isn’t more cutthroat and monopolized by then…

I’m curious on your thoughts about intercepting hardware in the mail (more breaking laws) through a site like ebay… My grandma, hanging onto life by a thread, just had our christmas card sent to her all torn up… This is why I prefer going to stores: then all the hardware in stock would need to be compromised, not just my shipment. Or maybe a new service could pop up solely for assured delivery (it would probably get infiltrated, and there are still trust issues).

RE: Radio
–Yeah, that’s one of the reasons I like it. No memory (you can still have it for assured delivery or to prevent resends though), typically easy to inspect/resolve issues (digital radios maybe not so much); just a comms device. Maybe use a reliable/practical channel like phone, then send messages on preset freqs and modes; but this gives away times and name metadata. Prefer a mobile station you can set up quickly and easily over a static one, for obvious reasons.

Clive Robinson January 1, 2014 4:32 PM

@ Nick P,

Whilst I agree with the layered software approach, I take a broader view and have two basic assumptions about the computing stack and those developing for it,

1, There will always be a lower layer you either can not secure or can not trust.

2, The majority of developers do not have the training, and thus lack the ability, to develop securely.

Further, due to the “known knowns, known unknowns & unknown unknowns” issue, all systems will have “future vulnerabilities” that may not be “future” to some (i.e. zero days etc).

On the face of it, whilst this is a realistic view of current consumer-level computer security, it is somewhat pessimistic in that it paints a bleak picture that appears to offer little hope.

However, I’m a great believer in “divide and conquer”, not just to manage complexity but to minimise resource requirements and simplify signatures etc at any given node in a system, and I can see how this can be used to mitigate many issues from the “lower layers”.

As for developers at the higher levels, well, let’s be blunt: even if they have the desire to code securely, the current marketing/management driven model positively discourages it for a whole host of reasons.

Thus I see one type of development methodology and practice for “below kernel” firmware, another for kernel-level development, and so on up the stack to application code. It’s also fairly evident that the sheer volume of development forms an inverse pyramid, with by far the most development at the upper application layers and the least at below-kernel development.

Conversely, the need for secure development is greater at the lower development levels.

Likewise, if you look at software close to or part of the hardware (microcode & assembler), this is the most difficult to make secure, simply because the developer is working with almost logic-level instructions. Whereas an application developer is using complex library functions, the internals of which they need little or no knowledge of, and which can thus more readily be made secure.

Further, most studies still show errors and omissions as being proportional to “the number of lines of code”, which suggests that at any level of development, the higher the level of the language/tool, the more productive the developer is going to be, not just in development time but in time saved finding and rectifying errors and omissions (a win-win situation for management). More importantly, the types of errors and omissions are less insidious and thus less harmful, which gives a correspondingly greater security margin.

As I’ve observed in the past, applets and shell scripting, whilst not particularly efficient in use of hardware, are in general very high level, thus fast to develop and generally low in faults and security issues.

Thus if application level developers plumb together well designed applets, they will produce more secure code as a consequence, without having to be any more aware of security issues than they currently are.

The applets themselves are developed using a high level language with strong security supporting internals, backed by appropriate formal methods. The developers working at this level have a commensurately higher level of security awareness.

Likewise, the high level language and tools used by the applet developers are developed using a lower level language, which is less secure simply because it needs to be more flexible at a lower level. The developers working at this level need to be very security aware and need the support of formal methods.

As developers work at lower levels, their knowledge of security really does need to be an integral part of their abilities. However, the number of developers required at this level is a very small fraction of those working at the application scripting level.

The problem with discussing this is keeping comments concise, so if the above looks a little light on details, please remember the idea is to give a high level view of the issues and how they can realistically be mitigated, without getting bogged down in lots of detail.

Nick P January 1, 2014 6:17 PM

@ Iain

“I think your “mainframe and I/O” concept and my concept of a front end I/O and back end application processor communicating over some kind of internal message-based interface for a secure machine are actually quite close ”

More than you know: “your” idea is one I’ve posted here before quite a few times. 😉 It was in my transaction appliance for secure banking. The idea is that the protocols and processing are inherently risky. So, do them in a dedicated device that then sends commands/requests in a simpler, safer way over safer hardware to the trusted device. Certain operations also speed up, as the front end can act as a filter against all low capability attackers, whereby the main device never has to check such requests. If the comms system isn’t trusted, then the main system can do checks as well but will have an easier time of it. (Not to mention it might use secure hardware/software techniques.)

Side note: servers also do a form of this called TCP/IP offloading for performance boost.

“the TU Dresden one with a ‘k’ I presume, not the Iranian hardware product”

Wow. Didn’t know about them. I better get the spelling right next time. Should I tell German intelligence that they named their IPSec variant after an Iranian hardware company? Lulz.

” I think what we are both discussing is a re-partitioning of that single processor architecture over multiple CPUs and hardware. ”

Close. I was talking about two things: what you just said in that quote, and abstract strategies. The abstract strategy in Mikro-SINA is that they extracted security critical processing into a dedicated component, Viaduct, with these properties: very low TCB, straightforward interface, internal state protected by microkernel. At that point, they could incrementally improve their system assurance by reducing the TCB/complexity of the transport stacks. If I recall, they went from a full user-mode Linux to a barebones stack on the internal network. So, I think how they systematically minimized risk and decomposed the system is illustrative. I think a similar approach can work with dedicated hardware too. So, onto that…

Regarding your three categories, they each have merit in different forms. My thoughts on each one.

  1. “Two processor architecture with front end doing security & backend being a PC.”

Believe it or not, you’ve just described the original government standard for network security (Red Book). The overall approach was putting a very secure device between the computer and network. The device would do packet labeling, encryption, access control, filtering, etc. Depending on vendor, it might be a PCI card or a dedicated server (eg guard). DiamondTEK LAN, GEMSOS GNTP, and Boeing SNS Server (“MLS LAN”) are examples. Boeing’s OASIS contribution also had an “Embedded Firewall” on a PCI card that did stuff like this.

So, what are the benefits? Well, obviously it will be pretty good at dealing with transport attacks and can be uniquely hardened. That such devices achieved A1-class assurance in the past gives us hope. A little known benefit is that the secure front end can be modified to store a trustworthy image of backend PC and do recoveries by powering it off to force trusted boot. It can also send configuration/administrative commands. It can also be configured to store or pass along audit data from the backend so it can’t be deleted upon compromise. Far as IO, dedicated cards with trustworthy hardware or non-DMA hardware can be used between the systems. Naturally, they’d support TCP/IP or something useful over that physical medium. Probably the best benefit of such an architecture is it can be done plug and play with transparent, always on protection.

Drawbacks? The main drawback is that it can’t stop application level attacks. This should be assumed, anyway, because we know PC architecture provides attackers nearly limitless opportunities. Security on the front end, unless it’s an application-layer guard, will not stop application layer attacks targeting the OS, apps, comms stack, etc. The application might be attacked to leak data through both overt and covert channels. The backend system might also be attacked in a first stage, with a second stage using it to target a vulnerability in the front end. This is easier if the front end is something like a NIX that they undoubtedly have a 0-day for. So, against most bad guys the architecture is quite effective, but it has inherent risks against top TLA’s due to the PC.

  2. “A two processor solution with a minimised front end managing storage and networking for a secure back end which passes only encrypted data out to the front end. I think this is what you had in mind? For me this is optimised for protection of data originating in the back end.”

This is conceptually similar to how my mainframe-inspired scheme would work. The concept is separating almost all risky IO from the CPU/memory doing trusted computation. Matter of fact, the backend pretty much just does computation and distrusts other components. Hence, it does input validation on any incoming data from the IO system and uses crypto for privacy + integrity for anything leaving. Its security-critical internal state must be protected from other components. You were wise to notice what it’s best for. I used this architecture in designs for root of trust services, software build systems, key management, serving static authenticated content, asynchronous guards, and high integrity datastores. It used OpenBSD, Linux & RTOS’s then. The mainframe/transaction processing stuff is a recent development for me, as a response to both IO risks and concurrency issues.

The good news is that IO can be isolated and managed. Mainframes also show that offloading most IO, not just certain protocol processing, can result in faster and more reliable systems. Proper memory layout and MMU’s/IOMMU’s can do most of the heavy lifting on isolation if designed right. There’s also huge covert timing channels that get knocked out in one fell swoop. The final benefit is that the computing node can use hardware or software protection methods that “would be secure except IO and interrupts break our model” (common gripe in academia). Precedent for high security is that some capability-based machines did it this way. Enough said, right?

This architecture has drawbacks. Although mainframes today are highly interactive, it took them forever to pull that off. I bet whatever we design wouldn’t work for low-latency or real-time applications without seriously good engineering for that. This will probably make it incompatible with the modern Web. (sigh) If the user interacts with it directly, its primitive command shell will make for an uglier, less convenient experience. I’ve seen enough highly usable terminal/text apps that I don’t find this drawback hopeless. Just hard to market. The last drawback is that this design might take custom hardware, or at least plenty of extra hardware. It might be larger, use more power and cost more than a fully integrated hardware/software system.

  3. “A three processor implementation of the Nizza concept mapping the untrusted (front end), security, and trusted (back end) roles to 3 separate processors.”

My favorite one. It’s very flexible, as it can actually implement Nos. 1 and 2. The thing I like most about it is that the center component can be highly secure. The center part might be one simple chip or a whole set of them. There just needs to be a way for each side to communicate with it. It must also be able to implement the security functionality. It should also, in theory, reduce risks. Probably the simplest example of this is the NRL Pump.

The Pump is a data diode that supports TCP. Most diodes are strictly one way so TCP with its ACK’s isn’t an option. The Pump is one-way with an exception for allowing ACK’s. This creates a timing channel but that’s not so bad as developers reinventing TCP is riskier. The architecture has three pieces: internal network, security enforcing middle, external network. Each part has its own physical memory. The extra memory mainly prevents one side from having direct access to another. Internally, data transmission uses safer method. The end result is a device that does its job with highly assured protection.

Seeing how this works tells us the middle device in a VPN-type device needs a CPU, its own untouchable memory, an IO interface to each network, and buffers for messages to/from each. It might be scheduled similarly to a DSP, where it works on each end for a fixed amount of time. Each interval it pulls in data over IO, processes it, and (if needed) sends output somewhere; it just keeps doing this. The devices on each side probably also have transport stacks configured to act as if the middle node isn’t really real-time, to reduce dropped packets. There might be a step in there for admin tasks. The result is that this computer can be very simple. It can be customized even down to the ISA level to securely do its job & nothing more. It’s hard to see how the security critical component can get more secure. It might also have a “maintenance” or “admin” mode where it is configured over a dedicated interface by an admin/user.
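A toy, single-threaded Python sketch of that fixed-timeslice middle node, in the spirit of the Pump described above. Everything here (the class name, the `budget` parameter, the queue interfaces) is my own illustration, not part of the NRL Pump design; it only models the one-way data flow with ACK-only back-traffic:

```python
from collections import deque

class PumpNode:
    """Toy model of the security-enforcing middle node: data flows
    low -> high only; the only traffic allowed back down is a bare
    ACK token per delivered message.  Each call to run_slice() is one
    fixed, DSP-style timeslice."""

    def __init__(self, budget=4):
        self.budget = budget      # messages moved per timeslice
        self.buffer = deque()     # the middle node's own private memory

    def run_slice(self, low_side, high_side, ack_queue):
        # Phase 1: ingest from the low (untrusted) network into
        # the node's private buffer, up to the timeslice budget.
        for _ in range(self.budget):
            if not low_side:
                break
            self.buffer.append(low_side.popleft())
        # Phase 2: deliver buffered data to the high side; emit
        # content-free ACKs so the low side's TCP-like sender can
        # keep going without any payload flowing back down.
        for _ in range(self.budget):
            if not self.buffer:
                break
            msg = self.buffer.popleft()
            high_side.append(msg)
            ack_queue.append("ACK")   # no payload may flow back down
```

The private `buffer` stands in for the middle node's untouchable memory: neither side ever addresses the other's storage directly, which is the core isolation property of the three-piece architecture.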

“Options 1 and 2 could be prototyped using two PCs communicating using ethernet over a crossover cable or else SLIP or PPP over a fast serial link – the latter is possibly harder for a 3rd party to subvert without tailored code. ”

Crossover cable is easiest. It’s what I use by default, as setup is a simple checklist of steps. For the other method, one thing I did in one design was use ATA in a non-DMA mode. Cheap embedded devices often support ATA/IDE by default. ATA PIO mode 2 can send almost as much as 100Mbps Ethernet, with PIO mode 4 sending more. Cycle times are in the nanosecond range. I basically used it like a serial port. A side benefit is that people worried about NSA subversion can use old hardware to eliminate DMA risk. (Still a risk that the ATA controller gets attacked, but one can always roll their own ATA card.)

“Option 3 could be prototyped using an industrial single board computer with its own (textmode) LCD and keypad for crypto configuration and security alerts connected between the two PCs. ”

Yes. Many SBC’s have crypto acceleration now, too. If NSA isn’t in the threat model, then both Freescale and VIA have chips with excellent crypto, performance, power and cost tradeoffs. If NSA is in the threat model, a foreign equivalent should be obtained, or one can just use a DSP/GPU. I’ve seen good acceleration done on the latter for way cheaper than commercial products charge.

“I don’t have the time spare to work on (3)”

Unfortunate as it’s the best against TLA’s. You could always do it incrementally starting at the software level on a generic hardware base.

“I don’t have the time spare to work on (3) but I think I may have a go at (2) using a CD-booted laptop (Knoppix) as the secure back end and some old PC with two network ports and a big disk as the front end. I have verified that I can use encfs on top of sshfs in Knoppix to mount physical storage on the front end which is encrypted at the back end (strictly I could use any network file system instead of sshfs but it is convenient and securely authenticated). Similarly I can readily use
ssh -D to proxy Web Server traffic via an SSH connection over the front end. What remains is to prove a secure messaging / email solution and provide a secure persistent key and configuration store (clearly not an SD card based on your last post …) for the “back end” to use …”

If this device is only used for secure messaging, then this is an interesting start. Far as I know, NSA didn’t subvert SSH like they did SSL and IPSec, so that’s a good choice. If you’re using Knoppix, though, I’m not sure what your eventual security is going to be like. Your TCB is Linux, which is something many TLA’s are capable of hitting easily. Before I do any real commenting on your design, could you explain how you intend for it to be used? Will it run a Web browser, server or dedicated client-server app? Will your setup be on one or both parties’ ends? Very importantly, why are you motivated to choose option 2?

“So for me the Internet today is no more hazardous a medium than a radio link 30 years ago, even knowing all that has emerged during 2013. ”

That’s an interesting (and accurate) perspective. Perhaps Internet users should adopt such a view.

Iain Moffat January 1, 2014 7:58 PM


To answer your questions:

“Very importantly, why are you motivated to choose option 2?” – mainly because I can see an obvious way to do it without a lot of work!

“If this device is only used for secure messaging, then this is an interesting start” – in the short term I want a more secure home computing platform for messaging and web. If something better than POP/IMAP/SMTP does emerge from the NSA revelations and lavabit/silent circle/darkmail, it will need a secure endpoint to run it and store received messages if there is to be any meaningful improvement over a standard multirole single CPU PC. I’m not really happy with the “ssh -D” approach to the web browsing half because the browser still runs on the trusted CPU. I need to separate the display and execution between the two ends next (noting the recent CCC paper on X as an additional issue).

“Your TCB is Linux which is something many TLA’s are capable of hitting easily.” – I know Knoppix is less than ideal for that, both for unix security reasons (no barrier to becoming root “as shipped”) and because it incorporates a lot of application software. I just wanted to prove the concept of separating I/O and persistent storage from processing of plaintext on different CPUs today, and Knoppix is something I know well and have to hand. A proper solution will need smaller, robust distributions – not necessarily of Linux – for front and back ends, although I will likely stick with CD booting as the main solution for preventing and cleaning persistent threats (although the BIOS remains an issue).

I have had two near misses with fairly ordinary persistent malware (one on a linux server injected via Exim and one on a Windows desktop) recently and feel motivated to do something better now that the NSA leaks will have inspired the “private sector” to raise their game to a similar level. I’d rather spend a week building something new than a week getting back to where I came from using off site backups “next time” 😉



Iain Moffat January 1, 2014 8:07 PM

@Figureitout: Even “steam radio” did memory for the spooks … – an amateur radio friend of mine had one to use with a UK/PRC-316 – as far as I know he had not finished reconstructing the receiver, tape recorder and display to complete the chain before he passed away unfortunately 🙁

Nick P January 1, 2014 9:21 PM

@ Iain

Gotcha. Well, want a shortcut to get around most of those hacks that use malware? Use a non-x86 chip. That alone will stop the vast majority of them. Linux and BSD are still maintained on PPC, SPARC, MIPS, etc. China’s platform is Loongson MIPS + a BSD-based OS. Linux has already been ported to that one, with open firmware too. (Richard Stallman uses it.)

Now, anyone targeting you specifically with expertise can get around that with no trouble. The point is to stop all the others with hardly any work. The simpler architecture (and/or open firmwares) might also help in your design goal compared to the bug-ridden mess that is Intel architecture[1]. Finally, the new BIOS attacks shouldn’t work due to non-mainstream firmware.

Note: I came up with this when Apple abandoned PPC, as I figured all malware would target x86. I figured the powermacs would be usable due to volunteer OSS software and there would be plenty of used hardware for years to come. Proved out & still is the case: I bought one a short time ago with a 1GHz processor and plenty of RAM for under $100. 🙂 Old SGI’s, Sparcs and Alphas go for $100-1000, although they’ve got plenty of muscle.

Another Tip

Look up the SVA and SVAOS work. It provides many protections with legacy compatibility and minimal kernel modification for x86 Linux, and already runs the Linux kernel. That, with a simple mandatory access control scheme like SMACK, might help you prevent and isolate a lot of trouble with little to no work besides downloading/compiling. Maybe.

[1] The NSA catalog Bruce linked to has numerous attacks against Intel chips and compatible controllers. SMM mode is mentioned specifically. That’s what Invisible Things Lab attacked IIRC. So, dodging Intel (and recent American chips) might help you against NSA. I like to think of their catalog as a HOWTO for avoiding their existing attacks. 😉

Bryan January 2, 2014 11:29 AM

“Considering the enormous value of the information he has revealed, and the abuses he has exposed, Mr. Snowden deserves better than a life of permanent exile, fear and flight. He may have committed a crime to do so, but he has done his country a great service,” the New York Times’ editorial board wrote.


“When someone reveals that government officials have routinely and deliberately broken the law, that person should not face life in prison at the hands of the same government,” the newspaper wrote.

From the News Daily on New York Times Opinion, with a link to the New York Times Opinion piece: “Edward Snowden, Whistle-Blower”, by The Editorial Board.

Figureitout January 2, 2014 4:10 PM

Iain Moffat
–Cool link, yeah I’m pretty curious how some of these Morse decoders work besides FFT (a free one I got off the internet sucks, just “eeet eet t ee te…” but I never fed it straight audio from the radio). Would also like to hear what 300 wpm sounds like lol.

And yeah, if the whole supply chain is poisoned then no radio is safe. Still, those are tapes, which can be destroyed more easily than other memories (still got the other antennas, satellites and Air Force jets to think about too).

Clive Robinson January 3, 2014 6:31 AM

@ Figureitout,

    I’m pretty curious how some of these Morse decoders work besides FFT

Which bits?

The good ones will pick a signal out of the noise that most of us cannot hear.

The old fashioned way was obviously to use a very narrow band filter (in my “hens teeth” junk box I’ve got some eight pole XTAL filters with a 150Hz bandwidth). However there are a few problems with them in use: firstly they limit the keying rate, secondly they require high stability VFO’s, thirdly they require either highly accurate VFO’s or long “search times”, and fourthly they don’t work too well if the signal has doppler issues.

Whilst modern synth technology such as a Direct Digital Frequency Synthesiser (DDFS) with a GPS or Atomic Lamp Frequency Standard [1] can resolve the VFO issues, it does not resolve the search or doppler issues.

There are various search strategies that humans use which computers can do just as well if not better, and a simple extension to them allows the receiver to be “locked” to the transmitter and thus, like the loop in frequency / phase locked loops, follow it up and down the band to remove doppler or transmitter instability.

Search strategies come in many different forms, but they usually all involve signal and pattern recognition / differentiation.

The real problem is that you are not just pulling a signal from noise, but from other, often stronger, signals, fading, and joys such as the “Luxembourg effect”, impulse noise, etc., as well as noise.

To do this in an efficient manner normally means you need at least two variable bandwidth filters: the first being the roofing or out-of-band rejection filter, the second being the signal selection filter (those used to playing with older, more manual spectrum analysers will be familiar with this two filter approach). These are adjusted to match the search sweep rate and expected signal modulation (keying) bandwidth.

If you have the big bucks, you use a wideband high dynamic range I/Q A-D converter and run various signal processing algorithms in banks of DSP chips, using multiple FFT algorithms to do various tricks. You can get code to do this from various Software Defined Radio sites and the sites of various chip manufacturers such as Analog Devices and Intel.

However, whilst these algorithms will find “signals”, you still have to differentiate them into candidates and garbage. You then have to test the candidates further into probables and more garbage, and so on. To do this you need to examine the signal’s “impressed information”, or modulation.

However, you asked about a non-FFT approach. The old school hardware way to recognise signals was “in band / out of band” ratios fed into short, medium and long time integrators, using the output of these to make the search/hold choice on incrementing the VFO. To see this sort of circuitry in “textbooks”, go and hunt up Direct Sequence Spread Spectrum (DSSS) and Code Division Multiple Access (CDMA) signal acquisition techniques; whilst not all of it is relevant, reading about them will give you a feel for the problem domain.

Oh, and remember: when it comes to “signal recognisers” they all boil down to signal band filters, detectors and integrators, where the integrator is really a baseband / envelope narrow band filter. Thus adjusting the signal band filter has similar effects to adjusting the integrator time constant, and there are trade offs to be made.

Further, a “software integrator” is simply “add the last X sample values and divide the sum by X”, and there is a short cut way to do this that can be very efficiently expanded to do many other things, and for quite a few things it is more efficient than using FFTs. What you do is employ a circular buffer that is long enough to store all X samples of your slowest integrator (narrowest filter). When a new sample comes in, store it as NEW, increment the pointer mod X, read the buffer value there and store it as OLD, then write NEW into the buffer. Then subtract OLD from SUM and add NEW to SUM, copy SUM into OUT and divide it by X (if you want to scale it, but often it’s not required). To initialise the buffer and sum very simply, set them and OLD to the AD zero value; less simply and better is, on the first sample, to write NEW into all the buffer values and into OLD and SUM, then multiply SUM by X. There are better initialisation techniques, but usually they are not worth the complexity involved; further, making X a power of two like 2^8 makes things considerably faster, as does aligning the buffer in memory.

Now you have the long integrator set up, you can use it to create the other, shorter integrators with simple pointer offset techniques. Let’s say the short integrator has X as 8, not 256: its effective OLD is at pointer – 8 in the buffer. So you need separate OLD, SUM, OUT and X values, so label them OLD0/1/2/etc, where 0 is for the long integrator; storing these in an appropriately designed structure in memory will make things faster. So to update OUT1, read buffer[pointer – X1] into OLD1, subtract it from SUM1, add NEW to SUM1, copy to OUT1 and scale by X1 if required.
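The two paragraphs above can be sketched in a few lines of Python. This is a minimal illustration using the simple zero-initialisation; the class and variable names (`SlidingIntegrators`, `lengths`, etc.) are mine, not from the original:

```python
class SlidingIntegrators:
    """Several moving-average integrators of different lengths (X values)
    sharing one circular buffer sized for the longest, slowest integrator.
    Shorter integrators find their own OLD sample at a fixed pointer
    offset behind the write position, exactly as described above."""

    def __init__(self, lengths):
        self.x_max = max(lengths)          # longest X sets the buffer size
        self.buf = [0.0] * self.x_max      # shared circular buffer, zeroed
        self.ptr = 0                       # write position (mod x_max)
        self.lengths = list(lengths)       # one X per integrator
        self.sums = [0.0] * len(lengths)   # one running SUM per integrator

    def update(self, new):
        """Feed one NEW sample; return the scaled OUT of every integrator."""
        outs = []
        for i, x in enumerate(self.lengths):
            # OLD for a length-x window is the sample x steps back.
            old = self.buf[(self.ptr - x) % self.x_max]
            self.sums[i] += new - old      # SUM = SUM - OLD + NEW
            outs.append(self.sums[i] / x)  # OUT = SUM / X (the scaling step)
        self.buf[self.ptr] = new           # overwrite the oldest sample
        self.ptr = (self.ptr + 1) % self.x_max
        return outs
```

Each `update` costs a couple of operations per integrator regardless of X, which is the efficiency win over recomputing the window sum (or running an FFT) every sample.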

Circular buffers can also be used as “quenched resonators”, which can be used to do many tricks, including acting as phase / frequency detectors for loop locking and modulation detection. If the circular buffer is initialised to zero and a suitably scaled down input is used, you store one cycle’s length of values; thus X determines the resonant frequency. This time, instead of replacing buffer values with NEW, you add NEW to them. Over several cycles the values will, if a signal is present at that frequency, build up a sine wave which becomes greater in value and less noisy. Close frequencies will likewise initially build up, BUT then start to decrease, and at a time equal to the reciprocal of the frequency difference will return close to zero (ie you will have a residue of noise and other signals). If you pick a suitable time interval, you will know not only whether you have a signal present but also its phase; if it’s below your acceptance threshold, you simply quench it by resetting all the buffer values to zero.
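A rough Python sketch of the quenched resonator idea, assuming a real-valued (not I/Q) input and using the peak of the accumulated waveform as a crude amplitude estimate; the names are illustrative:

```python
import math

class QuenchedResonator:
    """Circular buffer used as a resonator: samples are ADDED into a
    buffer one cycle long, so X (the buffer length) sets the resonant
    frequency at sample_rate / X.  An on-frequency tone accumulates
    coherently; off-frequency signals and noise wash out over time."""

    def __init__(self, x):
        self.buf = [0.0] * x       # initialised to zero, as described
        self.ptr = 0

    def feed(self, sample):
        self.buf[self.ptr] += sample   # add NEW, don't replace
        self.ptr = (self.ptr + 1) % len(self.buf)

    def magnitude(self):
        """Crude amplitude estimate: peak of the accumulated waveform.
        (The buffer also holds the phase, readable from where the peak
        sits, though this sketch doesn't extract it.)"""
        return max(abs(v) for v in self.buf)

    def quench(self):
        """Reset the resonator when it falls below the acceptance
        threshold, or after each test interval."""
        for i in range(len(self.buf)):
            self.buf[i] = 0.0
```

Feeding ten cycles of an on-frequency tone roughly multiplies its amplitude by ten, while a tone offset in frequency largely cancels itself out over the same interval, which is what makes the build-up-then-quench cycle usable as a detector.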

I could go on at length as to how this, plus simple DFTs (not FFTs) on just the four upper bits and sign, will be enough to detect nearly all modulation systems, including complex multi tone and phase signals. As I’ve mentioned before, I’ve done this for HF audio bandwidth complex modulation systems such as Piccolo on the 8bit Z80. I’ve also built systems using very fast ROM chips clocked by a simple DDFS to do I-SSB and carrier suppressed AM demodulation with a 455KHz IF signal, and one running at 10.7MHz where the AD was simply a zero crossing detector to “pulse detect” narrow band (12.5KHz) FM.

However, having detected the signal and recognised the modulation type, you then have the issue of decoding the data back to human understandable form. Even with morse you can employ error correction strategies for missing “bits”, which will improve readability not just in noisy signals but in those hit by impulse type interference. You will find food for thought in some forms of error correction which work on the information and its timing, not on error correcting codes put in at the transmitter. One such area is “spelling correction”, another being “voice recognition”.

[1] Surprisingly for many people, atomic lamp frequency standards are fairly readily available second hand, fairly cheaply, on the likes of auction sites, due to the very high numbers used in the mobile phone industry. Basically their stability decreases with time for various reasons, so preventative maintenance schedules pull them early. In most cases you will not need that level of stability (but it’s still nice to have one or two on the work bench 😉

Schneier News Network January 3, 2014 8:18 AM

NSA seeks to build quantum computer that could crack most types of encryption
In room-size metal boxes secure against electromagnetic leaks, the National Security Agency is racing to build a computer that could break nearly every kind of encryption used to protect banking, medical, business and government records around the world.

According to documents provided by former NSA contractor Edward Snowden, the effort to build “a cryptologically useful quantum computer” — a machine exponentially faster than classical computers — is part of a $79.7 million research program titled “Penetrating Hard Targets.” Much of the work is hosted under classified contracts at a laboratory in College Park, Md.

Figureitout January 3, 2014 10:31 PM

Clive Robinson
–Damn it Clive, rhetorical questions, do you Brits know what those are?! 🙂 Well, looks like I got another couple years of reading and study for that (when I get the spare time). Always wondering about something, still have some serious holes (especially the physics still) but more and more patterns are beginning to emerge as time goes. Thanks as always for the tips and reading suggestions.
