Anura May 30, 2014 4:27 PM

I can’t even make pancakes that are readily recognizable as a circle, I’m curious as to how he made those look so good.

DB May 30, 2014 4:32 PM

@ Anura

pour fairly quickly and pour only from the center of the pancake on a level pan… you will get close to circles every time.

Dave May 30, 2014 5:24 PM

That’s really interesting about the breakthrough on the Discrete Logarithm problem.

name.withheld.for.obvious.reasons May 30, 2014 6:18 PM

I’m imagining that soon federal employees will resemble pancakes; the NDAA out of the House essentially authorises continuous employee monitoring for compliance regimes–the NSA monster is looking inward for pancakes. Over time this “system of surveillance” will serve as an operational model not only for federal employees and contractors but will also provide the private sector with such services. Why do you think CISPA (without the P) is being ginned up? The interesting thing is that level IV and above executives and Congress (along with judges) are immune. Welcome to your world–behavioral analytics is nothing compared to behavioral control.

Forgot to mention that decision processes will be automated–don’t subscribe to the “watch for your ‘pink’ slip” listserv.

unamerican May 30, 2014 7:28 PM

When the CIA can’t co-opt the dissent of a very politically moderate Edward Snowden, it’s a sign that it’s time to upgrade to America 2.0.

American NSA Resistance

01 boycott American internet/telecom/tech sector
02 free software, encrypt all comms
03 arm for self-defense

America 2.0 is a program that uses anarchist and libertarian techniques to achieve socialist and communist goals. It’s a start-up that’s currently being crowdsourced across the globe from Donetsk to Berlin to Seattle to Beijing.

Reproduction of content is permitted under a Creative Commons CC0 1.0 Universal License.

Thoth May 30, 2014 8:03 PM

The problem with creating a fully secure and fully trustworthy system has always been that we are unable to fully exercise our democratic rights to rein in state power via democratic mechanisms and prevent overreaching laws from being enacted.

Warrantless tapping, unaccountable and unauditable secret organisations, laws requiring decryption on demand, seizures without warrants and many other unreasonable measures have slipped past many countries’ parliaments and become legally, and even socially, acceptable. Many people have lost their sense of privacy and security as they put full trust in organisations and simply give up their right to decide what they want.

These powerful state actors and organisations took such liberty to continue their indoctrination to ensure no one could resist their decisions anymore.

Most people had high hopes for modern cryptography as some form of saviour but cryptography has mostly failed to deliver what it had promised.

Where do we stand now? Many of our trusted systems have been subverted. Many of the projects that we once trusted (Lavabit and Truecrypt, for example) have shown that they are vulnerable to coercion, and an open source project (OpenSSL) that so many of us relied upon for security is not as organised and secure as we thought.

What should we do to reverse this seemingly gloomy scenario and improve everything around us?

Chris May 30, 2014 9:54 PM

Hi. Noticing the turmoil regarding Truecrypt: I decided a long time ago not to use Truecrypt; it didn’t feel right to me. However, that is and was my personal opinion.

For Windows I ended up using:
And I have never had any issues with it, would be nice to see an audit someday.

For Linux I use LUKS, and for the cloud I use EncFS.
Just my 2 cents


uh, Mike May 30, 2014 10:15 PM

You’ve all read Thompson’s “Reflections” paper, eh?

I think the best answer is Zimmermann’s web of trust. Any system that has a single point of trust will have to take sides someday.
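A web of trust can be sketched as a plain signature graph with no central root: a key counts as valid if a short chain of signatures connects it to a key you trust directly. A minimal Python sketch, with hypothetical names and data:

```python
# Toy web-of-trust check: no central authority, just "who signed whom".
from collections import deque

def trusted(signatures, roots, target, max_hops=3):
    """True if `target` is reachable from a directly trusted key
    within `max_hops` signature links (breadth-first search)."""
    seen = set(roots)
    queue = deque((root, 0) for root in roots)
    while queue:
        key, depth = queue.popleft()
        if key == target:
            return True
        if depth < max_hops:
            for signed in signatures.get(key, ()):
                if signed not in seen:
                    seen.add(signed)
                    queue.append((signed, depth + 1))
    return False

# alice directly trusts bob; bob has signed carol's key.
signatures = {"alice": ["bob"], "bob": ["carol"], "mallory": ["eve"]}
print(trusted(signatures, ["alice"], "carol"))  # True: two-hop chain
print(trusted(signatures, ["alice"], "eve"))    # False: no signature path
```

Real PGP layers per-key trust levels and required signature counts on top of this reachability idea, but the decentralised shape is the same.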

Note how Bitcoin has bank failures, but not currency failures. You can transact bitcoins without an authorized agent. In states with legal MJ, even cash is rejected by banks.

Look for the system that doesn’t have a central authority.

Chris May 30, 2014 10:43 PM

Hi. Something that has been bothering me lately is how Ubuntu handles DNS requests.
Anyhow, I have found a solution for it, and now my DNS works exactly the way I want,
and not how Canonical wants.

They claim that it’s a good thing to use split DNS by default when using a VPN. I think it’s a horrible idea, unless you want to know what’s happening in the VPN tunnel by listening to the DNS requests locally (read: snooping). If you need to do split DNS there are other ways to do it; no need to turn this garbage on by default!



Jacob May 31, 2014 2:01 AM

If the claim in The Register article that Bruce is moving to PGPDisk is valid, I don’t understand why someone who already trusts Windows does not trust BitLocker, and prefers instead to usher another big attack dog with dubious connections into the house.

Chris May 31, 2014 2:26 AM

Regarding the SSL cipher order at

I just noticed that RC4 is at the top of the preferred ciphers here. Since you can’t change the preferred order in Firefox, I have removed all but the PFS ciphers; however, sometimes I end up at a site that has only RC4, so I have a plugin called Cipherfox that can enable it temporarily.
That was how I realized that RC4 was in fact given high priority, since I had forgotten to disable it again.

Anyhow, I’m just curious why RC4 is at the top of the cipher list on a security site such as this; in my book the most secure cipher needs to be on top, and RC4 for sure at the very bottom, not the other way around.
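For what it’s worth, a client can’t reorder a server’s preference list, only refuse suites; the blunt fix is to stop offering RC4 at all. A sketch with Python’s `ssl` module (OpenSSL cipher-string syntax; on recent OpenSSL builds RC4 is already out of the defaults, so the `!RC4` is belt and braces):

```python
import ssl

# Client-side context that refuses to offer RC4 or non-forward-secret
# (plain RSA key exchange) suites; the server must then pick something else.
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:!RC4:!kRSA")

offered = [c["name"] for c in ctx.get_ciphers()]
print(any("RC4" in name for name in offered))  # False: RC4 is gone
```

The same OpenSSL cipher-string syntax works in most server configs (nginx, Apache), which is where the preference order actually gets decided.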




Figureitout May 31, 2014 3:35 AM

–No one actually trusts Windows, just like the false statements Bruce makes about all the trust in the world. There are no other choices, so we merely go along w/ the flow and accept the sht b/c there’s no other choice! It’s used b/c technical software is developed primarily for that OS. There’s way more Unix technical software today than before, but there needs to be a lot more, and it needs to be reliable. What a fcked up system. It’s never going to change barring an all-out blow up. All the businesses out there reliant on certain software that runs on either Windows or, god forbid..Apple (that’s an artist’s computer, sorry) aren’t going to develop their own Unix command-line versions. As someone working in the same place I am now says, “Are you going to have the installers of the product use a command line? They can barely even use a f*cking Windows GUI w/o extensive support.”

Jacob May 31, 2014 3:44 AM


I am not sure that I was clear with my question:
If Bruce trusts Windows, and he must to some extent – he is using it for sensitive info too (“there is no other convenient choice, no time to learn and secure Linux” etc.), then why not trust BitLocker (part of Windows, created by the same Corp.) but instead go and enlarge the uncertainty surface by adding Symantec to the “trust” mix?

Wesley Parish May 31, 2014 5:57 AM

Well, wouldn’t you know it!?! Not content with inserting their feet in their mouths for that special taste of toe just as Mom made it, US “law enforcement” have discovered a way of inserting the foot up to the gluteus maximus. And coating it in ketchup before doing so. It’s all in the interests of grossing out the sensitive investor, just like in Primary School. It’s amazing how quickly the US has advanced to Second Childhood.

Clive Robinson May 31, 2014 7:13 AM

OFF Topic :

As many of you know, the base statistics of a language, such as letter frequency, are used by cryptanalysts to break ciphers in real-world use.

Well, the best-known frequency count of English was made in the early 1960s and published in 1965 by Mark Mayzner.

Well, not so long ago Mark contacted Peter Norvig, who used to work at NASA and now works for Google, to see if a more modern set of stats could be produced from Google’s trove of information. Well, Peter did just that,
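The counting side of such a study is trivial today; the hard part in Mayzner’s day was the corpus, tallied by hand from printed text. A toy sketch of the letter-frequency half in Python:

```python
from collections import Counter

def letter_freqs(text):
    """Relative frequency of each letter, ignoring case and non-letters."""
    letters = [c for c in text.upper() if c.isalpha()]
    counts = Counter(letters)
    return {letter: n / len(letters) for letter, n in counts.most_common()}

# A tiny sample; a Mayzner- or Norvig-scale count needs millions of words.
freqs = letter_freqs("the quick brown fox jumps over the lazy dog")
```

On any serious English corpus, E comes out on top at roughly 12%, which is exactly the regularity a cryptanalyst attacking a simple substitution cipher leans on.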

Mr. Pragma May 31, 2014 10:43 AM

@Nick P, yesme, Clive, Figureitout (and possibly others)

No matter, how I turn it, I arrive at some basic observations and conclusions again and again.

Simplicity, for instance.

RISC rather than CISC; only one I/O bus type, e.g. PCIe or SerDes, rather than a plethora; simple languages with a reasonably equipped and sized stdlib rather than “fat” languages (like C++) and fat stdlibs; etc.

Clear and concise understanding

We seem to live in a world with a rich variety of solutions – but we actually don’t. Let’s cut through the blabla.
There are, for instance, only two architectures of any wide significance, namely IA32/64 and Arm. Yes, there’s Power, Sparc, Mips, and whatnot, too, but those cover hardly, sparingly, or not at all the whole range from “buy it, plug it in, run it” to a variety of boards, to a variety of sources and second sources, down to a variety of IP, compilers, etc. available (also to Joe and Jane Smith).
As soon as you leave the IA32/64 and Arm world the air gets thin and increasingly expensive.
For the biggest part, the impression of a world with a rich variety of options is propaganda and business interests; to a large degree it is actually just copies, me-toos, variants, and “specializations” (lock-in).

The “why” leads to “how” and “what”

In a world that has been basically unipolar and usa-centric since the beginning of the digital age, the “why” is quite usually business/capital/profit related. Contrary to common belief, and quite possibly to the nature of man, we are actually not (acting like) curiosity-driven inventors. The few who are usually end up either poor or as the 1.5% shareholder of a company monetizing their inventions.
This can also be seen in universities, which are often driven to a shocking degree by business and capital interests, too.
Accordingly there are the very centers (like intel) around which, on a global scale, pretty much everything turns, down to me-too companies in e.g. Taiwan which basically, at the lowest end, thrive on delivering a little cheaper.
The amounts involved are so staggeringly high that almost nobody, not even whole nations, can participate. Actually, even well-known brand names and billion-$ companies cannot afford their own fab. In other words: the whole world basically has to take and eat whatever very few us corporations please to sell.
FPGAs might look like an alternative. Unfortunately they are not, for several reasons, price probably ranking high.

Basically the same core problem, in different disguises and with diverse faces, also creates security problems in other areas, like software, by brutally favouring “market mechanisms” and profits over sound reason. microsoft, adobe and the like do not produce low quality (and high security risk) because their programmers are bad but because they are driven by criteria that care little or nothing about quality; they pretty much only care about marketing and sales. Features are important; quality and security are not.

At the risk of making more enemies than I can count, I dare to state that I consider open source to be another major source of problems.
We like (or they like) to see open source as a good thing, almost as a people’s movement against ultracapitalism. Unfortunately we forget that a very large part of foss is actually just another layer around the status quo created by usa corp. interests. Truecrypt is just one example; it is centered around windows, one of the major cornerstones of the “corporate/agencies/usa empire”, and it has to use and respect the windows interfaces, tools, etc.
Another major problem of foss is that some of its core values are not at all related to the matter at hand. To assume, for instance, that democracy generates better code is pure idiocy. What it actually creates is lots of problems and watered-down ideas, products, and quality (and hence security).
There are, of course, exceptions: products of high quality developed by professionals and given away for free. But again, these are exceptions. The vast majority of foss code is not only not better than corp. code but actually worse. Anyone doubting that observation should have a look at the openssl code (which, after all, one would think should be at the higher end of quality).

Not coincidentally, one of the uglier evils in software, bloat, is to be found both in corp. and in foss code. I’m not yet sure which one is worse: corp bloat, mostly based on insane featuritis, or foss bloat, based on diverse factors which, I think, can be subsumed as sheer idiocy mixed with gross incompetence.

I do have some ideas about solution approaches but those are for another post.

Nice weekend everyone

blah May 31, 2014 1:03 PM

Snowden on {9/11, Boston Marathon bombing} [pre/post dragnet surveillance] #spy-on-all-the-things #total-information-awareness #screw-internet-security-and-privacy #needle-in-haystack #unaccountable-power #cost-benefit-analysis-in-centrally-planned-economy

In response to a question from Williams concerning a “non-traditional enemy,” Al-Qaeda, and how to prevent further attacks from that organization and others, Snowden suggested that the United States had the proper intelligence ahead of 9/11 but failed to act.

“You know, and this is a key question that the 9/11 Commission considered. And what they found, in the post-mortem, when they looked at all of the classified intelligence from all of the different intelligence agencies, they found that we had all of the information we needed as an intelligence community, as a classified sector, as the national defense of the United States to detect this plot,” Snowden said. “We actually had records of the phone calls from the United States and out. The CIA knew who these guys were. The problem was not that we weren’t collecting information, it wasn’t that we didn’t have enough dots, it wasn’t that we didn’t have a haystack, it was that we did not understand the haystack that we have.”

“The problem with mass surveillance is that we’re piling more hay on a haystack we already don’t understand, and this is the haystack of the human lives of every American citizen in our country,” Snowden continued. “If these programs aren’t keeping us safe, and they’re making us miss connections — vital connections — on information we already have, if we’re taking resources away from traditional methods of investigation, from law enforcement operations that we know work, if we’re missing things like the Boston Marathon bombings where all of these mass surveillance systems, every domestic dragnet in the world didn’t reveal guys that the Russian intelligence service told us about by name, is that really the best way to protect our country? Or are we — are we trying to throw money at a magic solution that’s actually not just costing us our safety, but our rights and our way of life?”

Mike the goat (horn equipped) May 31, 2014 1:55 PM

Anura: sad, and criminal. Quite a few people have lost their lives defending their properties against illegal and warrantless incursions by the so-called police. It isn’t like Adam-12’s “we’re here to help” anymore. More like, “do as we say or else we will plant a pound of dust on you and you’ll be spending a night in the county jail with a 230 lb guy in PCP-induced psychosis.” The West is no longer “home of the free” – and the sad thing is that, thanks to the MSM, an apathetic populace is more interested in what Miley Cyrus has been up to than in the state of their own country. And this is why we are all screwed, as a collective.

Robin: I checked out Orweb a while back. It appeared to just be a very lightly modified version of the AOSP browser (pre chrome migration) with it statically set to dump everything through the SOCKS proxy setup by Orbot (and a few other mods like not recording history etc). I noticed that the version I had a look at leaked DNS requests – so the traffic was going through the tor circuit, but any lookups were going through whatever was defined in resolv.conf. Note that I did review this using the emulator included with the Android SDK so actual behavior might have been different.
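The leak described above is the classic SOCKS pitfall: if the app resolves the hostname itself, the lookup goes out in the clear past the tunnel. A SOCKS5 client avoids it by putting the hostname inside the CONNECT request (address type 0x03, per RFC 1928), so the proxy — here Tor via Orbot — does the resolving. A sketch of that wire format in Python:

```python
import struct

def socks5_connect(host, port):
    """Build a SOCKS5 CONNECT request carrying the hostname itself
    (ATYP 0x03), so name resolution happens at the proxy, not locally."""
    name = host.encode("idna")
    return (b"\x05\x01\x00\x03"            # VER=5, CMD=CONNECT, RSV, ATYP=domain
            + bytes([len(name)]) + name    # length-prefixed hostname
            + struct.pack(">H", port))     # port in network byte order

req = socks5_connect("example.com", 443)
print(req[3] == 0x03)  # True: the proxy, not the client, resolves the name
```

An app that first calls `socket.gethostbyname()` and then sends the resulting IP (ATYP 0x01) has already leaked the lookup to whatever resolver is in `resolv.conf` — exactly the behavior seen in that Orweb build.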

DB May 31, 2014 2:53 PM

@ Mr. Pragma

As an open source proponent I’d agree with your general assessment of what’s wrong with open source. However, I might try to point out that it is those exceptions that we as a people need. Exceptions where some person or team writes open source properly would not happen without open source. You need all that sediment to find the gold in it, so to speak. This is why every time I try to promote open source, I try to put it together with “it’s not a panacea” and explain why. Blindly grabbing open source and trusting it just because it’s open can easily cause much more harm than good.

The issue is, without it being open, I cannot even check it, to see if it’s well done. So there’s no way to EVER check and trust closed source myself. (And no, I’m not signing some NDA, that’s more of a liability than anything else, opening me up to lawsuits if I’ve ever thought about or worked on anything that could conceivably be claimed to be remotely similar)

Figureitout May 31, 2014 3:17 PM

–Read your sentiments wrongly, maybe Bruce is “saying something by saying nothing”. The Windows thing though, even me, I still have to use it a lot (even if infected and exposed to attack) and it was an uncomfortable jump to Unix and now BSD, but that’s how you grow. Bruce is more math/crypto/internet based than coder/computer based, but I’m sure he’s all too aware of the risks.

Clive Robinson
–Yeah, like it, thanks. Mesh networks won’t be useful unless they’re really popular. I really liked some of the ways they made it mobile, including on a skateboard and, more so, on a bike using the European-style bikes that generate power for a light w/ the wheels; not sure if it actually is powered that way or charging a battery.

Hey spotted some of those 8-bit UV-erasable EPROM chips the other day, pretty neat little chips (and they have these really tiny delicate wires from the middle branching out to some bus line to registers I’m assuming) that give like a window into the “black box”. Remembered you had some issues w/ those…they do look cool though. Is black electric tape enough to shield those from random ROM-wipe? Then would the glue on the tape interfere w/ the window perhaps w/ frequent use?

OT by the way, have you ever dealt w/ an “unimplemented F-line instruction”? Dealing w/ an irritating bug at the moment (in a sea of bugs…).

Mr. Pragma
–Keep rehashing the painful reality until it sticks in someone’s head who is capable of coming up w/ a new way of computing (there must be some other way…) that is completely different from programming bits. My (highly unoriginal but extended) design will need an operator who is knowledgeable of computing that takes at least 1-2 years of hard studying to get comfortable w/…

Mr. Pragma May 31, 2014 3:26 PM


Agreed. But …

We actually had open source long before todays foss (religious kind) movement. And there is — and was — lots of open source from universities.

When I rant about open source it’s not about not-closed source but rather about the brutally politicized movement (culminating in the gpl taliban).

Let’s face it, no matter how well meaning RMS might have been, that whole fsf/gpl thingy is basically a political movement that just happens to take place in the technical field.

And frankly, I’m often missing even basic reflection in that matter.

Let me offer an ugly example.

Good developers usually have invested lots of time and, directly or indirectly, money in their education. With foss solutions available (or seeming to be), there is a tremendous roadblock for many professionals, simply because companies often feel they can save on development. Obviously this very negatively changes the balance by letting lots of hobbyists crap up the code.
Another very major disadvantage can be seen with the openssl disaster.
People have come to expect software to be free (as in beer). I remember paying thousands of $$ for professional libraries. If everyone paid just 10% of the windows license price for, say, linucks (after all, a 90% “rebate” is not a bad deal), the distributions could in turn afford to pay some small amount to openssl.

Instead, however, we are bullshitted by “a million eyes” blabla and lies and finally end up in a disaster.
Similarly, while web searching today is a vital service, of course one would be ridiculed for even thinking out loud about a paid service. Which, btw., touches another important issue, namely stupidization, in part also through foss. Why? Because one must be utterly idiotic not to recognize that “for free” basically carries a horror price tag and can actually be translated to “complete loss of privacy and being victimized by secret services, sales spamming, and other creepy vultures”.

Yet the same Joe and Jane Smith who bitterly complain about being spied on and sold out would book you into a mental asylum if you dared to suggest that their linux CD should cost $10 and their web service package (search, email, etc.) might also cost $2/month.

Call me an arrogant a**hole if you like, but I clearly see billions of stupidized users, millions of stupidized and incompetent foss “developers” who care more about their beloved gpl virus code, and a perfidiously democracy-playing industry that opulently feeds on that stupidity.

kashmarek May 31, 2014 3:30 PM

Well, now it is official. The NSA pattern of activity is bleeding off to the rest of guvmint…

New Federal Database Will Track Americans’ Credit Ratings, Other Financial Info

In actuality, the banks and mortgage institutions have ALL of this information ALREADY. One does not need a legalistic basis for divulging data on 227 million citizens just for the purpose of “…conducting a monthly mortgage survey…” to create a report for Congress. A survey does not require the full population of data. Ultimately, this database will enable corruption that more than offsets any “intended” value.

Daniel May 31, 2014 4:03 PM

My own view is that nothing has changed for the end user when it comes to Truecrypt. If you would have used it a week ago, use it today. The MD5 hashes for the program exist so one can make sure one is downloading an unmodified version of 7.1a.
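Checking a download against a published digest takes only a few lines; a sketch (the filename and comparison value below are placeholders, and note that MD5 only guards against accidental corruption — it is broken against deliberate collisions, so prefer a SHA-256 sum where one is published):

```python
import hashlib

def file_digest(path, algo="md5", chunk=1 << 16):
    """Hash a file in chunks so large binaries never sit fully in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Usage (hypothetical filename and published digest):
# if file_digest("truecrypt-7.1a-setup.exe") != published_md5:
#     raise SystemExit("digest mismatch -- do not run this binary")
```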

If Bruce really has switched over to a different program, I hope he will come along and let us know what he knows, because based on what I know (all public info) I can’t think of a good reason to stop using TC.

CallMeLateForSupper May 31, 2014 6:18 PM


Regarding a method for covering the quartz window of an EPROM, there are (used to be, at least) individual little squares of metal foil, self-stick, for that purpose. Probably “unobtainium” today; windowed devices themselves are hard to find. Whatever, I would NOT recommend electrical tape for that because the adhesive turns gooey over time and makes a mess.

An uncovered EPROM can retain data for a long time, even exposed to fluorescent lighting. Sun light is bad news though; high in UV. If an uncovered device makes you skittish, a PEELABLE paper label – or several on top of each other – is better than nothing.

In the 90’s I developed nearly all my uC projects with the Motorola MC68705P3, a uC with windowed EPROM. An indoor/outdoor digital thermometer from that period is still functioning as I write this, and the uC’s window was never covered (though it is in an enclosure).

DB May 31, 2014 7:39 PM

@Mr. Pragma

You’d think “good developers” would be encouraged to go into business for themselves, release their products, and make a few dollars…. but given the state of patents, all that does is invite endless lawsuits, misery, and bankruptcy if you ever become remotely successful in any small but meaningful way.

Therefore, all “good developers” who actually think about such things, either a) move the heck out of the USA, or b) decide that they’ll never go into business for themselves in their lifetimes, and only write code for other people and/or open source and/or just as a secret hobby and keep it to themselves only. There’s literally no other way to write “good” code and have anyone other than patent trolls benefit from it.

Mr. Pragma May 31, 2014 8:44 PM


You would know that better than me (I’m neither living in nor travelling to or through the usa), but I’m afraid what you say is, sadly, all too often true.

But even (or particularly?) employed developers are in a good position to privately develop good code and share that as (real) open source. Possibly they might sometimes even, to a degree, help others to understand what they work on in their daytime job.

But again, the bad news is, as I’ve written, that foss also created damage in that many companies employ fewer developers and rather rely on free-beer code.

Figureitout May 31, 2014 11:07 PM

–Neat and noted, thanks for sharing. I’d have to look at the product number again as it may be the same uC, there were quite a few chips, even one Japanese one which was oddly not very well documented (5-page datasheet, yay…). Just another consideration for people looking to build computing devices, and the security implications are obvious (quick wipe).

Anura May 31, 2014 11:55 PM


I hope to start up a video game company in the future. My biggest fear isn’t that I will fail, despite that being a very high probability; my biggest fear is that if I am successful I will be sued over some trivial patent.

I’m youngish, and if I fail today, I can recover; I won’t regret it if I take a chance. If I am successful and then get sued out of business, then that means I never had a chance. That probably doesn’t make sense, but in one case I can say I took my chance, in the other I would feel like I completely wasted my time.

DB June 1, 2014 12:40 AM

@ Anura

I wish you well… but you might want to consider emigrating to another country first… one that doesn’t have such a ridiculous system of broad software patents 😛 At least if you write open source you might get the EFF or someone to help you if you get sued… and if you write closed source for another company as just a paid worker, it’s on them to deal with it.

Leon Wolfeson June 1, 2014 12:43 AM

Anura – Patents? In Games?

I strongly, strongly suggest you do a lot more research, since you’re worried about totally the wrong issues.

Jacob June 1, 2014 12:45 AM

Fairly explosive material: the collection and analysis of faces by the NSA.
– through intercepted on-line correspondence and video chats
– through social media postings
– through intercepted video conferences
– through (possibly) State Dept.’s database of Passport/VISA applicants
– through attempted hacking of foreign govs national card databases

Also: the experimental ability to locate people by matching intercepted photos showing background outdoors with spy satellite terrain data.

DB June 1, 2014 4:07 AM

@ Anura

I never intended to imply you could move to “just any old country” and be safe from crazy patent trolls… you’d have to find the right one, obviously…

yeah, that one patent looks pretty bad… looking at it for 30 seconds it looks like most anything in programming where something automatically “gets out of the way” of something else might infringe… never mind that people have been jumping out of the way of cars, wagons, or horses for millennia…

Clive Robinson June 1, 2014 4:55 AM


The problem I was having with a windowed uC was not loss of stored code but random behaviour in use.

The prototype worked upside down, when I was probing signal lines, but went for walks in the park when the right way up.

The cause: photons of ordinary and lower-energy light, not UV, were getting into the CPU logic and causing it to misbehave.

Sometimes I miss the days of 5 1/4 floppy disks, because you used to get metal foil write-protect stickers, and they were spot on for 28-and-above-pin windowed DIPs. They not only covered the window nicely, but you could also write a version number on them, and they stayed put without the sticky-residue issue. Speaking of which, irrespective of what label you use (if any), always clean the window with solvent just before erasing; you’d be surprised how quickly those windows acquire an invisible-to-the-eye layer of UV-resistant gunk (co-workers who nip out for a smoke break being a major cause, fuel-oil-burning vehicles and heating being another).

name.withheld.for.obvious.reasons June 1, 2014 5:18 AM

@ Nick P
Thanks for throwing down some suggestions–but–I have a few reservations. First, Marvell–look into the last five years–and Intel, you cannot be serious? Is this “really” Nick P?


This is the platform I’d hoped google would use to launch OSH. I am tempted to write a solid license agreement–it’s a bit trickier for hardware. The liabilities associated with not having to “click to agree to the EULA” allow software manufacturers–and I’m using these words loosely–to get away with, essentially, protections and discriminatory preferential treatment by the government. Now follow the logic on this one:

  1. U.S.-based commercial entities benefit from the U.S. government’s role in contract law (statute), the protections of specific transactions (currency portability, a supported banking industry via indirect subsidies and guarantees), property rights (courts), trademark (branding), copyright (licensing), and infrastructure costs (sea, land, and air ports; roads; communications; the monetary system; harmonized legal frameworks; etc.)
    And if company X is based and operates out of Burma or Sri Lanka, they cannot benefit from the aforementioned services.
  2. U.S.-based commercial manufacturers and suppliers are not separable from their relationship with the laws of the country. Under contract law, I could not be a party to the contract (with rights) AND remain legally separable from the contract. It is as if I entered into an agreement by signing a contract with the letter X, trading under the name letter X, receiving royalties and licensing fees under the letter X–but not liable as letter X for any reason–instead liability rests with letter U.
  3. Citizens have the right to ask for redress; if the government is allowing corporations to operate under the color of law with all the rights and protections one expects for a citizen (though corps have much stronger property and deference rights), they can be held answerable to a court order or action.

For example, failure to comply with lawful requests, such as a subpoena issued by a grand jury or a judge where a claimant has alleged that there is evidence of material losses due to the use of a particular product.

A claimant in Utah asserts that, while using Microsoft Office products to produce the law firm’s IP products (proprietary contracts, agreements, or other legal instruments that they sell online), run on systems using Windows operating systems and serviced and maintained via their Windows-based server(s), the firm is regularly subjected to losses of protected properties exfiltrated from their systems. The claimant in this example can prove that Microsoft has not consistently exercised the necessary due diligence to provide a modest level of protection from this type of loss. A grand jury is empaneled and finds sufficient evidence; the court concludes that a civil liability lawsuit can proceed, with orders asking Microsoft to make remedies by repairing and rendering usable the products, or stand in open court to refute the claimant.

On point three, this is why these things don’t get fixed…

Instead, laws state that insurance must be held by the claimant to mitigate risk, based on the mere fact that the claimant took possession of a product (irrespective of its use–as in never used). How messed up does our legal system in the United States have to get? Oh, I know–you have to believe that insurance is the problem (oops, same head on both sides of the coin–lawyers and insurers are brokers between plaintiffs and defendants in civil and sometimes criminal cases). We don’t have a legal system; we have a dysfunctional third-party “Dispute and Crisis Management” system.

Clive Robinson June 1, 2014 5:44 AM

@ Anura,

Your pancake problem may be due to two other issues, the amount of liquid in your batter and the amount of fat/oil in your pan, and it might also be due to the pan surface and heat…

The ideal thin-spread pancake mixture should have the consistency of “single cream”; for thicker cakes such as “drop scones” it should be of “double cream” or thicker. The difference in liquid can be as little as 10%, which also brings up the issue of “standing time”. When you make a batter, the length of time you leave it to stand before cooking can make a considerable difference. Ideally, recipes that don’t use a raising agent should be left to stand for a while. I’ve found that “Yorkshire Pudding” mixture is best left in the fridge overnight. Importantly, always “sift the flour” before you use it: this not only gets air in, which allows the liquid to work faster, it also helps spread raising agents and additives such as salt more evenly.

If you use a self-raising flour or add raising agents and want fluffy cakes quickly so they are “blond” not brown, then consider making part of the liquid “buttermilk”, or add half a teaspoon of lemon juice to half a pint of ordinary milk and let it stand for five minutes before mixing in. The acid acts on the bicarb more quickly than heat does.

The pan / griddle you use should have an unpitted, unscratched surface, preferably not be “non-stick”, and be well “seasoned” (look up crêpe pan or wok seasoning on the internet). Ideally the pan should be heavy-based. Do not pour oil into it–you will get too much in; heat the pan and then oil the surface with kitchen paper which has been dipped in oil. Expect the first couple of pancakes to stick a bit or be irregular. If they continue to stick and are pale in colour then the heat is too low; if they stick but go brown or worse then the heat is too high. Ideally you should make two or three batches of “cook’s privileges” before you turn out those for your guests; these should go onto the preheated plate as “insulation”. As you stack the pancakes up, cover with a tea towel to keep them warm and to stop them going leathery.

The Scottish recipe for “drop scones” is half a pound of self-raising flour, half a pint of good milk, two beaten eggs, two ounces of melted and clarified butter, and a good pinch of salt. If making sweet scones then add an ounce or so of sugar; if savoury, two ounces of grated cheese. You traditionally make it a little over single-cream thickness so the pancakes have some thickness to them. You drop a tablespoon of the mixture into the pan off the side, not the tip, of the spoon, and when it bubbles evenly on the surface flip it over. If you make the batter thicker, use part buttermilk, and use a dessert-spoon measure, they can rise to be a third of an inch thick and thus look more like a crumpet than a pancake (crumpets are actually made half and half bread and plain flour with yeast added, so they are technically breads not cakes). Thicker cakes/scones can be made in advance and allowed to cool uncovered, then toasted when you want to eat them later.

name.withheld.for.obvious.reasons June 1, 2014 5:46 AM

From the Federation of American Scientists, Steven Aftergood has reached the same conclusion that I had expressed earlier–the Intelligence budget authorization will use the NSA surveillance system on federal employees in a continuous monitoring mode.

What Steven Aftergood added is that the NSA’s monitoring of employees would not have flushed out Mr. Snowden. Duh! Catching improper actions is not why the Intelligence authorization monitors federal employees–it is to deter any employee from taking any action.

Robin June 1, 2014 7:01 AM

@ Mike the goat (horn equipped),

In my case the normal DNS requests are performed through the SOCKS proxy (Orbot). Nevertheless, it has some other annoying issues that probably stem from the underlying AOSP browser, e.g. some header variables that cannot be changed.

But my Orweb 0.5.2 on Android 4.3 is definitely vulnerable to the real IP disclosure (check for yourself): in this case, the DNS and HTTP requests are made via my home router in plain text. In my opinion, the bug is a no-go for an anonymity browser.

Nick P June 1, 2014 7:17 AM

@ name.withheld

It was in his requirements: convenience (and cost) for end user. He was looking at RAID controllers. That sort of start means he’s using COTS. So, I gave him an example of a few SOC’s that might meet his requirements. People worried NSA will kick in their door can buy from Chinese or Russian competition, etc. Then they must worry about them. And that assumes their stuff isn’t implanted during shipping. 😉

There’s more threats than the U.S. govt out there justifying FDE. And yes our legal system sucks for software and assurance. This isn’t new. It isn’t changing soon either.

CallMeLateForSupper June 1, 2014 9:12 AM

You are welcome. We whitebeards must endeavor to pass on our sage knowledge (or thyme knowledge, if we don’t know sage) to younger generations, for they are the future. 🙂

“I’d have to look at the product number again as it may be the same uC.”

Ya know… when I wrote “uC”, no alarm bells went off. But when I read that term in your reply, all manner of noise sources sounded. (What a difference a day makes.) I believe the MC68705 is a proper microprocessor (uP). It is a member of the 6800 family; its instruction set includes ADD and SUB; it is light on I/O.

Clive Robinson sagely reminds us (above) of both the 5.25-inch floppy disk and the fact that write-protecting that love-it-hate-it thing involved a self-stick, metal-clad paper thingie. That “thingie” was indeed the cat’s pajamas for covering UV-erasable electronic packages, and every box of diskettes contained a sheet of little silver-color tabs.

I feel like UV-erasable EPROMs are in my DNA. I still have boxes of them, from 8k to 512k, inclusive, and I would no more get rid of them than I would get rid of a family member. Do I use them in this age of flash devices? Not often. Two years ago I showed a teenager (a girl! YES!) how to realize a state machine by marrying a suitably programmed ROM and some TTL flip-flops and glue logic.

Mr. Pragma June 1, 2014 9:45 AM

name.withheld.for.obvious.reasons (and Nick P)

Frankly, my reaction was similar to yours.

If someone wants COTS he might as well buy an lsi controller or similar. intel certainly isn’t a company I would trust, and Freescale has a decades-old involvement with the pentagon and other agencies (I like their products; this is not Freescale bashing but merely not ignoring some relevant circumstances).

If I’m not mistaken, the request is to encrypt and RAID disks without the NRE of an ASIC (and the whole thing shouldn’t be too expensive anyway).

The disk controller question can be handled by a COTS SATA (or whatever) controller. The RAID could be done either in hardware or simply in software by the CPU. As hardware support for (or even a full implementation of) encryption is probably desired, and as, it seems, there is little trust in widely available hardware crypto devices, my approach would be to go for an FPGA solution implementing both the desired crypto algorithms and RAID (and possibly, via IP cores, even SATA).

One should understand, though, that even this approach quite probably is complete overkill to defend against local police, burglars or the average (evil) competitor while it offers little to no protection in other scenarios. It should also be understood that such a solution is based on certain assumptions.

In other words – once more: It’s quite senseless to build some protection without defining a threat scenario.

The best crypto is quite meaningless if the police (or mafia or …) have other ways to get at the data (or to force you to decrypt), or if all an evil competitor needs to do is pay some money to a disgruntled employee (because employees almost certainly have to work with the data and hence decrypt it).

Btw. I don’t see that big a problem in using the crypto hardware support available in cots processors. Just stay away from (or at least don’t rely solely on) built-in hardware PRNGs.

Clive Robinson June 1, 2014 10:44 AM

@ Mr. Pragma,

Btw. I don’t see that big a problem in using the crypto hardware support available in cots processors.

I do and don’t, depending on how you use it. The problem is isolation from flaky/suspect OSs and apps. The first hurdle is the general case that the more efficient you make its use, the more likely you are to open a usable time-based side-channel attack.

It’s fairly clear that the likes of the NSA don’t break algorithms–that’s way too resource-intensive except in a few cases. No, they break implementations, by manipulating standards and protocols. The classic example is the way the AES competition was structured: by emphasising software and hardware efficiency as the benchmarks, and making sure the code was freely available to download, they had cooked the winner’s goose before the contestants had even submitted their ideas for consideration…

Mr. Pragma June 1, 2014 12:14 PM

Clive Robinson

You are, of course, right.

Two remarks:

I had my reasons to say “crypto hardware support” (as opposed to “crypto algorithm instructions” (like aes)). Because, frankly, basically all major cpus are produced in the usa and must therefore be considered tainted and not trustworthy. Not that all the us companies are necessarily evil, but considering the “generous” use of nsls they can be, and possibly are, easily forced to support nsa.

And once more it is illustrated that “I’m crypto protecting!” is a senseless statement/undertaking without a context, incl. a threat definition.
Side-channel/timing attacks are certainly nothing Joe and Jane Smith have to care about. A crypto hardware provider, on the other hand, might need to consider that case, and a producer of boxes for use in embassies in potentially hostile countries certainly needs to consider that kind of threat (but not the threat of some state agency coercing the ambassador into telling a secret password (which in any reasonable security setting he wouldn’t have anyway)).

I honestly understand those (frankly rather unreflected) questions, often more emotionally than rationally driven, in particular after Snowden. But no matter how we turn it or how well intentioned we are, we simply can’t provide reasonable answers to questions like “How can I build a good security device?” without a clear and well-reflected understanding of context/situation/threat.

Not meaning to be rude, but frankly, 99.99% of those people might be better advised to avoid the cloud, to put sensitive data on usb disks, and to not buy some “secure” usb stick with funny number dials, than to be told which processor/OS/controller is “the most secure”.

Mr. C June 1, 2014 12:23 PM

One sentence jumps out at me from today’s NYT article on NSA bulk image collection for facial recognition purposes: “While once focused on written and oral communications, the N.S.A. now considers facial images, fingerprints and other identifiers just as important to its mission of tracking suspected terrorists and other intelligence targets, the [Snowden] documents show.” Wait a moment–“fingerprints”? How is the NSA obtaining fingerprints? The only thing I can think of is the iPhone. Thoughts?

Jacob June 1, 2014 12:26 PM

Mr. Pragma,

If you are looking for a reasonable uC which is non-US designed and produced (Taiwanese), good for crypto work, and x86-compatible to boot, please see the VIA line of processors.

It has been in use for a few years and has good dev support. I think that it is fairly trustworthy compared to the common alternatives. Neither the US, Russia nor China has a hand in it.

Mr. Pragma June 1, 2014 12:59 PM


Thanks. I know about the Via processors and actually use them for quite some (not too sensitive or performance hungry) applications.

Unlike you (if I get you right), I do not trust taiwanese products; actually I see them as the worst of two worlds, because there is considerable chinese influence on taiwan and, way worse, us influence, as taiwan happens to be usa-oriented, dreaming of usa being its protecting power, and focussing much of its economy on usa.

The most trustworthy for me is clearly Russia (because, having a strong intelligentsia, excellent academia, and lots of bad experience (with usa), Russians have learned and understood some things), followed (not too closely) by China, and then by european countries. Finally, usa products are to be considered the least trustworthy or, to put it less diplomatically and more realistically, as completely rotten. I try to avoid us products for anything even remotely sensitive like the plague.

For common home or sme scenarios, however, via processors (or even us-american stuff) may be a viable alternative provided some prudence re. OS and software.

Bruce Schneier June 1, 2014 1:48 PM

“Bruce, I noticed this article in The Register that claims you told them you switched to Symantec’s PGPDisk in light of the TrueCrypt discontinuation.”

I had been using PGPDisk for years and it was on my hard drive, so switching was quick and easy. I have no inside information that Symantec hasn’t given the government a back door.

Reflecting on it, my switch was hasty. I agree with those who write that TrueCrypt 7.1 is no less secure now than it was a month ago. And I recommend that people don’t switch until we figure out what’s going on.

Bruce Schneier June 1, 2014 1:49 PM

“If the claim in The Register article that Bruce is moving to PGPDisk is valid, I don’t understand why one that has already trusted Windows does not trust BitLocker, and prefers instead to usher another big attack dog with dubious connections into the house.”

That’s a good point. The reason I didn’t switch to BitLocker is that I have never used it and don’t know it. Switching to PGPDisk was easy for me; I had used it before and I knew how it worked.

herman June 1, 2014 2:54 PM

@Mr. C: All foreigners who want to visit the good ol’ US of A are fingerprinted. So now we know that the NSA has access to that database too.

Mr. C June 1, 2014 3:11 PM

@ Herman
I thought that went without saying. Plus, that’s not really NSA collection so much as database integration.

name.withheld.for.obvious.reasons June 1, 2014 5:50 PM

@ Pragma

…and Freescale has decades old involvement with pentagon and other agencies (I *like* their products; this is not Freescale bashing but merely not ignoring some relevant circumstances).

Yes, Motorola was one of the better shops twenty years ago. A job at Motorola was considered prestigious. We also had the analog powerhouse National Semiconductor. The loss in recent years of some of the best analog and component engineers is tragic, and I don’t see any notable replacements.

My primary purpose for mentioning Motorola was that prior to selling Freescale, google had the perfect opportunity to make a fantastic business decision–build the Open Hardware Platform. There needs to be some way to develop the locus such that it could achieve success. Without a patron, there will be no way to get a handle on the situation. It might be worth it to try to convince some fab shops that have engineering-level products that it would be in their economic interest to float an open hardware initiative. Two hurdles should be eliminated up front: a vendor-neutral standard on toolchain “interfaces”, such that companies don’t lose individual business advantages, and that in the long run this can help reassure/establish trust THAT WILL BE LOST IN THE NEAR FUTURE. Some trust in the industry has been lost, but the loss may soon be large enough to be irreversible.

Mr. Pragma June 1, 2014 6:54 PM


As you mention them …

National Semiconductor once also had a very modern and really good processor (series), the NS32x32; the 32732, as far as I remember, was no longer sold but served as an important influence on intel’s pentium.

Also Siemens (and possibly others) built quite nice systems based on that then very high-tech and modern processor.

As for the Open Hardware Platform …

For a start, I don’t expect too much from google, and even if they did something, I wouldn’t trust them for a microsecond.

And would it be in the interest of fab shops? Not sure. Now, with all those Arm thingies, the industry probably prefers not to disturb the established game (intel/arm architectures). After all, don’t forget, in their eyes everything is about shareholder value; they just happen to be in the semiconductor business.
I guess our next real chance is the Chinese. Once they go full scale and global, for instance with their Loongsons, that’ll probably stir up the industry enough for some significant changes. But then, maybe they’ll never do that and will rather build up their own ecosystem with Loongson-driven phones, routers, PCs, etc., etc.
Until then we will have to continue using OpenRISC, Lattice Mico, and other soft/firm cores, either in FPGA or ASIC. Which isn’t bad at all as long as one isn’t in the high-performance area.

Chris Abbott June 1, 2014 8:07 PM

@Clive Robinson

I’ve been messing around with file entropy and frequency analysis using histograms in a hex editor. This is probably a silly question, but would truly, truly randomly generated bytes, given that the file size is a multiple of 8 bits, have a perfectly equal number of ones and zeros and exactly the same count of every byte value? I can’t imagine that it would have perfect entropy, but I just wanted your opinion.

Chris Abbott June 1, 2014 8:14 PM

@Clive and anyone else that has an answer continued…

I’m comparing histograms of identical files encrypted with Twofish and AES. Neither is perfectly flat, but the Twofish one seems to be slightly flatter. I imagine that a flatter histogram = better cipher, correct?
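One way to put a number on “flatness” is a chi-squared test against the uniform byte distribution. Here is a minimal sketch in Python (the function name is mine, and `os.urandom` merely stands in for a ciphertext file):

```python
# Chi-squared flatness test for a byte histogram (a sketch, not a cipher test).
# For truly uniform random bytes the statistic roughly follows a chi-squared
# distribution with 255 degrees of freedom (mean ~255, stddev ~23); values far
# outside that range suggest non-uniformity. A single "flatter" histogram does
# not mean a better cipher -- rerun with different keys or files and the
# ranking will flip at random.
from collections import Counter
import os

def chi_squared_uniform(data: bytes) -> float:
    expected = len(data) / 256.0          # expected count per byte value
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

stat = chi_squared_uniform(os.urandom(65536))
print(round(stat, 1))
```

For a 64 KiB sample, both AES and Twofish ciphertexts should land in the same chi-squared range as `os.urandom`; a consistently anomalous value would be meaningful, a one-off difference in flatness is just noise.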

Thunderbird June 2, 2014 9:41 AM

I’ve been messing around with file entropy and frequency analysis using histograms in a hex editor. This is probably a silly question, but would truly, truly randomly generated bytes, given that the file size is a multiple of 8 bits, have a perfectly equal number of ones and zeros and exactly the same count of every byte value? I can’t imagine that it would have perfect entropy, but I just wanted your opinion.

There are a large number of tests for “randomness.” It turns out that no sequence can satisfy all of them. I would be very suspicious that a large collection of bytes was not random if there were exactly the same number of zeros and ones and the same number of each byte value. It would be the case that there would be nearly the same number of zeros and ones, and of each byte value, for some value of “nearly” that my statistics classes are too far in the past to allow me to determine quickly….

Chris June 2, 2014 12:12 PM

Hi, I’m not sure if I understood the question correctly, but I can’t see that a “truly” random generator–say from an atom, or background noise from a radio set, or whatever–can or will have a perfectly flat histogram, since in nature there is also hysteresis between elements.

Found a cool paper on that while I was thinking about it, which I hadn’t seen before:


k9 June 2, 2014 12:39 PM

Which news sites use HTTPS, and how would you know this, if the pages you were receiving were unencrypted?

Mr. Pragma June 2, 2014 1:14 PM


I think there are many misunderstandings regarding random generators.

“Perfection” of distribution, for example, is IMO overrated and misunderstood as a measure of quality.

In the end it’s about unpredictability, i.e. an adversary should not be able to know, in advance or in real time, about a random series. Granted, that’s a somewhat contorted example, but actually you can have perfectly fine encryption with a series of a billion ‘1’s if the adversary has no chance of knowing that a billion ‘1’s will be/are used.

I think that perfection (perfect equality) of distribution has come into focus as a major criterion of quality because it is an inherent quality of algorithms that they are predictable. So, obviously, one desires at least a good distribution of the random output.
Unfortunately, real random (or what is perceived as such by many) is, for many reasons, very rarely truly random. An example I like to give to students: Assume you are at a very busy pedestrian crossing in Tokyo; moreover, assume that 99.99% of adult Japanese people are between 1m55 and 1m85 tall; also assume that only adult Japanese are allowed to be at that crossing.
Many people think that that would offer good randomness. One could, so they think, install some laser-based height sensor at 1m70 and have nice randomness.
An obvious counterexample would be (by some coincidence, haha) all Japanese basketball teams having some conference around the corner. But there are more subtle premise errors, like, for instance, the–quite frequent–erroneous assumption that the 50/50 of height difference (as compared to 1m70) matches a 50/50 height distribution in the population. Considering that there are also immanent problem factors (like, say, a small quake badly influencing our laser system), it quickly becomes obvious that the matter is more complicated than it might seem.

On the other hand: So what? Suppose the population height distribution is 65/35 rather than 50/50, that is, we have no nice equal distribution. As long as our adversary doesn’t know our source of randomness, that’s not a problem. The real problem is the adversary very often knowing our random source, particularly when it’s (math-)algorithm based; then it can come down to knowing the seed/start situation.

I personally prefer high-frequency event-source picking combined with a brutally simple filter to filter out pretty much all typical problems like hysteresis, sample jitter, etc.

So the recipe works something like this: Find a source offering events at a very high frequency (radiation, atmospheric noise, etc.), pick events at a relatively high frequency, and then brutally AND them down and use just 1 bit (the lsb). Of course an adversary might (although very, very unlikely) find out about your mechanism, given it’s important/valuable enough for him. To do better you can change a number of parameters, like the source frequency, the picking frequency, etc. To do even better, you can use a second random source to control (frequent) changing of those parameters. The good thing is that you are hardly limited, because your only need is to create unpredictability, not a perfect random distribution.
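The recipe above can be sketched in a few lines. Note the loud caveat: `sample_source` here is a placeholder using Python’s PRNG, standing in for a real physical measurement (a radiation counter, timer capture on atmospheric noise, etc.); the sketch only illustrates the structure of “sample fast, keep the LSB, pack bits”.

```python
# Toy sketch of the recipe: sample a high-frequency event source, keep only
# the least significant bit of each sample, and pack the bits into bytes.
# random.getrandbits is a PLACEHOLDER for a real physical source -- it is
# not an entropy source itself.
import random

def sample_source() -> int:
    """Placeholder for one physical measurement (e.g. a timer capture)."""
    return random.getrandbits(16)

def lsb_bytes(n_bytes: int) -> bytes:
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            # "brutally AND down": discard everything but the LSB
            byte = (byte << 1) | (sample_source() & 1)
        out.append(byte)
    return bytes(out)

print(lsb_bytes(4).hex())
```

A real implementation would additionally vary the sampling parameters (and feed them from a second source), as described above.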

Why this somewhat long elaboration? Well, to be honest, because it struck me one of these days that, given the lousiness of some PRNGs, their shockingly frequent use, and the almost unlimited resources of nsa and similar, one might reasonably be afraid of a PRNG analogue to rainbow tables. Considering that there are, for example, quite some rather simple algorithms in use that, to make it worse, start off with a 16-bit seed, such an attack actually seems not unrealistic.

Anura June 2, 2014 2:50 PM

With 10 unbiased coin flips, there are 1024 possible results. Of those, 252 possibilities have an equal number of 0s and 1s, a probability of less than one in four. For 8 kilobytes of data, the probability is less than 1%, and as the amount of data goes up, the probability that there will be an equal number of zeroes and ones diminishes further for a truly random source. This means that if you have a large amount of data with an exactly equal number of zeroes and ones, the odds are that the data is not random.
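These figures are easy to check with the exact binomial formula C(n, n/2) / 2^n (the function name below is mine):

```python
# Probability that n fair coin flips give exactly n/2 heads: C(n, n/2) / 2^n.
from math import comb

def p_balanced(n_bits: int) -> float:
    return comb(n_bits, n_bits // 2) / 2 ** n_bits

print(p_balanced(10))        # 252/1024 = 0.24609375, "less than one in four"
print(p_balanced(8 * 8192))  # 8 kilobytes = 65536 bits: well under 1%
```

By Stirling’s approximation the probability falls off like sqrt(2 / (pi·n)), so for 64 KiB of data an exactly balanced bit count would itself be suspicious.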

From a cryptography perspective, statistical tests are entirely insufficient. I can easily make an RNG that passes every usual statistical test: Shannon entropy, chi-squared distribution, bit/byte distribution.

Here’s one for you:

Z_0 = 512-bit seed
Z_i = SHA-512(Z_{i-1})

For n bits of output, compute k = CEIL(n/512)
Output the leftmost n bits of Z_1 | Z_2 | … | Z_k

This will pass every generic randomness test in the book given a reasonable amount of data, but it’s trivial to predict future output, making it horrible for cryptography (and slow compared to other non-cryptographically secure RNGs, making it useless all around).
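The construction above can be written out in a few lines of Python using only `hashlib` (the function name is mine; for simplicity it returns whole bytes, i.e. n rounded up to a multiple of 8), which also makes the predictability concrete:

```python
# Anura's construction: Z_i = SHA-512(Z_{i-1}), output the leftmost n bits
# of Z_1 | Z_2 | ... The stream looks statistically random, but anyone who
# learns any single Z_i can generate all later output.
import hashlib

def hash_chain_rng(seed: bytes, n_bits: int) -> bytes:
    z = seed                                # Z_0: the 512-bit seed
    out = b""
    while len(out) * 8 < n_bits:
        z = hashlib.sha512(z).digest()      # Z_i = SHA-512(Z_{i-1})
        out += z
    return out[:(n_bits + 7) // 8]          # leftmost n bits (whole bytes)

seed = b"\x00" * 64
a = hash_chain_rng(seed, 1024)
# Predictability: knowing Z_1 alone reproduces everything after it.
z1 = hashlib.sha512(seed).digest()
b = z1 + hash_chain_rng(z1, 1024 - 512)
print(a == b)  # -> True
```

That last check is exactly Anura’s point: a generator can sail through every generic statistical test while being trivially extendable by an attacker who captures one internal state.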

Generic randomness tests can only be used to fail an algorithm; passing an algorithm requires study of the internal components.

Mr. Pragma June 2, 2014 3:24 PM



But then, from a strictly logical perspective, randomness can never be proven, because the very act of proving it would constitute a mechanism.

There are many other problems, too, some of them philosophical and, while deep (reaching down to the Spinoza question), quite funny.
An example: Assume a trick player were capable of flipping coins seemingly at random (out of his control) but actually controlled by him as he pleases. Then deciding whether any given flip (toss? pardon my clumsy english) is random would come down to questioning (and proving) his honesty and intention.

Another set of problems arises out of the (highly probable, according to our world view) assumption that we live in a deterministic universe, i.e. that everything happens according to mathematical and physical laws. Which leads us to a funny definition (actually a set of them) of random. I bring this up because it leads us (back) to the (for us) relevant question, in that we can reasonably define random as the result of interactions (and the laws behind them) sufficiently complex so as not to be recognizable (and even less synthesizable) by us as non-random.
Which again is funny, because this basically states “random ~ complexity”, which might make us think hard again about PRNGs.

Unfortunately, we often fall into the trap of confusing intention and means. Random, incl., of course, RNGs, is a means in security applications; the intention is to deny predictability and pre-cognition and, to a degree, to delay analysis and breaking (of our secrets).
Which, just as a reminder (for those who always happily blabber “obscurity is no security”), means that RNGs in the end are an important part of a form of obscurity.

Nick P June 2, 2014 4:27 PM

@ Mr Pragma

I think you’re onto something. The comparison with obfuscation/obscurity measures is a good one. I’ll add that the “truly random” things we notice in nature seem to be these:

  1. Hidden or modeled mechanism to transform states.
  2. Hidden internal state (more the better).
  3. Externally observable state.
  4. Often a large number of internal or external interactions adding up computational costs.
  5. Results that are statistically random.

The combination of these seems to be sufficient to make a TRNG. I add No. 5 to represent the fact that many processes in nature don’t come off as random, but some do; I’m focusing on the latter. The fact that 1–5 is good enough for a TRNG in nature shows it’s probably good enough for an artificial one. One could even say our crypto RNGs such as ISAAC work this way. If anything, the only difference is that nature seeds TRNGs itself, hides the mechanism by default, and leans toward complexity produced by many interactions of simple parts.

I always use the concept of “random in practice.” A number of things are random enough in practice that (a) nobody’s predicting them and (b) they’re usable in real systems. Some might be mathematical constructions, some might be natural phenomena, and so on. They tend to work as long as they have my five characteristics. I always prefer results over philosophy. 😉

Mr. Pragma June 2, 2014 5:41 PM

Thanks, Nick P

for your constructive feedback. As the usual views on random are pretty well understood and have been discussed ad nauseam–and as, looking closer, most practically used attacks (concerning crypto) actually do not attack the crypto algorithms per se (but rather rely, e.g., on random-related weaknesses)–I suggest that we pursue somewhat further the approach that relies less on statistical criteria and qualities.

In concreto, I suggest following an alternative perspective which focusses on the intended goal rather than the means. In other words, in crypto our real intent is not certain statistical qualities but to deny predictability and analysis (within a reasonable time, so as to be useful). The fact that statistical criteria and qualities are usually (to be considered) important is not a function of random but of our usual way of pseudo-creating it, in particular due to the fact that our approach is usually based on a) a rather limited and b) generally available system (available to our adversaries, too). Even worse, the adversary almost always has far more resources within that limited approach. In other words, we all use bits and bytes and processors, but quite some adversaries have way more resources–and expertise–available than we do.
Complexity per se is available to everyone (if only because it’s inherent in nature). Complexity limited to the realm of computers, however, is by no means available to the same degree to everyone; in fact, our means are often quite modest compared to our adversaries’.
And there are other issues, too. One is that we have reached a level of “unidirectional” complexity so high that Joe and Jane Smith are often almost bound to fail at using the means properly. Another is that complexity has the inherent quality of not being simple (haha), which quite probably (or so experience seems to show) contributes to bad implementations, vulnerabilities, etc.

In this regard I suggest looking at random in terms of complexity rather than in terms of statistical criteria like distribution. But I’d like to limit the view to a “sweet area” where existing complexity can simply be “grabbed” and where a good result isn’t achieved by ever more complex mathematical algorithms (and their complex implementation).

That’s how I arrive at a model where one uses multiple sources of “existing” high complexity (rather than creating it) and extremely increases the resulting complexity by mixing those inputs from different sources in well-manageable (and well-understood) ways (whereas current solutions tend to simply xor different sources together and thus waste complexity).

One (intentionally extremely simple) demonstration might be to use source B to determine when and how source C is mixed into A, and when and how much of source D is mixed in. Yet another source, E, might be used to pre-invert certain parts of A.
Thus one could use per se relatively simple mechanisms not merely to create higher complexity but also to introduce multiple levels of variation, each one quite unpredictable (if a proper source is chosen). As a welcome side effect one might actually make use of the weaknesses currently enabling, for instance, timing attacks.
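A toy sketch of that mixing structure, purely for illustration: source B decides when C gets mixed into the primary stream A, and E occasionally inverts a byte. All the “sources” here are ordinary seeded PRNGs standing in for independent physical sources, and the thresholds are arbitrary; this shows the shape of the idea only, it is not a vetted RNG design.

```python
# Toy sketch: B schedules the mixing of C into A; E pre-inverts some bytes.
# random.Random instances are PLACEHOLDERS for independent physical sources.
import random

def mix(a_src, b_src, c_src, e_src, n: int) -> bytes:
    out = bytearray()
    for _ in range(n):
        byte = a_src.getrandbits(8)           # primary stream A
        if b_src.getrandbits(8) < 64:         # B decides *when*...
            byte ^= c_src.getrandbits(8)      # ...C is mixed into A
        if e_src.getrandbits(8) < 32:         # E occasionally inverts
            byte ^= 0xFF
        out.append(byte)
    return bytes(out)

srcs = [random.Random(i) for i in range(4)]
print(mix(*srcs, 8).hex())
```

The point of the structure, as described above, is that the control flow itself (when, how much) is driven by sources the adversary would also have to predict, rather than a plain xor of all inputs.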

Wael June 2, 2014 6:26 PM

@Mr. Pragma, @Nick P, @Anura

Great points! I still maintain my view on randomness. I can’t prove this view, nor have I tried to do so–it’s a conjecture, I guess… This view may seem to add nothing to the discussion on the surface. However, it implies that what may be considered random at present may be deterministic in the future; what may be random to someone may not be so to another–subversion of RNGs that pass randomness tests is a good example. @Anura’s examples support this view as well. Also, in addition to the “unpredictability” property of a random number generator in a crypto device or algorithm, the RNG itself should be immune to an adversary controlling or “influencing” its output–to make it more “predictable” or more “weakened”.

Figureitout June 2, 2014 10:33 PM

Clive Robinson
The cause was photons of ordinary and lower-frequency light, not UV, getting into the CPU logic and causing the logic to misbehave.
–Yeah that would freak me out, surely other radiation (maybe at the right strength) can cause some similar effect…

–Turns out the chip is also an MC68705(P3S), small world eh? And yeah, I know how it feels to have weird things in your DNA; radio’s in mine (besides it literally flowing thru my cells): ol’ grandpa was a radio op. in WWII, and on my dad’s side an electrician gave birth to a radio fanatic. It was only a matter of time ’til I succumbed to my destiny…

To All RE: TEMPEST Hardware
–Made contact w/ Dr. Skorobogatov at Cambridge, nice quick response. Asked him to make an appearance but we’ll see. Didn’t agree w/ my chip choice of a Z80 w/ regards to TEMPEST security, thought some SoC would be better due to density (confusion of signals) and small bus lines so there’s literally less room for signals to be injected. Also recommended using a smart card due to the magnetic side of radio waves, that they had mitigated some of those signals.

Well, I’m still going w/ my build and will consider SoC’s and smart cards in a later build when I get more experience. Hopefully I can attract some kind of attack like that and see its effects.

–Haven’t forgot about you of course. Am working on an email scheme (lots of services have been shut down), I know you probably don’t want to play “hopscotch” and such, but I need to get a back up means of contact set up too; while I’m busy now so it may be awhile. Still want to see your progress and I’m itching to get started building.

Chris Abbott June 2, 2014 10:57 PM

Thanks guys for the insightful conversation on randomness! It may well be true that nothing is really truly random, since everything happens for some reason (wind speed and ocean currents in nature, or complex math in crypto). I also like the point made that randomness tests can only fail an algorithm rather than pass it. It shows just how difficult these things can be to get right.

tjallen June 3, 2014 12:10 AM

@Anura and Mr. Pragma

The phrase “unbiased coin toss” may be an oxymoron – maybe we ought to stop using the coin toss as an example of randomness:

Popular article:

Research paper:

According to the article, a coin lands on the same side it started on in about 51 percent of tosses.

Also, the popular article mentions that, “Diaconis [a magician] himself has trained his thumb to flip a coin and make it come up heads 10 out of 10 times.”

The research paper has lots of math and reviews the literature on coin tossing – probably interesting to some here.
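For what it’s worth, the bias is easy to play with in a few lines of Python. This is only a toy Bernoulli model using the article’s 51 percent figure as an assumed parameter, nothing like the physics in the paper:

```python
import random

def biased_toss(p_same: float = 0.51, start: str = "heads") -> str:
    """Return the face the coin lands on, given probability p_same of
    landing on the side it started on (per the Diaconis estimate)."""
    other = "tails" if start == "heads" else "heads"
    return start if random.random() < p_same else other

random.seed(2014)
n = 100_000
same = sum(biased_toss() == "heads" for _ in range(n)) / n
print(same)  # hovers near 0.51 rather than 0.50
```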

Wael June 3, 2014 12:56 AM

@ tjallen,

The phrase “unbiased coin toss” may be an oxymoron

“unbiased coin toss” or “unbiased coin” toss?

maybe we ought to stop using the coin toss as an example of randomness:

It’s used as an introduction to more advanced concepts. Search for “Bernoulli trials” vs. “Bernoulli processes”. Also see the difference between the classical and axiomatic interpretations of probability on page 20. This book is a classic (no pun intended), and it is where the distinction between the classical and axiomatic understandings referenced in the previous link is drawn (with all due respect to those who abuse Kurt Gödel’s work 🙂.

koita nehaloti June 3, 2014 4:30 AM


Maybe the only way to have a simple, easily verifiable processor with lots of computing power is to have a simple core that can be copied across a large grid in a repeating pattern. Verify some common parts and one core, and then verify that all the other cores are the same as the verified core. The former needs an electron microscope, but the latter can be done with just an optical microscope, although not with complete confidence. The other way is to map the whole chip with an electron microscope and then have a computer check that the verified core’s pattern repeats in every core.

I thought about a processor that can be described as a cellular automaton with very complex cells (for a cellular automaton) and a per-cell state consisting of kilobytes or megabytes instead of just a single bit (as in “Conway’s Life” and most others):

The core/cell could be derived from some old Intel x86, but preferably from something better that is made to fit this weird system. A core/cell can read memory from its 4 neighbouring cells/cores and from its own memory, but can only write to its own memory. The processor works in 2 phases that repeat constantly; let’s call them the even (2, 4, 6, 8, …) and odd (1, 3, 5, 7, …) phases. During the even phase, part A of a cell’s memory and part A of the 4 neighbouring cells’ memory (5 in total) can be read, and part B can be written to. During the odd phase, the B parts can be read and the A parts written to. In the even phase the read-only memory A (or B in the odd phase) is the current state of the cellular automaton, and the write-only memory B (or A in the odd phase) is the next state.
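The two-phase read/write scheme above is essentially double buffering. A toy Python sketch of one phase, with single-number cell states and a placeholder update rule (the real cells would of course hold kilobytes of state and run real code):

```python
def step(grid, rule):
    """One phase: each cell reads itself plus its 4 neighbours from the
    current buffer and writes only its own slot in the next buffer."""
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]          # the write-only "B" buffer
    for y in range(n):
        for x in range(n):
            neighbourhood = (grid[y][x],
                             grid[(y - 1) % n][x], grid[(y + 1) % n][x],
                             grid[y][(x - 1) % n], grid[y][(x + 1) % n])
            nxt[y][x] = rule(neighbourhood)
    return nxt                                  # buffers "swap" on return

# placeholder rule: sum of the 5-cell neighbourhood, modulo 256
grid = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
grid = step(grid, lambda nb: sum(nb) % 256)
```

Because every cell writes only its own slot in the next buffer, cells can be updated in any order (or fully in parallel), which is the point of the even/odd phase split.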

These cell-cores are connected to the outside only via special edge-cores placed on the edges of the square-shaped processor. All data and code destined for cores in the middle of the processor have to be relayed by many cores along the way. A small part of the operating system in every core decides what to transfer and from where. Depending on the workload, the operating system organizes some percentage of cores into pathways that do more of the data transfer, while other cores run, for example, threads and processes of userland programs like GIMP, GNU Privacy Guard or Chromium.

I think it would be very difficult to modify GCC or Clang to compile existing source code even badly for this weird system; more difficult than making single-threaded C++ programs take advantage of multiple cores on current Intel or AMD chips (maybe that kind of improvement is in the “pipeline” for some future compiler version). But I’m no expert in compilers, and I may be overestimating the difficulties, like people overestimate the difficulty of breaking their own crypto.

This processor consists of at least 2 types of cores: cell-cores and edge-cores.
But it could have more types: a memory core, for having more inside-the-processor memory than processing cores alone would provide, and a math core for floating point, division, multiplication, square roots, etc.

Different core types can be arranged uniquely in a cheap way by moving square segments of the pattern masks before fabrication. A custom chip would no longer cost millions of dollars or euros.

It is useful to make the operating system work with unique chips anyway, because that way every chip is allowed to have defects, which reduces production costs: quality tolerances can be lower, and fewer defective chips have to be discarded.

Uniqueness with chips can also be used in a randomized way for automated obscurity so that attacks are more difficult.

Maybe the bytes should be 9 bits instead of 8, so that the 9th bit tells whether a byte is program or data… Maybe 10 bits, so that the 10th bit tells whether it is read-only… Then maybe only edge-cores can change those extra bits. If the processor compiles something for itself, the binary data has to be in a specific format when it is arranged to be read by the edge-cores while asking them to change the bits. If data comes from RAM to an edge-core, it can be tagged as executable if it is in the correct format and a processing core asks for that.

tjallen June 3, 2014 4:47 AM

I see you want to test (or maybe help) a commenter you find mathematically naive, and that’s okay. The philosophical underpinnings of random numbers are interesting to me, and the math is indeed beyond my training; my background is in philosophy of language, Frege and Russell through Kripke. I know of Gödel and truths that cannot be proved in any axiomatization complex enough to capture basic arithmetic; I was carried through the proof in my logic classes 30 years ago but couldn’t reproduce it now. I did find page 20 (and more) of Stengel’s lecture useful, as well as the discussion of Bernoulli trials and Bernoulli processes in Wikipedia; thanks for the pointers. The classic book you reference will have to await more time and money.

Coincidentally, the coin-tossing result was mentioned in the Huffington Post (myth 4), and I googled around to find the info on Diaconis and the Stanford paper, which I thought might add to the Friday squid discussion of coin tossing and random numbers, above. What I wanted to highlight was the difference between theory and practice that arises both in biased coins (and thumbs) and in the attempt to produce truly random numbers; this gulf, and the attempts to measure and bridge it, are philosophically interesting too.

In fact it was the distinction between theory and practice that first brought me to this blog several weeks ago, as I struggled to complete a password-gated, HIPAA-compliant message service for a local dentist. The gulf between theoretically secure messaging and what I created in reality was wide, even with PHP crypt, Blowfish, SSL and Tectite’s formmail encoder. This blog and its knowledgeable commenters are such a relief, after a boss and a client neither of whom has any idea why I am so concerned about what I consider a barely functioning product. How ridiculous it is for the law to hand out $250,000 fines for each instance of a HIPAA breach in the current web environment is something everyone here understands (I hope).

Anyway thanks for the welcome, and I hope I can occasionally add something of value from my perspective, even if it isn’t always mathematically sophisticated enough for this crowd.

Chris K. June 3, 2014 5:37 AM

Report: NCIS Hid Medical Evidence About Guantanamo Suicides
By: Jeff Kaye Tuesday June 3, 2014 1:35 am


The Senior Medical Officer (SMO) at Guantanamo who attended at least two of three high-profile “suicides” at Guantanamo nearly eight years ago concluded at the time that, contrary to the conclusions of a later government investigation, the detainees did not die by hanging but by “likely asphyxiation” from “obstruction” of the airway. Moreover this SMO found a prisoner he examined and pronounced dead had “cotton clothing material in [his] mouth and upper pharynx.” (See pgs. 5-7 of this PDF to view the SMO’s original findings.)

The finding is consistent with other accounts, and with the theory that the three prisoners died from a torture procedure known as “dryboarding,” as researcher Almerindo Ojeda described in a 2011 story at Truthout.

Mr. Pragma June 3, 2014 6:10 AM

Now, before someone gets hurt: I intentionally used the coin-tossing metaphor because it’s quite evident that it doesn’t really produce honest randomness and/or that it’s at least a mechanism with an obvious potential to be bent or tricked.

But that is not even the real problem. The real problem is the (more or less) simple and unidirectional/single mechanism. With coins it’s “toss it 8 times and you’ve got 1 byte of random”. And again, many will immediately recognize and criticise the poor mechanism.
It’s important to understand that we can actually achieve only quite limited gains by enhancing that mechanism, say, by not tossing but by running impressive math algorithms on some seed. The basic mechanism stays the same.
Of course, we are impressed by the immense speed modern semiconductors can offer. Spitting out gazillions of digits is easily mistaken for a complex mechanism, and there indeed (often) is a complex mechanism – but in the machine used to apply our still-simple mechanism.
That quickly becomes dangerous, when
– the machine implementations on which we employ our simple mechanism are very, very complex and thus error-prone (read: fertile ground for vulnerabilities) in themselves;
– the basically simple mechanisms we use are — and must be — so complex that they can be weakened by experts yet hardly be understood, and hence often misimplemented, by non-experts (let alone the final users).

Looking at real-world attacks suggests that my understanding is not far off. There are plenty of weaknesses: some introduced by malevolent parties in the right position, some by poor hardware implementations or plain hardware problems in the underlying highly complex hardware (e.g. ASICs), some by poor software implementations or plain problems in the usually fat software stack beneath, and some by usage errors.

In the end, the major difference between coin tossing and modern crypto (real-world, not academic) lies basically in the complexity of the tosser, of the tossing, and other such factors, but not in the mechanism itself. In a way we’re still tossing coins, albeit with immensely complicated equipment.

Doing this, we are the weak party, sitting next to one party guaranteed to be strong (the cosmos) and one that must be assumed (often rightfully) to be strong, like the NSA.
Both of them have more atoms, more capability to deal with complications, and sheer size beyond ours. In other words: Mother Nature is known to have virtually arbitrarily complex mechanisms for creating “random”, and the NSA and its accomplices should be assumed to have more, more advanced, and better means and resources.
So, it seems, largely ignoring one of them and racing against the other is an undertaking with rather low chances of success.

Hence my proposition to co-operate with the strongest player, Mother Nature, and to stop creating randomness but rather use whatever existing randomness is available, creating “managed randomness” (because that’s what we’re basically after) by mixing and muxing those sources in a relatively simple — but very versatile — mechanism.
After all, we are not after statistically excellent randomness; that’s just a means. We are looking for unpredictability. We are not working on fair and just lotto machines; we are working on the other side having to pick the right values out of crazyllions of values whose order they have no chance of knowing.

Maybe waves are not fairly and justly distributed across any given stretch of ocean. But they are certainly random enough to deny us any realistic chance of predicting them (in fact so much so that we only recently found out that there is quite probably quantum mechanics at work).

Thanks for being patient with me and looking again at randomness. I’m convinced randomness, its diverse qualities, and the mechanisms behind it deserve that (myself possibly not, but thanks anyway *g)

Clive Robinson June 3, 2014 7:14 AM

@ Jacob,

There are problems with pooling entropy sources that can be quite problematic.

For instance, let’s take a “perfect die”: its distribution is flat for as long as you care to keep throwing it.

Now take two such dice, throw them together and add their values. After quite a short while you will see that the distribution is no longer flat but pyramid shaped.

With three dice the pyramid starts to acquire curves, and the middle looks like a normal distribution curve. The more dice you use the better the approximation; usually four successive readings from a TRNG are sufficient to give a normal distribution for the majority of simulations.

So adding the values is not a good place to start. However, the next thought, XOR, is not good for the same reason: the addition is on a bit-by-bit basis (i.e. a parity count, not a parity bit), thus XORing two or more byte-wide generators is not going to give you what you want either. The same is true for subtraction and multiplication, and, as we are really talking about integer fields, division as well….

The big problem with “home brew” entropy pools is that the designers use a simple function such as addition to mix in, and then use the likes of a hash to produce the output. Because of the hash it will pass the “usual suspects” tests of diehard(er) etc., but the input to the hash almost certainly won’t… Thus the designers are guilty of “magic pixie dust thinking”, of which the designers of many in-chip –supposed– TRNGs are also guilty (cough Intel cough).

If you are having trouble visualising the problem, think of ten-sided dice: three of them could be seen as an odometer-style counter going from 000 to 999. Add the digits up crosswise and the total ranges from 0 to 27; the thousand individual variations fall in that range in approximately a normal distribution.

There is a simple way to get a flat distribution back, but you have to be aware of it and use it correctly (Donald Knuth goes into it in his compendium “The Art of Computer Programming”).
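The collapse of the flat distribution is easy to demonstrate by enumerating two dice, and one standard repair, reducing the sum modulo the number of faces, restores flatness (whether that is exactly the trick Knuth describes I leave to readers of TAOCP):

```python
from collections import Counter
from itertools import product

faces = range(1, 7)

# plain sum of two fair dice: triangular ("pyramid"), not flat
sums = Counter(a + b for a, b in product(faces, faces))
print(sorted(sums.items()))  # 7 occurs 6 times; 2 and 12 only once each

# the same sum reduced modulo 6: every residue equally likely again
mods = Counter((a + b) % 6 for a, b in product(faces, faces))
print(sorted(mods.items()))  # each of the 6 residues occurs exactly 6 times
```

The modular sum works because, for any fixed value of the first die, the second die sweeps every residue class exactly once; this holds for any number of independent fair dice.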

Clive Robinson June 3, 2014 7:28 AM

@ Jacob, tjallen, wael, Mr. Pragma,

Some of you may remember discussing “coin tossing” on this blog some time ago.

As I pointed out then, very occasionally the coin lands on its edge and stays there; I’ve seen it happen twice in forty years (and once chatted with Terry Pratchett about it, as he had also seen it happen). When it does happen in front of other people, they behave as if you have just pulled off the most amazing magic trick they have ever witnessed.

I also mentioned on that thread how to cheat at coin tossing, which, judging by some of the responses, people did not think was possible… and they were thus surprised how easy it is to do with only a little practice.

Jacob June 3, 2014 8:03 AM

This is a very interesting fact. In practical encryption software that I’ve looked at, I noticed that authors normally either XOR each additional entropy source with the current pool content, or hash the new source and then XOR the hashed result with the current pool content. If I read you correctly, this still leaves us with a non-uniform pool distribution: a normal one rather than a flat one. I will need to think this over.

Thank you.

Wael June 3, 2014 8:52 AM


I see you want to test (or maybe help) a commenter you find mathematically naive, and that’s okay.

No, that’s not OK. I didn’t think you were mathematically naive; I don’t think that way. The comment about Gödel was directed at @Clive Robinson. We had an unfinished discussion about it a year or so ago. Glad the other links helped.

Clive Robinson June 3, 2014 10:46 AM


Your definition of randomness fits in with the idea that our tangible physical universe is a subset of the intangible information universe (an idea I also subscribe to).

Which is a part of what Seth Lloyd and, more recently, others have been pondering.

Interestingly, considering David was the major proponent of the many-universes theory, it appears he believes that only the subset of information within our universe can exist for us…

It’s fairly new so something I will have to ponder and mull over for a while…

Jacob June 3, 2014 12:20 PM

Apropos (T)RNG – from the rebuilding crew of openSSL:

A few months back there was a big community fuss regarding direct-use of the intel RDRAND instruction. Consensus was RDRAND should probably only be used as an additional source of entropy in a mixer.

Guess which library bends over backwards to provide easy access to RDRAND? Yep. Guess which applications are using this support? Not even one… but still, this is being placed as a trap for someone.

Send this support straight to the abyss.

— theo

juelvo June 3, 2014 1:14 PM

The Russian FSB (Federal Security Service) strengthens its grip on the Runet.

Thus, the companies will be required to keep extensive records of their users’ activities (user identifiers, e-mail addresses, contact lists and categories, amounts of sent/received data, account-removal attempts, etc.). The data-retention requirement is set to 6 months.

That’s a really nice addition to the recent law regarding bloggers (Bruce has posted about it). Sadly I was not able to find anything about this in English.

Mr. Pragma June 3, 2014 2:32 PM

juelvo (June 3, 2014 1:14 PM)

Oh yes, those evil despotic Russians!

As we just talk so nicely about evil Russians … it might interest you that quite the same rules are about to be enforced by the “good, democratic, freedom-loving (TM) european union” and are already applied by some euro-countries. In Germany, for instance, some providers are known to keep those data not for 6 but for 12 months.

Never mind, Western countries are “good, democratic, freedom-loving (TM)” and those Russians are evil and led by a dictator (who just happens to have way more votes and support in his country than many Western leaders combined…)

@anonymous ass (June 3, 2014 2:02 PM)

If that makes me think or assume then I’d rather assume that those issues were factors for Bruce Schneier to leave brit t’com.

But, of course, implying that he, too, is a part of the evil side, makes you sound so much more interesting and smart.

Clive Robinson June 3, 2014 3:20 PM

With regard to The Register article and BT’s involvement in spying:

First off, BT has been at it since long before it was called BT. It was involved via the so-called “secret squirrels” –security-cleared Post Office engineers– back when it was part of the General Post Office (GPO), which combined what is now Royal Mail (post & parcels) and BT (telex, telephone and wireless) under UK Government control at Cabinet-minister level.

Secondly, the article refers to what it calls the little-known

National Technical Assistance Centre (NTAC)

NTAC is mainly the result of RIPA and a couple of other UK Acts and Statutory Instruments. RIPA gave “Uncle Tom Cobley and all”, right down to some clerk in the local town hall, the right to snoop on people in all sorts of ways. One issue with this is that “tapping of phones” and other communications (mobile, Internet) is way beyond the abilities of those allowed to do it, and it would be undesirable for them to venture forth with cabinet keys and wire cutters, for not just their own safety but the safety of the networks. Further, it would be undesirable for Uncle Tom etc. to have access to GCHQ at Cheltenham and the MIs over at Hanslope Park etc., so NTAC, as part of its duties, has become the interface between Uncle Tom and the squirrels & spooks.

Anura June 3, 2014 3:47 PM

@Clive Robinson

The big problem with “home brew” entropy pools is the designers use a simple function such as add to mix in and then use the likes of a hash to produce the output. It will because of the hash pass the “usual suspects” tests of die hard(er) etc, but the input to the hash almost certainly won’t… Thus the designers are guilty of “magic pixe dust thinking”, which the likes of many in chip –supposed– TRNG designers are guilty of (cough Intel cough).

The big problem is that you might inadvertently reduce the amount of information contained in the data you feed the entropy pool. There are two rules you can follow to prevent this. The first rule is to never write your own function to normalize the data; that’s what the hash function is for. Don’t XOR every group of bits, don’t add the data: take your results and feed them directly to the hash function. If you roll 5 dice, and the results are 5, 3, 3, 2, 6, then your function is HASH(“5,3,3,2,6”); you could encode it as 6784, but I don’t see the point.

The second rule is to always encode your data unambiguously. Each piece of data you encode should be prefixed with the datatype, possibly a unique identifier for the source itself, and the length of the data if it is variable width. If you are adding to an existing entropy pool, then you should probably prefix your data with the existing entropy pool.
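Both rules fit in a few lines; here is a sketch in Python (the one-byte type tags, the big-endian length prefix, and the chain-the-old-pool detail are choices of this example, not any standard):

```python
import hashlib
import struct

def encode_field(tag: int, data: bytes) -> bytes:
    """Unambiguous encoding: 1-byte type tag, 4-byte length, then the data."""
    return struct.pack(">BI", tag, len(data)) + data

def update_pool(pool: bytes, tag: int, raw: bytes) -> bytes:
    """Mix a sample into the pool by hashing the old pool together with the
    tagged, length-prefixed sample (no hand-rolled XOR/add normalization)."""
    return hashlib.sha256(pool + encode_field(tag, raw)).digest()

pool = b"\x00" * 32
pool = update_pool(pool, tag=1, raw=b"5,3,3,2,6")       # the dice example
pool = update_pool(pool, tag=2, raw=b"mouse:1402,773")  # another source
```

The length prefix is what makes the encoding unambiguous: the pairs ("ab", "c") and ("a", "bc") can no longer produce the same hash input.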

name.withheld.for.obvious.reasons June 3, 2014 4:00 PM

CSIS Meeting, Washington DC 3 June 2014
Vickers, Undersecretary of Defense for

Intelligence, Key Speaker

  1. Announce the launch of the continuous
    monitoring program for/of DoD personnel.
  2. Intelligence sharing with need-to-know,
    and updating the systems ICITE – Joint
  3. Information Sharing needs to be more
  4. Needs more money to be “resourced”.

I believe they need to be held accountable
for the spending that has already accrued.
Don’t take my money, privacy, and future–bitch.

AlanS June 3, 2014 7:55 PM

Another legal ruling on NSA spying today: Smith v. Obama

The judge dismisses the case because he feels constrained by precedent, but expresses the opinion that the precedent (Smith v. Maryland, 1979) is dated and needs to be overturned. He concludes by describing Judge Richard Leon’s decision in Klayman v. Obama (2013) as “thoughtful and well-written”.

“[Leon] distinguished Smith by finding that the scope and duration of the NSA’s collection is far beyond the individual pen register at issue in Smith. Of critical importance to Judge Leon was that Smith could never have anticipated the ubiquity of cell-phones and the fact that “people in 2013 have an entirely different relationship with phones than they did thirty-four years ago.” As he eloquently observes, “[r]ecords that once would have revealed a few scattered tiles of information about a person now reveal an entire mosaic—a vibrant and constantly updating picture of the person’s life.”

…Judge Leon’s decision should serve as a template for a Supreme Court opinion. And it might yet. Justice Sotomayor is inclined to reconsider Smith, finding it “ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks.” See U.S. v. Jones, 132 U.S. 945, 957 (2012) (Sotomayor, J., concurring). The Fourth Amendment, in her view, should not “treat secrecy as a prerequisite for privacy.”

AlanS June 3, 2014 8:04 PM

The plaintiff in Smith v. Obama is taking the case to the 9th Circuit U.S. Court of Appeals.

Wael June 3, 2014 10:33 PM

@Clive Robinson,

Your definition of randomness fits in with the idea

I started reading the first link until I reached this sentence:

Tasks may be composed into networks to form other tasks, as follows. The parallel composition A ⊗ B of two tasks A and B is the task whose net effect on a composite system M⊕N is that of performing A on M and B on N. And when Out(A)= In(B), the serial composition BA is the task whose net effect is that of performing A and then B on the same substrate. A regular network of tasks is a network without loops whose nodes are tasks and whose lines are their substrates, where the set of legitimate input states at the end of each line is the set of legitimate output states at its beginning. Loops are excluded because a substrate on a loop is a constructor.

You know, I like sushi and sashimi to some extent. I eat salmon or tuna sashimi; I don’t like white-fish sashimi. If normal sentences were sushi, this paragraph would be a live lobster that got its tail cut off in front of you, while the rest of it crawls on the table and its blood is collected in a glass and presented as a drink. No thanks… I’ll take your word for it. Needless to say, I stopped there. Searched for “random” — could not find it. A lot of “conjectures” — 11 of them…
The second link: eh, doable. Seems we are heading back to the old ages where mathematicians and physicists become philosophers 🙂

Ok, so he added another level of indirection. How hard is it to say that?

koita nehaloti June 3, 2014 10:45 PM

The Young Turks report:

Russia’s Online Troll Army Is Huge, Hilarious & Already Everywhere

“Russia’s campaign to shape international opinion around its invasion of Ukraine has extended to recruiting and training a new cadre of online trolls that have been deployed to spread the Kremlin’s message on the comments section of top American websites.”

Top sites? Like this, maybe?

UnitedStatesConstitution June 3, 2014 10:51 PM

“Another very major disadvantage can be seen with the openssl disaster” — a bit overdramatic, I think. I didn’t see any reports of people losing bank accounts or anything, just a lot of hype and bullshit. The mailing list for my distro didn’t really get excited; they patched it, but it was not bleeding private keys like FreeBSD. Most users don’t even use SSL. I have worked with people trying to get them to use it, and most of those who got excited were people who had used clear-text passwords for years and had no idea anyone could grab them. I used to work with Windows (not for it) until I hit Windows Update one day and the NOD32 I had used for years locked up the box trying to install it. If NOD32 blocked it, I didn’t need the patch; when I looked at what the update was, I knew it was the “welcome to China” update — files that had always been add-ons to Windows, for Microsoft networking control of end-user machines. I used to use them to control what users could do. My box was stand-alone.
Some distros only had the bad code a short time, and in that time how many people actually knew about the bug? Some distros do not run to the latest, greatest code ASAP like jackasses. People whine about these distros for not doing so, but those distros are aware that new code, free or paid, has BUGS. Every new Java has had these bugs, and people on Windows have been using an open-source web browser, Firefox, for years to have a secure browser; many sites would not let you use IE.

There have been a ton of Windows disasters over the years, and with one of them Microsoft themselves ran to Linux to save their own ass. Anyone remember Windows NT 4 Service Pack 5 (or was it 5a?) that wiped out servers?

I have been running the same box on the same hard drive for a long time; it’s on 24/7 and does not need to be rebooted. It runs a web server, a Mumble server, a Tor exit node, I2P, YaCy and Freenet. I admit I have to turn off Freenet when I game. All on free software except some of the games. 8 MB of RAM and a bad $135 CPU; I use dd to clone backups.
For the price of Windows 8 I can buy a new CPU and probably some more RAM.
There have been tons of patches lately; they are looking for the NSA, and if it got in here they will find it. You will not find the NSA in Windows or Mac — or maybe someone already did, with the encryption hijacking that’s been going on. Not hijacked; I’ll keep my Linux, thank you! I’ll be damned if I know why this distro is the number one web server in the world — must be because it’s cheap, certainly not because it’s better.

barak Osoma June 3, 2014 11:12 PM

@juelvo that must be the reason there are far more i2p routers in Russia than anywhere else

Clive Robinson June 4, 2014 1:07 AM

@ Wael,

Ok, so he added another level of indirection. How hard is it to say that?

Not hard at all but it lacks both context and meaning… which is why I gave the two links and the warning of,

    It’s fairly new so something I will have to ponder and mull over for a while…

With regard to randomness, I’m not sure he really believes in it in any way differently from you. If you look at his many-universes idea, he indicated that “all possible states” existed within them. Thus what may seem random to us in our universe would be ordered and explainable in (an)other universe(s) with the appropriate states.

The problem, as far as I’m concerned, with the many-universes view is that we are effectively bound by our own universe (via conservation of mass/energy). Thus I view our POV as one element from a set of elements whose size and number we cannot –currently– know, but we do know that combined they represent all other states of information in some combination, and therefore the logical minimum is one other element. Thus I call the set the intangible information universe for the sake of easy explanation.

The problem I’ve had with the whole set of ideas is the notion that you cannot have a true region of emptiness; that is, even in the best of vacuums energy/mass spontaneously appears, and thus something else has to disappear at the same time to ensure the principle of the conservation of mass/energy (“you can’t get ought for nought”). Whilst the mass/energy might remain constant, entropy certainly does not, which raises the intriguing possibility that information flows from universe to universe… The question then arises as to how this potentially context-less information presents itself in a universe such as ours.

As you note,

Seems we are heading back to the old ages where mathematicians and physicists are becoming philosophers

Which raises the thought of “natural or otherwise”, or as Shakespeare more eloquently put it, “a rose by any other name would smell as sweet”. Perhaps the difference between musing and mulling is the existence of a muse to fixate upon 😉

It’s now gone 7 AM in the UK and the night has been near sleepless again, so it’s time to drag the weary carcass to the kitchen to make a pot of Douglas Adams random generator to “establish my normality” 🙂

Wael June 4, 2014 1:34 AM

@ Clive Robinson,

Thus what may seem random to us in our universe would be ordered and explainable in (an)other universe(s) with the appropriate states

Got it. Thanks!

Wael June 4, 2014 1:46 AM

@Clive Robinson,

kitchen to make a pot of Douglas Adams random generator to “establish my normality” 🙂

Almost missed this one! Very clever! You’ll need to drink at least three uniform cups to get closer to “normal”, or Gaussian..
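There is real math behind the joke: by the central limit theorem, the sum of independent uniform draws (the Irwin-Hall distribution) approaches a Gaussian. A small illustrative Python sketch; the function name and trial counts are mine, chosen purely for the demo:

```python
import random

def pot_of_tea(cups: int, trials: int = 100_000, seed: int = 42):
    """Sum `cups` independent U(0,1) draws per trial (Irwin-Hall);
    by the central limit theorem the sums approach a Gaussian."""
    rng = random.Random(seed)  # fixed seed: a demo, not a CSPRNG
    sums = [sum(rng.random() for _ in range(cups)) for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return mean, var

# Theory: mean = cups/2 = 1.5, variance = cups/12 = 0.25 for three cups.
mean, var = pot_of_tea(3)
```

Three cups already get you surprisingly close to "normal": the sample mean and variance land near the theoretical 1.5 and 0.25.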

Wael June 4, 2014 2:05 AM

@Clive Robinson,
Thank you for posting the links. Now I realize why you posted them. You posted them to help me with this part:

I can’t prove this view, nor have I tried to do so

I think from now on, I’ll read your posts a few times before I reply… I’ll stop on this thread now before the Moderator loses his temper.

unimportant June 4, 2014 6:45 AM

How to comfortably silence standby mobiles: someone could manufacture a cover with two open ends, supplemented by a fabric pouch fixed inside the centre of the cover, which holds the mobile when it is inserted into either side. One open end of the cover is used as usual, and the other open end shields the mobile's RF antenna (e.g. with aluminium foil lining that half of the cover).

Clive Robinson June 4, 2014 7:28 AM

@ keiner,

Everybody who has ever worked directly or indirectly for the GPO / BT has had part of their payment as a result of UK Gov payments made for interception work. Whilst most employees on the technical side were aware of legal warranted wiretap work and counter-espionage for the MIs, Peter Wright's "Spycatcher" book spelled it out quite clearly for anyone who could read.

However, what The Register article refers to was sufficiently classified that I doubt those at even senior management, executive or director level would have known. Even those fairly intimately involved with setting it up are highly unlikely to have known what they were working on was going to be used for.

Further, it is not unknown for large companies to have little or no knowledge of many parts of their business except via the accounts and sanitised management reports. Sometimes some are even effectively "off book", as "skunkworks" projects for sensitive R&D work hidden within an unrelated division's project budget. It's also not unknown for companies to have no visible reporting lines for low-level parts of the organisation, so that other parts of the organisation cannot easily, if at all, check which part, if any, another office is responsible to... One example is IBM: its global reach and name made it an ideal cover for, amongst others, the CIA and Mossad to set up entirely fictitious organisations where even the local employees believed they worked for IBM... They even purchased or had transferred equipment from other parts of IBM, and in at least one case involving Mossad that I'm aware of, actually subcontracted work from other parts of IBM.

When it comes to this level of security, "no need to know" applies to just about everybody, which is why it's possible for the likes of what went on with the CIA and the Contras, Air America, cocaine sold on the streets of the US, and money laundering of drug profits through the later-to-collapse bank BCCI, which was only half-jokingly referred to by many as "The Bank of Cocaine and Criminals International". Often officially dismissed in one way or another as part of operational necessities, but neglecting to mention how those involved profited either financially or through promotion/power etc.

Mr. Pragma June 4, 2014 7:46 AM

koita nehaloti (June 3, 2014 10:45 PM)

What incredible Bull**it!

For a starter: assume that there is such a group; is that then "the Russians"? There are many, many us-americans engaged in diverse groups, and no reasonable person would suggest that therefore "the americans" do propaganda.

Next, after that sabu rat, the credibility of Anonymous and similar groups is next to zero. Yet that Young Turks guy sells their blabla as if it were right out of the book of truth.

Most importantly though, Russia has no need whatsoever for propaganda in that matter because there is lots of proof. Proof of the usa-instigated and sponsored nazi regime shelling civilians, as well as proof of Russia being the responsible party in negotiating and trying every diplomatic approach.

It seems that president Putin's laws against usa-sponsored social terrorists are still too soft.

Jacob June 4, 2014 9:25 AM

Fresh off the BBC:

Germany will open a formal investigation into the tapping of Chancellor Merkel’s phone, as well as into the surveillance of the German population by the NSA.

My favorite line from the news item:

“Mr Obama told the German chancellor last month that he was “pained” that Mr Snowden’s disclosures had strained the US-German relationship.”

juelvo June 4, 2014 9:30 AM

Mr. Pragma,

Oh yes, those evil despotic Russians!

As we just talk so nicely about evil Russians … it might interest you that quite the same rules are about to be enforced by the “good, democratic, freedom-loving (TM) european union” and are already applied by some euro-countries. In Germany, for instance, some providers are known to keep those data not for 6 but for 12 months.

Never mind, western countries are "good, democratic, freedom-loving (TM)" and those Russians are evil and led by a dictator (who just happens to have *way more* votes and support in his country than many western leaders combined...)

I know about the EU (although the Data Retention Directive has recently been deemed illegal by the EU Court). What's your point?

juelvo June 4, 2014 9:33 AM

Something has mangled the quoting. My response starts at “I know about EU…”

Mr. Pragma June 4, 2014 9:35 AM

Clive Robinson (June 4, 2014 1:07 AM)

I'm feeling somewhat unhappy to counter (some of) your remarks, but I'm afraid I have to (e.g. random in another universe).

What is random?

We can't be sure (as long as quite a few atoms and mechanisms in the universe are unknown or not fully understood), but it seems that random is... tata... complexity.
Behind what we perceive as random is (almost certainly) a level of complexity high enough to produce something that seems to be unpredictable. A wave on the ocean, for instance.

Unless we assume that we would, by some magic, be all-knowing in another universe, we would experience random there, too.
True, in that universe mechanisms that are not well understood here (at least not well enough to produce repeatable prediction) might be perfectly disclosed and "visible" to us; other mechanisms, however, would not, et voilà, random enters the stage again.

That's also why I don't care too much about distribution. Yes, a shifted distribution currently, i.e. in our current systems based on our current understanding, usually indicates a weakness.
In the end, however, random in crypto systems is about unpredictability. I dare to postulate that we can create a very high level of unpredictability even out of mediocre sources if we properly "mix" them up non-linearly, i.e. across multiple domains, in complex ways and preferably with no or very little state (although, of course, non-mediocre sources are highly desirable).

The decisive factor to look at is the complexity of the mechanics behind the sources. Given three sources of random based on partly high and partly very high complexity (combinatorial quantum effects come to mind), using one of those sources (source A) as the frequency at which source B applies itself as a selector on functional variations of source C would create unpredictability way beyond nsa's cracking abilities, in other words of extremely high unpredictability, even though, btw, the distribution might be non-even.

Simple reason: We are not good at creating random and we can’t be because in the end optimal random is by definition complexity beyond our intellectual (or at least processing) reach. Nature, however, is.

Another point that might be interesting is that crypto in general is largely based on problems that are not computable (or not computable in reasonable time). Which again might as well be re-worded as "complexity beyond our capabilities".

This "we understand the basic math and mechanics behind it but we can't compute it in reasonable time" approach has served us well. It seems that we would be well advised to use that basic approach again, but from another angle: generate random, i.e. unpredictability, not by trying to create it ourselves mathematically but by making use of the very fabric "real" random is made of, complexity over multiple domains.

Apologies for probably unnerving some users here 😉

Mr. Pragma June 4, 2014 9:41 AM

Jacob (June 4, 2014 9:25 AM)

Yes, right!

obama sounds like a guy telling you “I fu**ed your wife, cut your tires, abused your daughter. What a shame that some evil guy told you about all that such straining our good relationship”.

Not quite as bad, but similarly telling, is the German government's reaction. Basically they confess: "yes, it's true; we are but a usa vassal and colony. Too bad that certain events disturbed our 'independent, free, and democratic country simulation'".


try something smarter. Meanwhile nuland yourself.

Clive Robinson June 4, 2014 11:32 AM

@ Mr. Pragma,

The problem, as you identified, is "What is random?" Short straight answer: it's whatever you want it to be, because nobody has come up with a reasonable definition that is not self-referential.

Thus there are many definitions of random, ranging from "unpredictable" through "noise like" to the likes of "deterministic complex", "entropy", "faux entropy", etc. Which means it is better to define your version of random by the application you wish to use it in.

So your points relate to some of the practicalities of "random sources in use". However, this does not answer the philosophical questions of what truly is random, how we recognise it, and whether it's even possible to measure it in order to do so...

As I've said before, I'm not a great believer in things that cannot be measured and thus classified etc., because they do not fall within the fundamental tenets of science. That said, it's relatively easy to show that information, like mass and energy, is finite in this physical universe, because to transmit it, store it or process it, information needs to be encoded or impressed in some way on matter or energy (which are assumed to be both equivalent and finite in our physical universe). Further, and importantly, there is the notion that information at a fundamental level cannot be destroyed. Together this means we cannot know everything unless particles of matter are capable of infinite storage capability...

So random data could be regarded as the result of a process we do not currently know, and may never know due to the finite information capacity of our universe. This raises the question of what happens to random when our universe has reached its storage capacity (if not the universe itself).

Are these questions of practical use? Probably not, but we still seek answers to them.

As for random being complexity: no it's not, it's a measure of possibility; complexity is a measure of the transform from the current (ordered) state to the new (disordered) state.

As for using the output of one source to sample another source, this usually produces faux random signals. To see this, consider a D-type latch and two fixed but unrelated square-wave sources: the Q output has an apparently complex waveform, but it's not. Its high-frequency component is defined by the frequency into the CLK input, and its low frequency is defined by the frequency difference between the two signals. Further, looking at the waveform with a suitably set-up oscilloscope shows that the pulse widths of the output pulses trace out a sine wave, which can easily be recovered by a suitable lowpass filter, be it analog or digital. The only real randomness in the waveform is the result of jitter on the edges of the two input square waves, which can in fact be very, very small (i.e. less than -77dBc of the low-frequency waveform). Using any number of square waves will actually only improve the randomness, on average, by the root of the number of sources... It's a fact that appears lost on the designers of some in-chip RNGs, and the organisations they get to test those RNGs... which is maybe why they like a hash of a sine wave, as the hash provides the complexity of transform that hides the otherwise minimal entropy from the entropy tests.
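The point can be seen in a toy simulation: with ideal, jitter-free square waves, the latch output is completely deterministic, so its apparent complexity carries no entropy at all. A hypothetical Python model (function names and frequencies are mine, chosen arbitrarily):

```python
def square(freq: float, t: float) -> int:
    """Ideal (jitter-free) square wave: 1 during the first half of each period."""
    return 1 if (t * freq) % 1.0 < 0.5 else 0

def d_latch_sample(f_data: float, f_clk: float, n: int) -> list:
    """Model a D-type latch: on each rising clock edge (every 1/f_clk
    seconds), Q takes the current value of the data input."""
    return [square(f_data, k / f_clk) for k in range(n)]

# Two close, unrelated frequencies: the Q output looks busy, but its
# low-frequency content is just the 100 Hz beat (difference) frequency.
# Absent real edge jitter there is zero entropy: reruns are identical.
a = d_latch_sample(10_000.0, 10_100.0, 500)
b = d_latch_sample(10_000.0, 10_100.0, 500)
```

Feeding such an output into an entropy test after hashing it, as Clive notes, only disguises the problem.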

Moderator June 4, 2014 12:18 PM

Mr. Pragma, please refrain from insulting other commenters like that. The word substitution really doesn’t make it any better.

Mr. Pragma June 4, 2014 12:46 PM

Clive Robinson

a) If random meant whatever one pleases, we could immediately stop all discussions and any work on it. Fortunately your opinion is incorrect.

b) In the context of crypto, random is just a means, not the end/goal. What we try to achieve is actually unpredictability. This may be a lousy approach compared to philosophical musings, but that's what it's about here in this context.

c) We may bring up as many observations and reminiscences concerning sine waves and whatnot, and we may indulge in arbitrarily many academic-sounding know-how droppings, but in the end the issue at hand is to achieve high levels of unpredictability.

And indeed (what a surprising coincidence) nsa and accomplices have again and again attempted to undermine exactly that, unpredictability, by weakening, undermining, and even crippling the mechanisms devised to achieve it.

I'm not driven to convince anybody of my view, and I don't care batshit whether my thoughts and observations get any significant attention from the western crypto community. What I do care about, however, is civilized and constructive dialog with colleagues (like you). I would welcome it if you had a similar attitude (as you usually do).
And I think that should be quite achievable because we almost certainly share a common goal, to understand and to create better security.

Mr. Pragma June 4, 2014 12:54 PM


What is it that you find unpleasant? The term "to nuland", or its usage and the perceived insult?
If it's the term, you should resolve that with the (self-censored) in your state department.
If it's the perceived insult, you might want to consider changing the perspective that finds the request "nuland yourself" unacceptable and insulting but has no qualms whatsoever about other users insulting and ridiculing a whole people and, along the way, many victims of usa-sponsored terrorism leading to children and hospitals being shelled.

And no, I'm usually not one to debate a moderator. When you gave me a hint recently, I obliged and stayed away from a matter that you wanted to quietly pass away.

And btw, maybe to nuland oneself means to treat oneself with cookies. Obviously my intention was not to insult but to show utter dismay; otherwise I would have used the word that you felt was implied.

Moderator June 4, 2014 1:10 PM

And btw maybe to nuland oneself means to treat oneself with cookies.

I am not going to debate this with you, especially not when you come back with this kind of obviously insincere nonsense. You need to treat other commenters better. If you can't or won't do that, you need to find another forum.

Benni June 4, 2014 2:08 PM

DER SPIEGEL writes here that not everything in its articles on NSA originates from
Edward Snowden.

Instead, Snowden’s documents provided the starting point for others to share their knowledge:

That's very funny, I think. Perhaps NSA will soon face a similar fate to the German BND, where most of its weekly reports to the government end up in DER SPIEGEL.

juelvo June 4, 2014 3:00 PM

Mr. Pragma,

try something smarter. Meanwhile nuland yourself.

what’s your point again? Aside from rambling thoughts about “evil Russians”, which I have never mentioned in my posts.

Moderator June 4, 2014 3:18 PM

Juelvo, I don’t think anything good is going to come out of you and Mr. Pragma continuing that discussion. Please let it go.

Clive Robinson June 4, 2014 5:58 PM

@ Nick P,

Even on low-power eight-bit microcontrollers, interrupts need a sensible strategy to deal with. Those new to embedded programming usually make the mistake of trying to overload the interrupt handler that talks directly to the hardware.

A sensible way to do it is to have a fast interrupt handler read a byte or word from the hardware register into a circular buffer (or write one out), set a flag, and clear the hardware interrupt as fast as possible.

The data in the buffer is then processed by a slow interrupt handler. This is usually hung off the system timer update or heartbeat function. The timer interrupt causes the tick counter or clock to be updated; it then increments a counter which it uses to select the slow handler for a hardware fast-interrupt buffer. Usually this will move data from/to the fast interrupt's circular buffer, through some sanity checking (such as CR/LF handling or flow control), into a more normal linear array buffer which the "application" sees.
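The fast/slow split described above can be modelled in a few lines. This is a Python sketch, not real ISR code (class and method names are mine); real firmware would do this in C, with interrupts masked around the shared indices:

```python
class UartRing:
    """Toy model of the two-stage interrupt strategy: the 'fast ISR'
    only stores the byte and sets a flag; all real work is deferred."""
    def __init__(self, size: int = 64):
        self.buf = [0] * size
        self.head = self.tail = 0
        self.data_ready = False

    def fast_isr(self, byte: int) -> None:
        # Keep this path minimal: one store, one increment, one flag.
        self.buf[self.head] = byte
        self.head = (self.head + 1) % len(self.buf)
        self.data_ready = True

    def slow_handler(self, line_buf: list) -> list:
        """Called from the timer tick: drain the ring, do the sanity
        checking (CR/LF handling here), return a completed line if any."""
        done = []
        while self.tail != self.head:
            b = self.buf[self.tail]
            self.tail = (self.tail + 1) % len(self.buf)
            if b in (0x0D, 0x0A):      # CR or LF terminates a line
                if line_buf:
                    done = line_buf[:]
                    line_buf.clear()
            else:
                line_buf.append(b)
        self.data_ready = False
        return done
```

The "application" only ever touches the linear `line_buf` and the completed lines, never the hardware-facing ring.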

Whilst this might appear overly complex, it effectively deals with most types of interrupts that are sporadic but need a fast response, without locking the CPU out of its foreground app and stuck in interrupt code. Which makes the "real time" response faster and much less likely to cause other interrupts to be missed or off time.

From this it can be seen that offloading the fast interrupt handlers onto another CPU or core would be beneficial, and this is done in some hard disk microcontrollers where the response and data handling would not otherwise be possible.

The big issue, though, is the interrupt response time in terms of switching context. For small and medium-size microcontrollers this is usually very fast and happens at the end of the currently executing instruction. However, the more complex the CPU and interrupt structure gets, the longer this context switch takes, and this "setup & teardown" time can quickly mount up. It gets considerably worse when it involves cache or certain types of RAM where true random access to single words likewise carries a significant time penalty (when compared to burst reads of contiguous memory locations).

All that said, there are still times when RTOS systems will starve due to bursts of interrupts, so sometimes there is no choice but to use a much higher-specification part to cover this contingency, or take the heretical view of dropping lower-priority interrupts and cleaning up down the line...

There is in reality "No Right Way", just "Wrong ways that work" most of the time; a thought you don't want to be thinking about the auto brake system on your car as you stamp on the brakes, just in case it turns into the last thought that crosses your mind...

Figureitout June 5, 2014 12:29 AM

Clive Robinson
–I think you should emphasize "wrong ways that work most of the time"... Weird bugs and the horrible code nightmare happening in modern PCs that have been connected to the internet, have all sorts of peripherals, and have downloaded all sorts of software; it's just a mess that continues to interact w/ itself. Hardware bugs are an even more fun mystery, for which you need to whip out some test equipment that needs to have... more pre-programmed chips... and push out and receive signals... which leads to a link...

From further correspondence w/ Dr. Skorobogatov:

It's a PowerPoint-like link that goes over some personal nightmares like test equipment that fails (providing false readings, which will lead to development hell), FPGA backdoors, smartcard backdoors (according to him, Java/BASIC cards are the least secure...), even the 16-bit MC68000, which a writer on Hackaday is basing a computer on. I liked the "metal mesh" anti-tampering that proved to be a bit of a hassle for RE. Eventually "putting the cherry on top" w/ fault injection (cringe...). Lots of assembly breakdown.

All research expected to be published next year.

Clive Robinson June 5, 2014 2:09 AM


OK, how about,

    Two fish in a tank, one turns to the other and says, “I hope you know how to drive this?”[1]

Back in the distant past of this blog, I used to post a few funny items on a Friday as a sort of "dress down" for the weekend; sadly such humorous items appear to have been taken over by "Cute Kitties" these days. Personally I blame the dancing hamsters, if you are old enough to remember them...

[1] No it’s not my best joke, but it is “safe”, as @Wael can confirm sometimes the moderator can be “shocked”, so people should take care.

Clive Robinson June 5, 2014 6:37 AM

OFF Topic :

As I mentioned baby cats earlier… how about some items that might just cause an effect similar to a cat in the pigeon coop.

First up, more on Florida and the use of Stingray units in ways that are probably illegal,

For some strange reason Microsoft is going to Israel for its cyber security development,

Third up, Google is using encryption as a talking point to indicate deficiencies with Comcast and Microsoft webmail etc services,

If none of the above have caused raised blood pressure or palpitations, how about,

As all of you should know, tomorrow is the 70th anniversary of the D-Day landings, where many died trying to get a toehold in Europe to drive back Hitler's troops. Slightly later, the Pacific war ended what many regarded as WWII. Less well known was the contribution of Native Americans to that conflict. Known as code talkers, they were in effect human cipher systems who removed the delays of traditional coding and decoding of messages, enabling significantly more flexibility and the rapid response that saved many lives. The last of these code talkers has died, aged 93,

Finally, one for those thinking about lightweight OSs for a number of tasks: weighing in at around 30 KBytes, it's small enough for even 8-bit micros (and has been ported to the 6502 to make the point). Although it's been around for over a decade and is thus moderately mature, it's not that well known. However, with the IoT getting slowly closer day by day, Contiki may well be a considerably better bet than Microsoft's closed-source offering and many another OS for smart devices or phones,

Mike the goat (horn equipped) June 5, 2014 8:05 AM

Clive: I recently read a wonderful book (whose title escapes me, but I will post it when I remember) about the Navajo code talkers and their tactical role during WWII. It is quite amazing that the Nazis didn't know what to make of it. I imagine, it being, after all, a natural language, that these days they wouldn't have got away with it.

Nick P June 5, 2014 4:28 PM

Is IBM accidentally delivering us good news?

I was gathering papers from a conference focusing on high assurance systems. It’s kind of old but had some papers I didn’t have. Anyway, I found this nice one from 2010 by IBM:

HWMAC: Hardware-Enforced Fine-Grained Policy-Driven Security

Here are the technologies they mention in their Secure Processor Architecture:

"There are nine technologies in different stages of investigation:
– Object and subject labelling with mandatory access control
– Tagged architecture with automatic state save
– Architecture Support for Modular Software
– Secure recursive virtualization
– Logical partition memory: a) Improved MMU with hypervisor translation; b) Recursive logical partition memory; c) Hardware supported recursive virtualization with hierarchical TLB
– Secure Message Passing Bus
– Hardware enforced protection against timing channels
– LPAR Isolation
– Hardware support for modularizing the kernel"

There are similarities to designs I’ve been working on, although they were a few years ahead. The reason I bring it up is that IBM, esp security legend Paul Karger, betting on such an approach is a huge endorsement that a chip-up method is best. I also thought you all might find this statement interesting:

“This talk focuses only on HWMAC – Only technology for which we have received export control clearance”

This paper is dated 2010 and the full design is not cleared for export. What!? The main group covering software/security export clearances is NSA. This might mean that NSA is quite concerned about the full design ending up in their targets' hands. That should be a hint to NSA opponents just getting started that putting their effort into such a solution is the best option. If done right, it apparently ruins NSA's plans so much they're hesitant to let it leave the country. My bet is they're delaying export so they can scramble to find methods of compromising it, with interdiction or chip-level subversion being prime strategies given the design's strength.

So, teams focusing on secure architectures with tagging, encryption, protected control flow, etc. GREEN LIGHT! GO! GO! GO!*

* And stay the heck out of the Five Eyes countries. And send plenty of backups of your papers, source, etc. to trustworthy people in various countries. Just in case your development team gets bored and tells us to use BitLocker instead. 😉

AlanS June 5, 2014 6:38 PM

@Nick P, Skeptical

Follow-up on the Champion of Freedom thread post here.

Alexander and various others in the executive have made various comments about the damage caused by Snowden. I am guessing that in their opinion all the documents fall under Skeptical's Part B classification, although in hindsight, and in the public glare, they might now admit that some mistakes were made. The problem is that they don't provide much evidence for their claims, and they have a well-established track record of being untrustworthy.

I don’t think it should be assumed that whistle-blowing on secret programs that are “legal and ethical” is a bad thing in a democracy. Is lack of transparency and accountability ethical or is it just corrupt and corrupting?

There have been interesting debates about what newspapers have chosen to publish that address some of these issues. See for example the debate on Espionage Porn and then Balancing the Public Interest in Disclosures and Does “Espionage Porn” Make Us Stronger?

Anura June 5, 2014 6:54 PM

I don't really see the need to release documents from TAO. The mass surveillance, whether it is of Americans or foreigners, I find unacceptable. I have a fairly cosmopolitan world view, so I don't really see why there should be a distinction between American, Canadian, German, Iranian, or Chinese citizens in terms of basic rights. When we deliberately weaken software and protocols to achieve this goal, I think it's more harmful than any potential loss of intelligence.

In the end, I think we are seeing a slow, but steady move to greater security. It’s not moving as fast as I would like, and I don’t think it will ever get where I would like, but it is in the right direction, at the very least.

Anura June 5, 2014 7:08 PM

"It's okay to peek in my neighbor's windows; they aren't my family."

Apathy is dependent on proximity. If a friend or a family member is murdered, it affects you greatly. If it happens to a neighbor on your street, it affects you less so. If it happens to a stranger in your city, you will probably still pay attention. If it happens to someone in a completely different region of your country, you might skim the article. If it happens to someone in another country, then you probably won’t open the article in the first place.

Try imagining a world where we polarize that apathy: you still care greatly about friends and family, but you care about as much about the dead body on your street as you do about that guy in Notmeistan who was stabbed in an alley. It turns into a pretty grim place. My question is what happens if we do the opposite? What happens if we try to care about everyone equally? Do we have the emotional capacity to even do so without a mental breakdown? At least, maybe we wouldn't treat Notmeistanies as if they are somehow lesser than the citizens of our own country. Maybe we would see less war, and less need for this surveillance in the first place.

Wael June 6, 2014 2:01 AM

@Nick P,
Re: IBM’s paper…
Took a cursory glance…
Their proposed memory labeling and MACs is almost a disguised version of ABAC (Attribute-Based Access Control). You can find a paper on the subject from NIST 😉
I think a good addition would be to use CPU-specific encrypted instruction sets to make things difficult for external attackers. Computers would run on CPU cores with embedded keys; software comes in a semi-compiled form, say object code or byte code. The computer then finishes the linking or translation to machine language, and finally signs the code and installs it. Even if an attacker attaches a hardware debugger, the CPU instructions, bus data and addresses, as well as memory, would be indecipherable. Other mechanisms will be needed in addition, as I still see some attack vectors. As you know, I am not a fan of architecting a system while wearing the hat of an "attacker" (me wearing the hat, not the system). But just in case I am wrong...

Wael June 6, 2014 2:27 AM

@ Nick P,
Supplementary, Stardate 11157.1 [1] heading to planet IBM…
To reduce the attack surface further, another component needs to be utilized: the trust warden. It will take the byte code or object code and run static analysis on it. It'll take a manifest, which accompanies the code and lists the functionality, resource requirements, among other fields, and validate that the object code does not violate its advertised functionality. Then the warden goes to the next stage, running dynamic analysis with 100% code coverage. When the code passes this stage, the warden passes it to a subcomponent which fuzzes the code to make sure it's well behaved. When all is fine and dandy, the warden gives it an ABAC label based on the outcome of the tests, and will also disable functionalities that seem fishy... Finally, the CPU commences signing the code and installing it. The warden will still keep an eye on it, and once in a while run a drug test on it...
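The warden pipeline above, reduced to a toy Python model: compare observed behaviour against the manifest's claims and disable anything undeclared. Every check here is a stand-in invented for illustration (the "magic byte" detector especially); a real warden would run actual static analysis, sandboxed execution, and fuzzing:

```python
def trust_warden(code: bytes, manifest: dict) -> dict:
    """Toy 'trust warden': static analysis -> dynamic analysis ->
    fuzzing -> ABAC-style label. All analyses are stand-ins."""
    declared = set(manifest.get("claims", []))

    # Stage 1, "static analysis": pretend a magic byte marks network use.
    observed = {"net"} if 0x42 in code else set()

    # Stages 2-3, "dynamic analysis" and "fuzzing", are modeled as a
    # pass-through here; a real warden would exercise the code.

    blocked = observed - declared   # undeclared behaviour gets disabled
    return {
        "label": sorted(declared),  # the ABAC attributes granted
        "blocked": sorted(blocked),
        "signed": not blocked,      # only well-behaved code gets signed
    }
```

Even this skeleton shows the party-pooper problem Wael raises: each stage is yet another component that itself needs trusting.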

See the problem with this approach? We'll keep adding components and functionalities to "enhance" security. What can I say, man! I am a party pooper 🙂. Make it happen, Number One!


Nick P June 6, 2014 12:15 PM

@ Anura

TAO Catalog: Good leak or bad?

The TAO catalog release is one I went back and forth with myself on. Is it necessary to release the catalog of tools we use in legitimate operations on opponents? That would seem to be a bad thing at first. Upon further thought, I decided the TAO catalog release was one of the best things that happened to INFOSEC. The reason is it does several things at once:

  1. Shows that many weaknesses that "weren't important" were actually used by TLAs, arguing for the bottom-up secure design I've always advocated.
  2. Shows TLAs are employing both passive and active EMSEC attacks in practice, not just in theory. Justifies baking in EMSEC.
  3. Shows NSA has been lying to us and actively weakening U.S. systems.

Each of these is important for those protecting high value assets. Number three is EXTREMELY important. I wrote an essay that gives a few examples of that here:

Can or has NSA used legal powers to force weakened security of US goods?

NSA has been promoting many solutions for our security that have no security advantage at all, but do give them backdoor opportunities. That they do this makes them both liars and a threat to our security. That, for deniability, these backdoors/weaknesses are the same type enemies routinely find and exploit means NSA is "aiding and abetting the enemy." (Didn't they accuse someone else of that recently? And say it's grounds for a treason charge? Moving on...) The TAO catalog provides proof by showing the many ways TLAs such as NSA attack systems. And, interestingly enough, many attacks would work on almost everything the NSA has promoted as secure in the past 10 years.

So, without the TAO catalog, we wouldn't have hard evidence that NSA has been deceiving us and weakening the security of COTS/GOTS offerings. Various people and organizations would still be describing many risks as speculative rather than as real attack vectors in current use. TEMPEST would still seem totally unjustified. And so on.

Now, did they have to leak the entire TAO catalog? Personally, I don't think that was necessary. Most of the benefits I describe could have occurred if the leak had been a description of the capabilities. For instance, they might say "attack kits embedded in connectors (e.g. USB)" rather than showing all the details. They might also say they have rootkits for various servers and networking switches made in the U.S. They might mention wireless survey kits disguised as cell phones without showing which model is used. And so on.

I think the information could’ve been redacted quite a bit. Several parties might have worked together on the real document to create the safer version of it. The safer version would be clear on most capabilities. The parties would vet that these were in the document. The real document could be released if government acted like the capabilities were fake. (I doubt they would.) People would know how advanced the capabilities were, I’d be able to write my essay showing NSA keeps us vulnerable, and various agencies could still use their devices in the field with minimal disruption. It’s a win-win way to do a leak.

Of course, they didn’t do it that way. So, having to choose between no TAO catalog and the whole thing, I think the whole thing coming out is better for us all, for reasons 1-3 above.

Nick P June 6, 2014 12:35 PM

@ AlanS

I appreciate the links. I also like the term “espionage porn” as I think it’s appropriate to many of the stories. I’m holding off on a response for now until Skeptical comes back with his references on various damage and specific revelations.

@ Wael

I believe the paper did have encryption of some sort in the chip. Not sure if it was an accelerator or the feature you described. The PDF reader on this new distro is goofing. (“Linux Distro Wheel of Fortune?”) The ISA concept you recommend is also one of my solutions I’ve posted here, even with an academic paper on a prototype hardware design. The thing I like about the random-ISA method is that it’s simple. It’s ridiculously simple and exploits the fact that the attacker’s code can’t work if the instructions are wrong. That means it potentially defeats all shellcode attacks with almost no effort. That’s why I’ve kept it in my back pocket in case I ever get even limited funding: license the prototype that already works, make some easy mods, throw a trimmed OpenBSD on it, and deploy. The poor man’s “Secure Processor Architecture?” 😉
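For anyone who hasn’t seen the idea before, here is a toy sketch of instruction-set randomization. Everything in it is illustrative, not the actual prototype’s design: the XOR encoding, the key size, and the byte strings are all assumptions. The point it demonstrates is the one above: injected code that never went through the loader decodes to garbage at fetch time.

```python
import os

# Toy instruction-set randomization (ISR) sketch. A secret per-boot key
# encodes legitimate code at load time; the (simulated) fetch stage
# decodes with the same key. Shellcode written directly into memory was
# never encoded, so decoding scrambles it.
# Each key byte is forced nonzero so XOR always changes every byte.
KEY = bytes((b % 255) + 1 for b in os.urandom(16))

def load(code: bytes) -> bytes:
    """Encode legitimate code with the machine's key at load time."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(code))

def fetch(encoded: bytes) -> bytes:
    """Decode instructions at fetch time (same XOR, same key)."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(encoded))

legit = b"\x90\x90\xc3"     # program installed through the loader
injected = b"\xcc\xcc\xcc"  # attacker shellcode written directly to memory

print(fetch(load(legit)) == legit)      # legitimate code runs as intended
print(fetch(injected) == injected)      # shellcode is scrambled before execution
```

The same logic holds whether the “key” is an XOR pad, a permuted opcode map, or a custom ISA: the attacker’s instructions are wrong unless they know the secret.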

I’m actually glad you liked that concept as I’ve gotten mixed reviews from others on it. Most are worried about “security through obscurity.” (shrugs) I do think it passes your previous criteria of “being cheapest thing to build.”

I also agree with you on the static and dynamic analysis. The high assurance development tool I mentioned here essentially does that. It combines rapid development, export to various static/dynamic analysis tools, automated test generation, automated covert channel analysis, continuous integration, and simplified deployment. The design is at an abstract level where various portions could be implemented immediately, but still a P.I.T.A. for developers. Unavoidable without plenty of extra work.

It didn’t have an ABAC phase, so that’s certainly an interesting addition. The hardware integrations I’ve prototyped were automatic generation of descriptors for a capability or segmented architecture, along with true control-flow integrity. So long as code is modular with statically discernible entry and exit points, this can be done easily. The ABAC might be redundant in such a design or might provide more opportunities. Your mentioning it also makes me think of various languages that integrate security labels. Might be able to target them to hardware similarly. I’ll have to think more on these things.
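The control-flow-integrity part can be sketched in a few lines. This is purely illustrative: the addresses are made up, and the set-membership check stands in for what would really be hardware-enforced descriptors.

```python
# Toy sketch of coarse control-flow integrity: indirect transfers are
# only permitted to entry points discovered by static analysis.
# The addresses below are invented for illustration.

VALID_ENTRIES = {0x1000, 0x1040, 0x20a0}  # statically discernible entry points

def checked_call(target: int) -> int:
    """Gate an indirect call/jump: allow only known entry points."""
    if target not in VALID_ENTRIES:
        raise RuntimeError("CFI violation: transfer to %s" % hex(target))
    return target

print(checked_call(0x1040) == 0x1040)  # legitimate entry point: allowed
try:
    checked_call(0x1337)               # attacker-chosen address: blocked
except RuntimeError as err:
    print("blocked:", err)
```

In hardware the check is free per-jump; in software it’s the same idea compilers implement by instrumenting indirect branches.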

Buck June 6, 2014 4:13 PM


I’ve yet to see any firm evidence suggesting that document was indeed from Snowden’s dump, and frankly, it doesn’t seem to me to fit the presentational style or target audience of most of his other ‘leaks’…

Clive Robinson June 6, 2014 6:54 PM

@ Nick P, Anura, Buck,

Re : TAO Catalog : Good leak or bad.

Plain and simple: good, and good that it was released in full.

The reasons for this revolve around human behaviour, which we still see exhibited now, well after the release.

If you go back far enough on this blog you will see I repeatedly made the statement that GCHQ spied on US citizens and gave the take to the NSA, CIA, etc.; they in turn spied on UK citizens and handed it over to GCHQ and the MIs. This was so that neither country broke any of its laws, and also so that senior politicians could truthfully say “we do not spy on our citizens” in front of commissions, inquiries, the respective houses, and the national press.

Bruce blogged about the BRUSA (now UKUSA) agreement that is “the special relationship” when the document was released for public viewing at the UK National Archives in Kew, South West London.

I repeated the point about the swap process, and if you go back to that page you will see I was basically told I was wrong, by people who were either not in the know or were running deception.

Due to the Ed Snowden revelations it can now be seen that what I was saying was correct, and the naysayers wrong.

There was, prior to the revelations, sufficient evidence to show that what I was saying was correct, and it was fairly widely known; this was simply disregarded by the naysayers…

And that’s the point: there was more than sufficient evidence around to show that the NSA and others were using various bugging techniques. Further, all the methods were well known to anyone who could think a little and read open literature such as Peter Wright’s book Spycatcher, a Digi-Key catalogue, and other information about radio communications, data comms, digital techniques including FPGAs, and EMC techniques.

Again, discussions about such things on this blog produced an abundance of naysayers, BadBIOS being a prime example of people claiming things were not possible even though they had been shown to be so.

If the TAO catalogue had not been released in full then the naysayers would have continued to deny what was painfully obvious to those who knew all too well that it was more than possible, because they had actually designed, sold, and deployed such surveillance equipment for Corps, LEAs, and GovOrgs…

There is an old saying in the UK, “You can lead a horse to water, but you can’t make it drink,” and it aptly applies to the naysayers (or “neigh sayers,” given the horse): you can give them all the information required to design and build such devices and they will continue to naysay. Even when you have what is about as close to “proof positive” as you are going to get, as with the TAO catalogue, some still will deny it, and probably would still do so if you had one of the devices and rammed it down their gullet…

Thus, as with BadBIOS, if the TAO catalogue had been redacted, the naysayers would be claiming it was not so in various ways and digging their heels in even further.

As for the harm in full release, it’s minimal at worst. We know that terrorists and many criminals are fully aware of such tactics and devices, and with malware some criminals are way, way ahead of what’s in the catalogue. And let’s be honest, a few people on this blog are ahead of that too, and have given reasonable descriptions in a fairly responsible way here.

To think that any government with even a moderate technological industry is not also aware of these sorts of devices and systems is delusional. Even third-world countries without any kind of technology industry are going to be aware of such systems, because private companies who design and build similar if not more capable systems will have dropped a sales brochure or ten on the desks of those potentially looking to buy.

Which means that the “real harm” is “public awareness leading to political fallout”, not the loss of capability against hostile persons / organisations / countries: they already knew, and, as we know from other sources, had quite some time ago taken preventative action such as not using “connected computers” or “electronic communications” that were vulnerable to the Five Eyes.

And if you think back to before the Ed Snowden revelations, we had openly discussed on this blog what was required technically to record every phone call in the US, and how best to deploy it.

As has been pointed out before, there is not a technology gap between the capabilities of industry and the likes of the NSA; in fact, the opposite. What there is, however, is a gap in what we find credible. That is, we downrate our own abilities for the sake of not wishing to be seen as either paranoid or a conspiracy nut. In effect many are sticking their fingers in their ears and chanting “nunu nunu nunu” as fast as possible so as not to hear, and thus not have to face, real possibilities…

With the unredacted TAO catalogue it’s a bit difficult to ignore what’s in front of your eyes, which has caused many to realise that they have to wake up and not just smell but actually consume the coffee in large measure.

Wael June 7, 2014 2:39 PM

@Nick P,
On Secure computer design:
I have a suggestion, if you want to continue on this path:
You have several options — these are listed for illustration, not exhaustion.
Create a high-level list for each threat and solution, as such:

  • Threat, with examples that belong to the “class of threats”
  • Defense (or mitigation) tactic / design / philosophy / …
  • Effectiveness (80%, 100%). If less than 100% we need a supporting mechanism

Populate this list for all the threats you care about. For example, you can start with low-level weakness root causes:

  • Side channel attacks
  • Subverted software (firmware, device drivers, user mode apps, protocols, crypto,…)
  • Hardware weaknesses…

Or you can start from the top then drill down, such as:

  • Identity theft — Build an attack tree
  • Key-logging — Build an attack tree

When you have these lists populated, you can fix the items one by one. You get the benefit of knowing what you have addressed and what you have not, and the effectiveness of your design. Also, by defining all the threats you intend to mitigate, you will have implicitly defined what “Security” means to you. You will have bounded the target use cases and will be in a better position to evaluate your solution. If you publish this list with your solution, then you have effectively produced a “protection profile”, and others can help you run formal evaluations on your solution. What the referenced IBM paper does is show some security capabilities without a clear map to the weaknesses it fixes. I can almost anticipate your reply, which will include some colors like “Orange” 🙂
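Such a list maps naturally onto a tiny data structure. A toy sketch follows; every name, threat, effectiveness figure, and mitigation in it is invented for illustration, not taken from any real protection profile:

```python
from dataclasses import dataclass, field

# Illustrative threat/defense list: each entry records the class of
# threats, the mitigation, and how effective it is. Anything below
# 100% effective needs a supporting mechanism, per the scheme above.

@dataclass
class Threat:
    name: str
    examples: list            # members of the "class of threats"
    defense: str              # mitigation tactic / design / philosophy
    effectiveness: float      # 0.0-1.0; < 1.0 needs a supporting mechanism
    supporting: list = field(default_factory=list)

    def covered(self) -> bool:
        return self.effectiveness >= 1.0 or bool(self.supporting)

profile = [
    Threat("Side channels", ["timing", "power analysis"],
           "constant-time primitives", 0.8, ["EM shielding"]),
    Threat("Subverted firmware", ["BIOS implants"],
           "measured boot", 0.9),
]

# Gaps in the "protection profile" are threats not fully covered.
gaps = [t.name for t in profile if not t.covered()]
print(gaps)  # → ['Subverted firmware']
```

Publishing the populated list alongside the design is what makes it evaluable: every claimed capability is tied to a named weakness.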

Nick P June 7, 2014 11:10 PM

@ Clive Robinson

“Which means that the “real harm” is “public awareness leading to political fallout”, not the loss of capability against hostile persons / organisations / countries”

“What there is, however, is a gap in what we find credible. That is, we downrate our own abilities for the sake of not wishing to be seen as either paranoid or a conspiracy nut.”

Interesting points.

@ Wael

It’s a good suggestion. It’s the brute-force method I’ve been trying to avoid. Yet it might be inevitable, as a full attack-vector enumeration seems necessary for high assurance in today’s environment. It would have to combine invariants (e.g. info flow control), root causes, attack trees, and more. All of it. Then it would have to be organized. I’d further analyze them to see which give the attacker control of the machine. Those should be the priority in countermeasures. I’ve been designing recently based on a critical subset of them that all others seem to flow from: lack of code vs. data separation, no control-flow protection, memory issues, concurrency issues, firmware issues, the power button being next to the knee in most designs, etc.

(Ok, one of those isn’t an issue anymore but was really retarded in its day.)

It might help that some academics have partly done this with their “taxonomies” of attacks. They’ve ID’d and organized them. It would still be quite a bit of effort; I’d rather have a team that’s funded for a year or so before I did it. Each thing I mentioned is its own sub-field. Might have to do it, though. I’ve already decided to do it bottom-up, which will be much easier.
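An attack-tree enumeration like that is easy to mechanize once written down. Here is a toy sketch; the node names, the structure, and the feasibility flags are all invented for illustration:

```python
# Minimal attack-tree sketch: OR nodes succeed if any child succeeds,
# AND nodes only if all children do. Leaves carry a feasibility flag
# reflecting whether a countermeasure closes that vector.

def leaf(name, feasible):
    return {"name": name, "op": None, "feasible": feasible, "children": []}

def node(name, op, children):
    return {"name": name, "op": op, "feasible": None, "children": children}

def achievable(n):
    """Evaluate whether the attack goal at this node is reachable."""
    if n["op"] is None:
        return n["feasible"]
    results = [achievable(c) for c in n["children"]]
    return all(results) if n["op"] == "AND" else any(results)

tree = node("Control of machine", "OR", [
    node("Code injection", "AND", [
        leaf("Memory-safety bug", True),
        leaf("Executable data region", True),  # closed by code/data separation
    ]),
    leaf("Subverted firmware", False),
])

print(achievable(tree))  # → True
```

Flipping a leaf to False (say, by enforcing code/data separation) shows immediately which countermeasures actually cut off a root goal, which is the prioritization step described above.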

“If you publish this list with your solution, then you have effectively produced a “protection profile”, and others can help you run formal evaluations on your solution.”

Excellent idea. Yes, this was standard practice in the form of “Security Targets,” albeit for security theater (EAL4 and below). However, recent Common Criteria is phasing them out in favor of pre-approved (by govt) Protection Profiles. I’d probably go the route of MULTOS and do what you suggest under the European ITSEC scheme. The countries that support it have some more trustworthy ones than those backing CC. It also uses the security-target approach.

“I can almost anticipate your reply which will include some colors like…”

You anticipated wrong. I even cut the quote to ensure it. 😛

Wael June 7, 2014 11:28 PM

@Nick P,

You anticipated wrong. I even cut the quote to ensure it. 😛

Brilliant! You’ve got me pinned! Better change my style…

Figureitout June 8, 2014 12:23 AM

Ok, one of those isn’t an issue anymore but was really retarded in its day.
Nick P
–I hope power buttons near your knee aren’t the issue you speak of. I cherish my old desktops (tarnished for various reasons). They’re almost exactly how a computer should be. Lots of ports (which can be shielded if need be), importantly SERIAL ports which I need, and AUDIO ports. Also easily accessible PCI card slots for pretty custom hardware, typically more ports, ethernet, or wifi (could be SDR). The only open question is the core of the device (CPU and all memory on board), and wanting a board that doesn’t have hidden antennas and gives the user complete physical shutdown of wireless comms (not injections, of course).

But no, I have a laptop (w/ a power button whose functionality can seemingly be affected by removing screws…) w/ components hidden away like in modern cars, forcing you to tear away the obfuscating covers at risk of permanent damage; forcing you to go to the “dealership,” where you sit in a waiting room while some random has access to all the microchips in your car and can re-flash them. I don’t even know what this large copper thing in my laptop is. I definitely spotted the wifi and the antenna, and of course the board is multi-sided, so I have to rip this mother[blanker] completely apart just to visibly observe the hardware.

And now these newer computers are taking away another port (microphone), so I’ve got USB, Ethernet, HDMI, and Smartcard (some are even taking away the CD-ROM, are you kidding me?!); sht sucks, I don’t care how sleek it looks, it’s fcking worthless if I can’t directly interface w/ a radio. Wish I was born in the past and could’ve stocked up on good computers and hardware.

DB June 8, 2014 3:27 PM

“…could’ve stocked up on good computers and hardware.”

Long live eBay… except you wanted to not have some randoms all over the hardware… hmm… well, in today’s world of interdiction and black-bagging, I’m not sure it really matters; if they want you, they’ve got you. But it would be nice to force them to use interdiction or black-bagging for EVERY compromise they do… instead of just secretly flipping a switch somewhere and having everyone compromised by default, remotely, worldwide, all at once, or something awful like that…

