Comments

koita nehaloti May 23, 2014 5:02 PM

It seems that an interesting alternative to wi-fi is coming. It works by flickering the same LEDs that provide illumination:

https://en.wikipedia.org/wiki/LIFI

The range may be about 30 meters, with claimed speeds of 100-1000 Gbit/s.

It is claimed to give a security advantage because light does not pass through walls. Maybe, but it seems to me that if a li-fi transmitter is visible from a window, then with the kind of portable optical telescope that amateur astronomers use, it might be possible to capture the raw encrypted data, or eavesdrop, from kilometers away.

There is an open hardware system that enables optical data transmission over distances of kilometers, but with the transmitter optically focused too:

https://en.wikipedia.org/wiki/RONJA

It uses red brake-light LEDs by default, but current consumer-level LEDs are much more powerful, up to 1000 lumens.

Unlike wi-fi WLAN, optical data transmission on visible light needs to work continuously, or it visibly flickers in an annoying way and endangers epileptics. That gives it all the more reason to use lots of dummy traffic to hide when actual data transmission is happening, which means lots of raw output from a random number generator. I imagine that presents some unique security pitfalls.
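
To make that concrete, here is a minimal sketch (my own illustration, not any existing Li-Fi or RONJA code) of a constant-rate transmitter: it emits one frame every tick no matter what, substituting CSPRNG output for idle slots, so the lamp never visibly flickers and an observer cannot tell when real data is flowing. The frame size, tick rate, and send_frame driver are all hypothetical.

    import os
    import queue
    import time

    FRAME_BYTES = 1024    # hypothetical fixed frame size
    TICK_SECONDS = 0.001  # hypothetical constant frame rate

    outbox = queue.Queue()  # the application queues real traffic here

    def transmit_forever(send_frame):
        """Drive the optical link at a constant rate, padding idle slots.

        send_frame is assumed to be whatever function actually modulates
        the LED; real and dummy frames should be encrypted so they are
        indistinguishable to an eavesdropper.
        """
        while True:
            try:
                frame = outbox.get_nowait()      # real data, if any is queued
            except queue.Empty:
                frame = os.urandom(FRAME_BYTES)  # dummy traffic from a CSPRNG
            send_frame(frame)
            time.sleep(TICK_SECONDS)

One pitfall this hints at: if the dummy bytes come from a weak or backdoored RNG, the padding itself becomes a liability, which is presumably one of those unique security pitfalls.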

Something like RONJA seems to have more potential for peer to peer mesh networks and alternative ISPs than Li-Fi.

Making narrow-wavelength LEDs is easier than making wideband LEDs that give softer, more natural light. Color filters can separate a handful of wavelength bands into different channels. This is why I think that with better encoding than RONJA's, many channels, and more powerful LEDs, it would be possible to routinely transmit 40 megabits per second over a 10 kilometer distance. It depends on the weather, smog and sun direction at the location, but if it works 80% of the time, that is enough for many uses.

For attackers, physically getting the raw bits from the side is roughly the same difficulty as with WLAN, WiMAX or 3G, but spoofing transmissions to an optically focused receiver is much more difficult.

Instead of LEDs, it might be better to use the kind of laser that DVD writers use. That needs a beam expander for eye safety. A wider beam, or many beams, also scintillates less than one narrow beam. Too much scintillation, and therefore too many darkened milliseconds, may prevent transparent handling of the Ethernet protocol and require some special protocol with higher ping.

A single set of focusing optics can project many separate LED transmitters from its focal plane into many beams that go to different nodes in the meshnet, or to different customers (if an ISP provides it as a last-kilometer system). The separate side beams may make physical extraction of the bits from the side much more difficult.

On a cloudy night, it is possible to bounce transmissions off clouds. On a clear night, it is possible to use the scattering of blue light from a focused light beam.

Josh M May 23, 2014 5:05 PM

Here’s a fascinating piece by Privacy International on the chips GCHQ surgically destroyed in the Guardian laptops.

https://www.privacyinternational.org/blog/what-does-gchq-know-about-our-devices-that-we-dont

(For the tiny bit it’s worth, my assumption is that these are ICs that the NSA has fabbed “custom” versions of, and they want to ensure that they’re destroyed properly. The keyboard/trackpad chips could be entirely mundane places to stash a bit of data, but the backlight DC/DC converter makes no sense to destroy, unless it’s something particularly sneaky.)

tjallen May 23, 2014 5:16 PM

SourceForge did a forced password change starting yesterday (May 22). All registered users received an email, and there is a SourceForge blog entry explaining that this was not a breach; rather, SourceForge made changes to its security system, including:

  • We have adopted a longer minimum password length standard.
  • There has been a change in our authentication layer, moving to a more modern Open Source platform.
  • Password hashing algorithm and key length have changed (see the sketch after the link below).

http://sourceforge.net/blog/forced-password-change/
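
Not SourceForge’s actual code, just a minimal sketch of the kind of upgrade described above: enforcing a longer minimum password length and storing passwords with a modern, salted, tunable-work-factor hash (bcrypt here, via the Python bcrypt package; the 12-character minimum and cost factor are assumptions).

    import bcrypt

    MIN_LENGTH = 12  # hypothetical new minimum-length policy

    def hash_password(password: str) -> bytes:
        """Hash a password under the new policy; gensalt() embeds a per-password salt."""
        if len(password) < MIN_LENGTH:
            raise ValueError("password shorter than the new minimum length")
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

    def verify_password(password: str, stored_hash: bytes) -> bool:
        """Check a candidate password against the stored hash."""
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

One reason a provider might force a reset with such a change: hashes made under the old algorithm cannot be converted directly, so users must log in or reset before a hash under the new scheme exists.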

Jonathan Wilson May 23, 2014 5:18 PM

The Snowden leaks indicated that the US is collecting complete phone calls from two countries. Per https://firstlook.org/theintercept/article/2014/05/19/data-pirates-caribbean-nsa-recording-every-cell-phone-call-bahamas/ the Bahamas was one country; the other was unnamed by most media outlets.

Now Wikileaks has revealed that the second country where all call audio is being collected is Afghanistan.
https://wikileaks.org/WikiLeaks-statement-on-the-mass.html

Buck May 23, 2014 5:36 PM

@Josh M

Fascinating indeed! Although, I think they’re a bit off the mark with their starting assumption here:

“Whatever the actual vendor and role of the chip, we need to know more about why GCHQ believes that these components can store user data and retain that data without power.”

Take another close look at those three components… To me, the keyboard/trackpad controllers & power inverter look much more likely to contain an always-on implant rather than any user data…

z May 23, 2014 6:07 PM

@ Buck

The most interesting thing is that they don’t look destroyed; they look removed. If they had the data stored on them, GCHQ might need them back to retrieve the data.

Benni May 23, 2014 6:15 PM

The GCHQ asked the Guardian to destroy specific chips in their laptops:
https://www.privacyinternational.org/blog/what-does-gchq-know-about-our-devices-that-we-dont

I wonder what these chips store?

The ex-chief justice of Germany’s highest court testifies before the German parliament:

http://goo.gl/Aho0Ju

From his answers, it seems clear they want some individual who is informed enough to file a lawsuit at Germany’s highest court against the government, to stop German cooperation with the NSA, and the surveillance that the German secret service itself conducts around the world and in Germany.

He says:

“If foreign agencies share data with the German government, then these data must have been acquired in a manner that is permitted by German law….

If German authorities use data which was illegally collected by foreign agencies, then the German authorities act illegally.

If German authorities tolerate violations of basic rights by foreign services, then the German authorities are accountable for these violations, even if foreign governments have different laws.

Chairman of the commission:

Do I understand this right? There is no legal justification for foreign services to collect data in Germany?

All three experts nod and say yes.

Question:
Must the German government act if the NSA erects some building in Germany?

Answer:

Yes. But Germany is a federal republic, so this is in the hands of the individual states.

MdB Ströbele:

Are you aware that you have declared one of the main activities of the German secret service BND a violation of the constitution?

Answer:

Methods which do not fulfill the standards of German and European privacy law are plainly illegal. This cannot be changed by treaties.

British agencies are bound by the EU human rights charter, and the appropriate court is the European Court of Human Rights; one can additionally sue them based on EU law, because they broke EU treaties.

To tolerate violations of the German constitution by foreign services, with the German government ignoring the violations and doing nothing, is illegal.

The first to interpret the constitution is the German parliament. You are responsible for this in the first instance.

The highest court only takes action when someone files a lawsuit.

But please do not flood the highest court of Germany with too many individual lawsuits.

Question:

Which obligations does the German government have to support the investigation commission?

Answer:

Only in very narrow cases can the German government refuse to support the commission of the parliament.

If you think the government is not informing you quickly, correctly, and completely enough, then please file a lawsuit against it at the highest German court.

I guess the friends at the NSA and BND must be really annoyed by these judges now….

Nick P May 23, 2014 6:30 PM

@ Buck

I’ll also add that two of them probably had DMA and direct access to user input. These traits give attackers many options, from keylogging to code injection. From there, they can also use both BIOS payloads and covert communication channels.

Additionally, I’ve always said the best attacks are in places nobody looks. How many people do you know who inspect and circuit-probe their trackpad controllers for security purposes? 😉

@ Jonathan Wilson

Re Bahamas phone spying

It would make plenty of sense to collect in many of those island countries, the Caymans most of all. The reason is that tons of tax dodgers and proxy corporations operate through there. If the middlemen’s communication wasn’t protected, spies could slowly piece together plenty of information, for uses ranging from catching tax dodgers to blackmailing politicians.

Nick P May 23, 2014 6:40 PM

@ Koita

I’d be concerned about interception if the light source, the beam itself, or the receiver were visible. The best value I see in this is reducing attack surface. Wireless transmission without light tends to bounce and spread all over the place, as if to aid the enemy’s antennas. Light is easier to absorb and more focused. So, less leakage in general.

The funny part is that we’re coming full circle: IR ports were once a prominent method for wireless communication. It’s gotten a lot faster now. For the best stuff, look up Free Space Optics on Wikipedia. The project you linked is essentially a very limited FSO.

I’ll also note that LEDs for communication are coming to defenders second: researchers already used them as a covert channel in the past. There’s a paper you can Google on that. Actually, two: one leaking processor state incidentally; one controlled directly by a malicious program for signaling purposes.

Anura May 23, 2014 7:29 PM

@NickP, Koita

The big question with LiFi is: will they get complacent? Great, it’s much harder to eavesdrop on… Encrypt it anyway. Even public wifi networks should be encrypted with ephemeral (EC)DH keys, even if they don’t provide authentication.
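
As a minimal sketch of what that could look like (my own illustration using the pyca/cryptography package, not any wifi standard’s actual handshake): both sides generate fresh X25519 keys per session and derive a symmetric key, so traffic is encrypted even though neither side is authenticated.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates an ephemeral key pair for this session only.
    client_private = X25519PrivateKey.generate()
    server_private = X25519PrivateKey.generate()

    # Only the public halves cross the (insecure) wireless link.
    client_public = client_private.public_key()
    server_public = server_private.public_key()

    # Both sides compute the same shared secret...
    client_shared = client_private.exchange(server_public)
    server_shared = server_private.exchange(client_public)
    assert client_shared == server_shared

    # ...and stretch it into a session key for a symmetric cipher.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"session",
    ).derive(client_shared)

Without authentication this is still open to an active man-in-the-middle, which is the point above: it raises the cost of passive eavesdropping, nothing more.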

Benni May 23, 2014 8:14 PM

If someone wants to put malware into BND computers, here are photos from inside the BND building:

Signals Intelligence control room:
http://www.sueddeutsche.de/politik/fotos-aus-pullach-still-und-dunkel-steht-die-bnd-zentrale-1.1971139-7

I guess that’s where the copies from the fibers arrive.

And here is the hardware that is used by the german spies for daily work:

http://electrospaces.blogspot.de/2014/05/pictures-from-inside-german.html

The telephone on the desk is an Alcatel-Lucent 4068 IP Phone or a similar model, which is a high-end, full-featured office telephone for Voice over IP networks. Alcatel was a major French telecommunications company which merged with the American telephone manufacturer Lucent Technologies in 2006.

It seems somewhat strange for an intelligence agency to use telephones made by a foreign company, as the German company Siemens, for example, has manufactured telephony equipment for almost a century.

Benni May 23, 2014 9:00 PM

And by the way, the hearing mentioned above, where the ex-chairman of Germany’s high court testified that most of what the BND and NSA do is plainly illegal, is largely based on this ruling:

http://www.spiegel.de/international/germany/the-world-from-berlin-germany-s-new-right-to-online-privacy-a-538378.html

In 2007, long before the NSA scandal, the highest court of Germany ruled that a new fundamental right exists:

A right for a citizen using a computer with an Internet connection to “a guarantee of confidentiality and integrity in information-technology systems.”

And now the German government is forced to protect this creation from 2007, and if it does not, any German citizen can sue it…..

“In a single blow in 2007, the justices have adapted German Basic Law to the demands of modern information technology. An existing right to freedom in telecommunications was too narrow; legal protections surrounding a private home included home computers but not laptops carried in public, electronic organizers or mobile phones. The right to ‘protection of the private sphere’ and ‘informational self-determination’ (covering information offered by citizens in a census) have protected computer users so far, but insufficiently. So the verdict in the online-surveillance case has created a whole new basic right … In shorthand (it might be called) the ‘IT right.'”

Tim May 23, 2014 10:26 PM

@Benni “So the verdict in the online-surveillance case has created a whole new basic right … In shorthand (it might be called) the ‘IT right.'”

1) Basic rights are not created, they are acknowledged to exist.

2) ‘informational self-determination’ is entailed by extending property rights to intellectual matters.

Do you have an intellect? Does it have properties? Do you own them? If so, then it follows that intellectual property is just one kind of property covered under general laws concerning property. Theft of that property would be covered under general property theft provisions, making mass surveillance of communications illegal as well as immoral.

Thoth May 23, 2014 10:44 PM

A quick beginner’s Truecrypt guide I made, in light of all these Snowden revelations, for anyone, especially journalists. Whether Truecrypt is the best tool is still debated, but from a usability and platform-compatibility point of view, I would still prefer Truecrypt over anything else when I need quick, portable encryption without much setup time: it is quick and good, plus widely reviewed open software.

Read: http://thothtech.blogspot.sg/2014/05/beginners-truecrypt-file-encryption.html

For those who are extremely tech savvy and extremely suspicious of Truecrypt/AES and whatever else, you have 2 choices: implement your own 100-round Serpent512/Xsalsa999 super cascade cipher, or take your own time to do your own setup with LUKS or whatever you want.

Please use the blog comments if you have anything to say; otherwise you may comment on this week’s Friday Squid Blogging post and I will respond here.

unamerican May 23, 2014 10:56 PM

In America, the USA FREEDOM Act just passed the House. According to analyst Marcy Wheeler, the bill fails to stop bulk collection, newly enables cellphone location collection at the telecoms, restarts the internet dragnet, and codifies ‘about’ collection, among other things. The civil liberties community and the internet companies withdrew their support just before it passed. It’s going to the Senate on a rushed schedule, apparently to preempt the PCLOB FAA 702 report. This legislation will be sold as ‘reform’ and used to protect the left flank of the Democratic party in the midterm elections and to defuse the issue. So basically, we have maybe 30 days to fight.

Now is a good time to organize for resistance to include mass public boycotts of American IT/tech products and services by Americans.

Let’s reach out internationally and resist the fascist US surveillance state.

Benni May 23, 2014 11:03 PM

@Tim:
1) Basic rights are not created, they are acknowledged to exist.

I think it is more complex. Juristically, if someone has a right to do or have something in a certain way, that right does not exist for a lawyer or a court unless it is explicitly written down.

In North Korea, there exists no basic right to freedom of expression, because they have not included it in their laws. To become existent, a law must first be written down.

And that is what the German court did in 2007. It ruled that the constitution of Germany must be interpreted as containing: “A right for a citizen using a computer with an Internet connection to ‘a guarantee of confidentiality and integrity in information-technology systems.'”

The German parliament has now grasped what happened and is trying to work on new laws for the German secret services. The good thing is that in Germany, there are enough people who have lawyers and want to sue almost everybody. If the new laws the parliament is going to make pass, there will certainly be many people who scrutinize them, and they will not hesitate to sue the government. The entry barrier for this is extremely low.

Every person who believes that he or she has observed a violation of his or her constitutional rights by some German authority can appeal directly to the highest court:

https://www.bundesverfassungsgericht.de/organisation/vb.html

You do not have to go through several instances. You can go to the highest court directly. And it does not even cost a single penny:

Appeals to the highest German court are completely free of cost.

Most Germans have legal insurance, which pays the lawyers….

In Germany, there is the hacker club CCC:

http://www.ccc.de/

These guys often lend their expertise to the parliament. For example, the highest court of Germany forbade automated voting computers on the basis of CCC expertise. The CCC now wants to support Snowden with money.

They already filed a lawsuit against the GCHQ:

http://www.heise.de/newsticker/meldung/NSA-Skandal-CCC-Sprecherin-stellt-Strafanzeige-gegen-die-Bundesregierung-2099375.html

and they have filed a criminal complaint against the German government with the prosecutor general:

http://ccc.de/en/updates/2014/complaint

And you can bet that they now know which court they will have to go to, after these statements from the highest German court.

Unlike the prosecutor general, the highest court can simply force the German parliament to change its laws, or stop what the government is doing, with a single ruling…..

Figureitout May 23, 2014 11:32 PM

Thoth
–Excellent, I’m doing the same and will likewise guide a newbie along on my computer build and some other things I enjoy. This is what I’m expecting of cryptographers, pure coders, and mathematicians; documenting everything you do for someone else, every little detail, is honestly not as easy as you think.

But beware: Blogger is a Google service, and my blog got hacked. Could’ve done some more things in hindsight.

Benni May 23, 2014 11:43 PM

I already have mentioned retroshare:

http://retroshare.sourceforge.net/

It is a friend-to-friend network for end-to-end encrypted voice, chat, forums, and mail; even sharing of files is possible. All that with SSL and PGP encryption.

Here is a description of the cryptography it is using: https://retroshareteam.wordpress.com/2012/12/28/cryptography-and-security-in-retroshare/

With Retroshare together with Truecrypt, the NSA and BND will need to put an implant on your computer if they want the contents of your hard drive and your communications.

Thoth May 24, 2014 12:26 AM

I am not leaving blog hacks to chance. I have considered using some form of mirroring service and am thinking about which to choose. Torrents would be a nice way, or some form of P2P/F2F protocol with encryption.

The reason I am doing tutorials is that I am seeing more bloggers and journalists falling prey to malicious authoritarian state actors who wish to take away control; recently a blogger was politically and economically assaulted by state-based actors.

It’s about time to put everyone on the same playing field and make it a fairer game; state-based actors usually have more investment, but the community has the whole world.

I used Retroshare a long time ago when it was still in its early stages, but I don’t use it anymore. I am looking at Maidsafe (http://maidsafe.net) due to their security claims.

What we need for distributing cryptographic protocols and instructions to the masses is a highly resilient design.

When will Cypherpunk 2.0 take place 🙂 ?

Nick P May 24, 2014 12:35 AM

@ Benni

Thanks for sharing the info on the German constitution and how Germany’s legal system is handling all this. Very interesting stuff. It would be nice if we had such an acknowledged right or even lawmakers that really listen to pro-liberty engineers. Neither of those things happen here much.

Nick P May 24, 2014 12:41 AM

@ Thoth

“When will Cypherpunk 2.0 take place 🙂 ?”

Honestly, I’d rather it not. They focused almost entirely on coding software and crypto. Most attacks happen at levels from the hardware up to the software implementation. Better architectures and directions are known. I’d rather the next movement focus on the architecture of the endpoints, to make everything else easier. There’s lots for them to build on. Point being, if the devices running the “secure” software can’t run software securely at the architectural level, then the security is merely an illusion, and the advanced resources of opponents will win out for sure.

Thoth May 24, 2014 1:18 AM

@Nick P

Well yes, that’s what I meant. Coders and system architects need to really look into their code, and perhaps reform their approach: switch from quickly implementing and pushing code out for sale to an implement-and-secure style.

Most of the ciphers are pretty well done, and most of us who have been here for a while should have some knowledge. As Bruce always says, the math is strong, but math has no agency; and that’s where we should have been heading a long time ago.

mr squib May 24, 2014 1:30 AM

“The operating frequency of the LT3957 can be set with an external resistor over a 100kHz to 1MHz range, and can be synchronized to an external clock using the SYNC pin. A minimum operating supply voltage of 3V, and a low shutdown quiescent current of less than 1μA, make the LT3957 ideally suited for battery-powered systems.”

Perhaps this chip had an embedded sync controller chip inserted next to it, before the black gloop was reapplied, and that is what they drilled out. Or perhaps an entirely different chip than the reference design was on that board.
They didn’t want anyone to scan it, so they turned it to drill dust.

Someone had a YouTube video (there are probably papers too) a while back on how similarly specced chips can be RF-intercepted using the right sync. From memory, MacBooks had this problem for quite some time. There are quite a number of devices that can be listened to using the right broadcast clock sync, via current induction or by inducing pulse modulation via the power delivery system. According to a recent paper, Intel processors and some others are also susceptible to pulse-modulation exploitation that takes advantage of their power control architecture.
If your device isn’t susceptible to a similar exploit, or there is too much signal noise, they can always monitor your online purchases, intercept your package, and ensure you are delivered a compromised device via a tailored IC or an integrated WIFI package in a USB/VGA/DVI cable or similar (or just bump your lock when you aren’t home). Encryption is pretty much useless when they have remote access to your monitor feed and keystrokes via an automated remote relay box that receives the wireless feed from your system and then relays it to the network for logging and analysis.

Benni May 24, 2014 4:00 AM

http://www.taz.de/Abgeordnete-fuer-neues-Gesetz/!139093/

The president of the BND says:

“The BND does not share the juristical opinions of the experts who testified before parliament”

The chairman of the German parliamentary NSA commission (note that he is in the same political party, the CDU, as Chancellor Merkel) is now being quoted as follows:

He wants an assessment of whether the current BND law must be modernized. He says that he wants a general revision of the BND. But this should be open-minded and could even “lead to more competences for the BND when it comes to technical foreign signals intelligence.”

The head of the CDU/CSU parliamentary group in the NSA commission says:

http://www.sueddeutsche.de/politik/nach-aussage-von-juristen-im-nsa-ausschuss-abgeordnete-fordern-neues-bnd-gesetz-1.1973560

“I am sceptical that we need new laws. The German secret service BND has worked since 1953 in complete agreement with the law. I cannot imagine that all German governments have violated the German constitution….”

Apparently, they have not learned it yet. Maybe they will learn it only the hard way. The highest court was extremely explicit in its rulings. The judges testified that what the BND and NSA do is essentially similar to the data retention practices that the court has forbidden.

Once the new BND law comes to light, it will certainly be checked. And if it turns out that these politicians have produced something to the BND’s liking, then it’s clear that suing time has begun. The experts who gave their testimony indicated that one can expect Germany’s highest court to rule as it did when it forbade the data retention law.

Clive Robinson May 24, 2014 5:30 AM

@Josh M,

With regard to the GCHQ destruction of the Guardian computer hardware, the back story is even more curious than you might think. I posted a link to the Guardian story and pictures of the hardware at the time, as well as pointing out that GCHQ had done something really stupid, and the reason was an idiotic “pissing contest” that the Secretary to the Cabinet Office, in association with Downing St, started with the editor of the Guardian, Alan Rusbridger (and people wonder why I think politicos are egocentric idiots).

Perhaps the story title should have been “Two country bumpkins come up from the sticks to spill national secrets”.

When the Ed Snowden story broke, the US and UK authorities had a major case of foaming at the mouth, and spittle went flying everywhere in the form of threats and spooks appearing all over the place. The Guardian offices in the US suddenly had so many “street/utility workers” outside that they became a major cause of traffic problems and a health and safety risk to anyone trying to get in or out of the building. As somebody who worked there remarked, it looked like the back lot of a movie production for a James Bond film. Back in the UK, the Guardian editor got a number of increasingly threatening calls from various civil servants, ending with the “twit in chief” making threats about sending people down and putting surveillance etc. on Guardian staff.

Now according to the Guardian these approaches were either ignored or politely rebuffed; the little we know from the other side paints a different picture of unpatriotic obstinacy, prevarication and rudeness… Either way it built up to a Mexican standoff which was altogether pointless and stupid. The UK Gov had tried various threats to get at the documents and the Guardian had stood firm, so the UK Gov finally demanded that the Guardian destroy them. Again the Guardian rebuffed the request, pointing out that the UK Gov knew there were several copies beyond the Guardian’s control in other countries (specifically the US, where there are laws that give better protection).

So where was the UK Gov to go? It was looking not just increasingly stupid but impotent as well. At this point it might do well to reflect on the political “Special Relationship” that supposedly exists between the US and UK; some sources say that Barack “the control freak” Obama had been rattling the chains of David “the caveman” Cameron via the hot line. Which is why Downing St got the Cabinet Office to try to make their 600lb gorilla behave like an attack dog. Which, as someone pointed out, had the same painfully embarrassing effect as putting a tutu on a pig and expecting it to dance Swan Lake: you don’t know whether to laugh or cry, but a call to the men in white coats is probably wise. As Einstein once commented about madness, the UK repeated the idiocy later by holding the “go between” at Heathrow airport, and unsurprisingly got the same result…

Anyway, having established that there was little or nothing the UK Gov could usefully do, they did what all petulant children do and started stamping their feet and throwing hissy fits. So they invented the idea of the hardware being a national security threat and demanded it be handed over to be destroyed, I guess hoping that the Guardian would comply and they might get a copy of the documents that way, showing yet again that Einstein was not as daft as he sometimes looked. The Guardian said no to handing the hardware over and said (after much more pissing up the wall by the Gov) that they would destroy the hardware themselves, as they had more than adequate staff for that function.

So after more of the UK Gov proving Einstein correct, a face-saving deal was struck… The UK Gov would send GCHQ experts to oversee the destruction. So a few days later “Pinky and Perky” left Cheltenham and arrived in the big city; now I don’t know if they had straw sticking out of their mouths or not, but their reported behaviour suggests it was possible.

The Guardian had sent out its IT gnomes to purchase screwdrivers, pliers, grinding tools and other items of mass destruction to do the job. So senior people befitting the gravity of the situation rolled up their sleeves, put on their face masks and, under the artistic direction of Pinky and Perky, got down to some serious grinding. This hot action was apparently recorded by the directors on their iPhones, and when the action reached a polished finish Pinky and Perky professed themselves satisfied and, after one or two further shots for the family album, went home with their bits and pieces and carrier bags of shopping…

The Guardian duly wrote a two-page article and, along with some large photos, showed what chips etc. had been ground off the boards.

Now as regular readers of this blog will know, one or two of us regularly go on about semi-mutable memory in things like IO devices, and have frequently pointed out that just about anything that could have a microcontroller added to it has (including Apple battery packs), and where there are microcontrollers there is mutable and semi-mutable memory in largish quantities, in terms of Flash ROM and RAM that is either battery backed up or kept powered for the likes of “soft switches” and “wake on LAN” etc. So much, in fact, that you could store several text copies of the likes of War and Peace in your IO controllers alone.

However this knowledge appears to be known only to the likes of older hardware hacks and Russian/Chinese crooks and cyber-spooks. This general lack of knowledge, and how it can be abused, is considered by many in the Intel Community to be “State Secrets”, even though you can look most of it up on the Internet.

However, this brings up the issue of real state secrets of dirty deeds already done… did Pinky and Perky actually reveal the fact that computers heading to news organisations, like journalists’ mobile phones, have already had “implants added”?

Personally I think it does; we certainly know that other ICT equipment in news organisations has had implants added. Which brings us to the question of whether Pinky and Perky were quite what they were trying to appear to be, or if they were properly briefed. Either way, only partially destroying the boards has let the cat out of the bag.

As I posted on this blog back at the time, what official secrets have Pinky and Perky’s actions revealed, all because the UK Gov entered a pointless ego-driven pissing contest it could not win?

What does surprise me, however, is how long it’s taken to be picked up by other people. I was fully expecting it to have been included in various degree/masters level courses on ICTsec by now.

Mr. Pragma May 24, 2014 6:17 AM

Clive Robinson

Yes, right. It was a show.

A funny question that usually gets ignored is how the spooks could know that the destroyed computers (akin to being symbols of “data storage”) really contained the “Snowden archives”.
After all the fuss the Guardian made about resisting the UK gov., they would certainly not decrypt the hard drives in those computers to demonstrate they were the real ones. Not decrypting the data, however, made the whole thing an even more ridiculous and purely symbolic farce.

I do not, btw, think those computers were bugged. What for, anyway? If the spooks are capable of just a quarter of the magic attributed to them (and by themselves?), they could have “listened in” anyway, for instance via screen radiation or whatnot.

Even if we were to believe that the Guardian destroyed the real drives/computers, shouldn’t we strongly assume that a newspaper would have backups of one of the biggest stories it ever broke? Now what? The spooks asking for the backups to be destroyed, too? And then the backup-backups? Haha.

It’s pretty clear that this whole thing was mainly a theater performance, a) to demonstrate to British citizens how mighty clever and tough their agencies are, and b) to at least show some symbol of good will.

But there is a possible c).

It’s known that the Guardian did piss off Assange big time by (as seen from Assange’s perspective) all but colluding with the government (not publishing major pieces and other games).

One has to ask how much of a “see, we are the good guys who bluntly uncover the governments” image was a decisive factor.

Assume some newspaper is known to do nothing but government propaganda. Pretty much nobody would read it, right? Also, as a government, in particular a “democratic” one, one can’t avoid sometimes being seen in an unfavorable light. But how to control that? Well, by managing and controlling the “uncovering” oneself. Sweet deal; everyone is happy. The gov. is happy because at least it can control and manage the “uncovering”. The newspaper is happy because being considered the one that “brutally examines and uncovers” drives up reader numbers and income. Citizens are happy because they have that cozy feeling of having a “fourth power”, the media, on their side.

So, possibly the Guardian wasn’t the victim in that computer-destruction theater piece after all. Maybe it was just another win/win media/gov. cooperation. Sure enough, the Guardian blew it up into a big-time story and very strongly (and not really that subtly) suggested that they are really, really hardcore gov. hunters, so much so that from time to time the gov. gets emotional and gets tough on its media “enemies” – or so it was meant to look to John and Jane Smith.

Just thinking.

Jacob May 24, 2014 6:59 AM

My take on the destruction of the chips in the Guardian’s computer is that this was just normal procedure for the GCHQ to render a computer devoid of any secrets. The guys just went “by the book”.

The two input controllers contain some EPROM, normally in the range of 4KB to 128KB (the latter is too expensive for a cheap input device), and depending on the embedded code in them, they may keep the last input entries, e.g. passwords. Standard sanitizing procedure would be to remove those possibilities by physical destruction – irrelevant in the Guardian’s case, but still “by the book”.

I have no idea why the two orderlies also attacked the power supply controller.

3rd World USA May 24, 2014 8:32 AM

@benni re “In North Korea, there does not exist any basic right for freedom of expression, because they have not included this in their laws. In order to become existent, a law must be written down first.”

The universal right to freedom of expression is UDHR Article 19, whether or not the North Korean government admits it. The way it works is that states that fail to respect basic rights are not fully sovereign. The international community assumes the responsibilities that the state is shirking. This authorizes a range of concerted interventions by UN member nations, from capacity-building to suasion to sanctions or peacekeeping under UNSC control.

You can see this happening as the US government blows off the peoples’ rights to privacy, freedom of association, freedom of expression, life, a fair trial, and freedom from torture. Treaty bodies, charter bodies, and special procedures are exerting increasing pressure on the US government as it forfeits more and more of its sovereignty. The Human Rights Committee, the Committee Against Torture, the Human Rights Council, the International Court of Justice, and the International Criminal Court have all gotten into the act, along with three or four of the UN special rapporteurs. Individual states like Italy and Spain have taken matters into their own hands under universal-jurisdiction law. The world knows the US government is not worth shit to the public, so they’re doing its job. That is their responsibility to protect us from USG repression and predation.

Alex May 24, 2014 8:34 AM

@Josh M

Maybe the inverter chip was hacked to allow them into the device when it’s powered down?

Having said that, there’s something about this story that doesn’t feel right to me. If the laptop was doctored, destroying the whole thing makes sense. Why leave behind clear evidence of what was doctored? And why is this coming out now, instead of immediately after the laptop’s destruction?

I think it’s best to be skeptical of everything, especially stories that appear to confirm what we already believe.

Jonathan Wilson May 24, 2014 10:14 AM

The story about the NSA recording all calls for the Bahamas is not new. What’s new is that the other country they are recording all calls for is Afghanistan. In any case, if you are someone the US doesn’t like and are in a country with a whole bunch of US, US-allied and US-friendly military forces in it, carrying a “portable tracking device” (as Richard M. Stallman has called them on numerous occasions) is a stupid idea.

CallMeLateForSupper May 24, 2014 11:02 AM

@Clive
Your treatise on the adventures of Pinky & Perky at the Guardian offices had me laughing out loud, choking on my java, and wiping away tears of mirth. The Trouble and Strife was moved to leave her gardening and come indoors to inquire, “What on earth are you reading?!” She suspected I was revisiting either “Anguished English” or one of my many Mark Twain books. I replied, “Twain? More like Monty Python on a theme by Mark Twain.”

I very much enjoy and appreciate your contributions here. Even the serious ones. 😉

Petréa Mitchell May 24, 2014 3:48 PM

Continuing Portland’s water saga, in another case of it’s-probably-okay-but-we’re-going-to-be-cautious, most of the area was under a boil-water alert from yesterday morning until today because E. coli was detected. The affected water was shunted to the same reservoir as the water from last month’s incident.

I swear, Bruce, we’re trying to get this all sorted out before you come here for your lecture. Bear Grylls was in town yesterday– now there’s someone who could teach us all a thing or two about being less picky about what we drink…

Alessandro May 24, 2014 3:57 PM

Dear Bruce,

I’m currently looking at infosec certifications, and in the process of looking at some official CISSP study materials I found the following question:


  1. Two cooperating processes that simultaneously compete for a shared resource, in such a way that they violate the system’s security policy, is commonly known as:
    A. Covert channel
    B. Denial of Service
    C. Overt channel
    D. Object reuse
    Correct answer is A. A covert channel or confinement problem is an information flow issue. It is a communication channel allowing two cooperating processes to transfer information in such a way that it violates the system’s security policy. There are two types of covert channels: storage and timing. A covert storage channel involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the same storage location by another process. Typically, a covert storage channel involves a finite resource, such as a memory location or sector on a disk, that is shared by two subjects at different security levels. This scenario is a description of a covert storage channel. A covert timing channel depends upon being able to influence the rate at which some other process is able to acquire resources, such as the CPU, memory, or I/O devices. Covert channels, as opposed to what should be the case (overt channels), could lead to denial of service; object reuse has to do with disclosure protection when objects in memory are reused by different processes.

Does this sound correct to you? It sort of baffles me; I don’t see the connection between “two competing processes” and a covert channel.

In general what do you think are the most valuable certifications in the field?

Nick P May 24, 2014 5:04 PM

@ Alessandro

Sounds correct to me. Covert channels are one of the most neglected parts of computer security. They’re quite esoteric. Here are some simple examples to help you understand each type.

Basic Model

Leaker and Receiver. The Leaker has access to secret information, yet no access to the network. It might be an app that signs things with your private key. The Receiver is a less privileged program with network access, but not allowed to read secret data in the filesystem. The policy of the main access and communication mechanisms is to totally block these two from talking. So, they need a method that allows them to communicate, is easy to hide, and bypasses protections.

Covert Storage Channel

There are functions and locations intended for storing data. These include file contents, the data part of a network packet, and an IPC message. The operating system (and admin) usually enforce rules on how these are used, optionally monitoring them. However, processes might have read or write access to something else which can move information.

Examples include newly allocated memory, the header fields of an IP packet, or even a grey area of temporary data storage that’s world-readable. In each case, the Leaker can put data in there, the Receiver can read it, and nobody expects the storage location to be used for unauthorized communication.
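
A toy sketch of the idea (my own illustration; the file name and timing are arbitrary): the Leaker and Receiver share nothing but a world-readable temp directory, and the mere presence or absence of an innocuous-looking file carries one bit per agreed time slot.

    import os
    import tempfile
    import time

    # Hypothetical unmonitored, world-readable location both processes can touch.
    FLAG_PATH = os.path.join(tempfile.gettempdir(), "innocuous.lock")
    SLOT_SECONDS = 0.5  # time per bit, agreed on in advance

    def leak(bits):
        """Leaker: set or clear the flag file once per slot."""
        for bit in bits:
            if bit:
                open(FLAG_PATH, "w").close()   # file exists -> 1
            elif os.path.exists(FLAG_PATH):
                os.remove(FLAG_PATH)           # file absent -> 0
            time.sleep(SLOT_SECONDS)

    def receive(bit_count):
        """Receiver: sample the flag's existence once per slot."""
        bits = []
        for _ in range(bit_count):
            bits.append(1 if os.path.exists(FLAG_PATH) else 0)
            time.sleep(SLOT_SECONDS)
        return bits

Real channels need clock synchronization and error correction, but even this crude version moves data through a location no access-control policy is watching.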

Covert Timing Channel

Quite simply, Leaker and Receiver are both looking at something that takes a certain amount of time. The Leaker can significantly change the timing to represent a 1, and do nothing to the timing to represent a 0. The Leaker just keeps alternating what it does to the resource to represent the right sequence of 1’s and 0’s. The Receiver just keeps measuring how long it takes to interact with that resource, getting the data bit by bit. There’s often a certain amount of noise in the system, leading to the use of error correction codes like those used for radios and such.

One example in a system is the cache. Anything the processor uses from memory gets loaded into the cache first. Then, all accesses to it are much faster. Cache is limited, so new pieces of memory might require knocking something out of the cache, then loading in something new. So, if you do an operation on something in cache, it’s fast (1). If not, it’s much slower (0). So, two programs might create a similar chunk of memory, operate on it, and measure timing. The Leaker just keeps alternating between using what’s in cache and causing cache misses. The Receiver keeps using its data structure and measuring the timing. Hence, without any official communication channel, they are able to signal each other. In practice, that channel sends data several times faster than dial-up Internet.

A simplistic example of the same thing can come from networking. Let’s say it’s two computers with network connections that are monitored. The Leaker has secret data, yet can’t talk to external networks. The Receiver can talk to external networks, but isn’t allowed to receive secret data. The monitoring looks at the whole packet so it will see the data leaking.

There’s still hope for the spy, as a timing channel is available. Both can send packets of legitimate data to each other, and both systems can measure the time between packets. To leak information, the Leaker process first internally turns it into a series of 1’s and 0’s. Then, it adds a delay to its network packets to represent a 0, or lets them be fast as usual to represent a 1. It might be a slowdown of merely a few hundred milliseconds, applied only every so many packets. In other words, this is “slow” in computer terms, such that the Receiver program notices the delay, while a human operator might not notice any difference and the admin might write it off as regular network jitter. The effect is that the Receiver measures the speed of incoming packets, decides which bit they represent based on the delay, and eventually has the data it needs to leak.
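
Here is a toy sketch of that packet-timing channel (my own illustration over UDP; the gaps and threshold are arbitrary, and a real channel would need synchronization and error correction as noted above): the Leaker’s packets carry innocuous contents, and only their spacing encodes the bits.

    import socket
    import time

    BASE_GAP = 0.1     # normal spacing between legitimate packets (seconds)
    EXTRA_DELAY = 0.3  # added delay that encodes a 0
    THRESHOLD = BASE_GAP + EXTRA_DELAY / 2  # receiver's decision boundary

    def leak(sock, peer_addr, bits):
        """Leaker: send innocuous packets, delaying those that encode a 0."""
        for bit in bits:
            time.sleep(BASE_GAP if bit else BASE_GAP + EXTRA_DELAY)
            sock.sendto(b"legitimate-looking payload", peer_addr)

    def receive(sock, bit_count):
        """Receiver: classify inter-arrival gaps against the threshold.

        Assumes the Receiver's bound UDP socket is listening before the
        Leaker starts sending.
        """
        bits = []
        last = time.monotonic()
        for _ in range(bit_count):
            sock.recvfrom(2048)
            now = time.monotonic()
            bits.append(1 if (now - last) < THRESHOLD else 0)
            last = now
        return bits

A monitor inspecting packet contents sees nothing unusual; only someone modeling inter-packet timing would notice the channel.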

And if outgoing connections are monitored, it might send the data out to the next system over yet a new covert channel. 🙂

Resource Exhaustion Channels

This is a newer designation. The situation is that the Leaker and Recipient share a certain resource, and each can tell whether any more of the resource is available at the moment. If the Leaker totally depletes the resource, that represents a 1. If the resource is available, that represents a 0. That simple. The resource is commonly memory, processor time, I/O bandwidth, file space, etc.
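
A crude sketch of an exhaustion channel (my own illustration; the shared directory is assumed to sit on a small, quota-limited filesystem both processes can write to, and the paths and sizes are hypothetical): the Leaker fills or frees the space, and the Receiver learns the bit merely by testing whether any space is left.

    import os

    SHARED_DIR = "/tmp/shared_quota"  # hypothetical quota-limited shared directory
    BALLOON = os.path.join(SHARED_DIR, "balloon.bin")
    QUOTA_BYTES = 10 * 1024 * 1024    # assume the filesystem holds about 10 MB

    def leak_bit(bit):
        """Leaker: exhaust the space to signal 1, free it to signal 0."""
        if bit:
            with open(BALLOON, "wb") as f:
                f.write(b"\0" * QUOTA_BYTES)
        elif os.path.exists(BALLOON):
            os.remove(BALLOON)

    def read_bit():
        """Receiver: probe for remaining space without reading any data."""
        probe = os.path.join(SHARED_DIR, "probe.bin")
        try:
            with open(probe, "wb") as f:
                f.write(b"\0" * 4096)
            os.remove(probe)
            return 0  # allocation succeeded: resource available
        except OSError:
            return 1  # allocation failed: Leaker has depleted the resource

Note the Receiver never touches the Leaker’s data; it only observes availability, which is exactly why such channels slip past data-flow controls.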

Conclusion

Most systems contain many timing channels and probably an assortment of storage/exhaustion channels. The definitive method of finding them is Kemmerer’s Shared Resource Matrix. There’s a steady stream of research on identifying, preventing, and/or limiting them, most of it focused on automated methods. Remember that there are internal and peripheral hardware components that interact with your software, too. They might be used as channels.

It’s a problem that won’t go away so long as there are shared resources in a system that processes can manipulate to signal one another. Most system designers have no idea how to design compatible, fast, low-cost systems without covert channels. Hence, after all other security holes are closed, covert channels are likely still available to tease secrets out of the system. Your most practical option is to mostly worry about them at the application level during data storage or transmission. Those are most likely to be hit, especially if data leaves your system.

re certification

Certification is about marketing: you’re telling an employer why they should hire you. CISSP is the most valuable because it’s the most in demand. Even anti-CISSP security pros often tell newcomers to get it, just because it’s a checklist item that gets you in many doors. Certifications in other in-demand technology relevant to the job, such as Cisco, will also help. As far as actual learning goes, that is best achieved through books, online examples, and hands-on practice. And what you produce or demonstrate, with references, is the best evidence of what you can do.

CIA Moron May 24, 2014 8:02 PM

@Petréa Mitchell

I thought Bear Grylls was an Alaskan Fast Food chain specializing in grilling bear meat. At least that’s what the CIA entrance exam marked as correct …

@Alessandro and Nick P

Covert Channels are overlooked. They’re also an essential part of OpSec – you know:

“Oh, my darling
Knock three times
On the ceiling if you want me
Mmm, hmm, twice on the pipe
If the answer is no”

Clive Robinson May 24, 2014 8:43 PM

@Alessandro,

One important thing to remember about “channels” is that the leaker and receiver need not be on the same machine, or even on directly connected machines.

One of the reasons these channels exist is the desire of designers to make their systems “efficient”, which makes them fragile to certain attack methods (see “queueing theory” if you want a mathematical understanding of the issues).

The systems can also be viewed in a similar way to “filters” in communications networks, and thus you need to think of the data not just as the information itself but also as a carrier on which other information can be superimposed by modulation in various ways.

A filter in the classic sense allows or does not allow a signal of some type to pass through it. Usually filters act on a single vector such as amplitude, frequency or time, and come in two basic types: those that pass values below a threshold (low pass) and those that pass values above a threshold (high pass). These can be combined to produce a filter that has two thresholds and passes only values between them (band pass) or outside of them (band reject).

In general, filters only work on the carrier signal, not on any other signal that might be modulated onto the carrier; thus, by the process of modulation, signals that would otherwise have been “filtered out” will pass through.

An example of this is “clock jitter”, which is a form of phase modulation. The clock signal will pass through a bandpass filter as long as it’s “in the pass band”; thus, if you delay and advance the clock edges by a small amount, the small frequency changes will pass through as well. If you can control these changes, then you can superimpose data you wish to leak on the clock signal, and the rest of the system will be transparent to these changes.
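
A toy numerical sketch of that jitter modulation (my own illustration; the nominal rate and jitter magnitude are arbitrary): bits are superimposed by advancing or delaying each clock edge a few microseconds, far too small a shift to fall outside a band-pass filter’s pass band, yet trivially recovered by measuring edge-to-edge intervals.

    NOMINAL_PERIOD_US = 1000.0  # hypothetical 1 kHz clock
    JITTER_US = 5.0             # tiny per-edge shift that carries the data

    def modulate(bits):
        """Return edge timestamps with data phase-modulated onto the clock."""
        t, edges = 0.0, []
        for bit in bits:
            t += NOMINAL_PERIOD_US + (JITTER_US if bit else -JITTER_US)
            edges.append(t)
        return edges

    def demodulate(edges):
        """Recover bits by comparing each measured period to the nominal one."""
        bits, previous = [], 0.0
        for t in edges:
            bits.append(1 if (t - previous) > NOMINAL_PERIOD_US else 0)
            previous = t
        return bits

    assert demodulate(modulate([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]

The next paragraph’s point follows directly: a guard that passes “the clock” also passes the jitter riding on it.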

And that is an important point to remember, as “guard systems” like firewalls etc. cannot, by their usual design, stop such signals, and so appear to be completely transparent to many covert channels.

There are special design rules that can be used to stop systems being transparent, but many are still classified (even though you can work them out fairly easily). The main disadvantage of such rules is that they tend to make systems inefficient, unless the designer is very good at their job. It’s why I tell nearly all engineers that it’s a game of “Efficiency -v- Security”, much like the more often quoted (and less true) “Usability -v- Security”.

Wesley Parish May 24, 2014 9:30 PM

On Slashdot, DARPA Unveils Hack-Proof Drone.

” “The software is designed to make sure a hacker cannot take over control of a UAS. The software is mathematically proven to be invulnerable to large classes of attack,” Fisher said.”

I dunno. Gödel’s Incompleteness Theorems come to mind …

“The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an “effective procedure” (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.”

They actually mention the full attack surface – networks, PEBKAC, etc. – but fail to see the significance. So for the moment, DARPA’s come up with a drone with no known direct attack surface.

Big deal.

CatMat May 24, 2014 10:33 PM

@Wesley Parish,

One way to look at this is that they are saying that if someone does manage to take control of the drone there is no way to reassert control.

No? There is a way?

Then it is not hack-proof.

Somebody May 24, 2014 10:44 PM

Gödel’s Incompleteness theorem says you can’t prove everything. It does not say you can’t prove anything.

koita nehaloti May 24, 2014 10:53 PM

Software is global, and much of it depends on donations, but giving donations to organizations in other countries has the problem that there are no tax deductions. If (to take two random examples) a German wants to give to OpenBSD, as far as I know there can’t be any tax deduction. It would require some kind of donation treaty between Canada and Germany, or Canada and the European Union.

Every country (and/or the EU) would need to consider every donation recipient separately and make a list of foreign organizations. It might be that the Tor Project or the Bitcoin Foundation would be too controversial, but there is no reason not to include the Linux Foundation or OpenBSD.

Personally, I have very vague knowledge of donations and tax laws, and have not ever donated, but I might. How would a donation treaty, or donation treaties, work?

Benni May 24, 2014 11:31 PM

Now the reports of the three law professors who testified that

a) the surveillance by Germany’s secret service BND,
b) its sharing of data with the NSA, and
c) the NSA’s activities in Germany

all violate the German constitution are online in full text.

I have not had time to read them yet:

http://www.bundestag.de/bundestag/ausschuesse18/ua/1untersuchungsausschuss/-/280848

If somebody wants to go to Germany’s highest court against the NSA, this should be excellent cannon fodder. Remember: filing a lawsuit at the German constitutional court is free of cost. And in Germany we have the so-called “Amtsermittlungsgrundsatz”. This means that in a trial, the German authorities have the obligation to investigate all evidence. There are no public prosecutors who only fight against your case; instead, the government has to give all the evidence it has to the court, and then a professional judge reviews it. So you do not even need to be rich and have enough money for a high-profile lawyer. Any lawyer will work, though of course it is good if he is better.

And by the way, there was recently an interview in which the chairman of the parliamentary commission said they wanted to question Snowden several times.

If they are so eager to question Snowden that they want him several times, then it is indeed better for him to follow his lawyer’s advice: say little until they give you a flight ticket to Germany. Perhaps Snowden can make the members of the control commission a bit hungry by saying that he would offer much information if he could testify in Germany.

Nick P May 24, 2014 11:51 PM

@ Wesley

The DARPA program behind that included certified compilers, static analysis, separation kernels, etc. I posted some of their work here. Each greatly strengthens an aspect of the system. “Unhackable” is a stretch by far, but the systems do employ much higher assurance methods than typical COTS offerings.

Buck May 25, 2014 12:44 AM

@Somebody

nice thought… but in Humans’ current understanding of the universe, the unfortunate corollary to that great theorem of Gödel is (in your own terminology): “To prove anything, one must first disprove everything else” Hmmm…..?? Wait, what was our original axiom!? I seem to be trapped in a loop again :-\

If the drone is really provably un-hackable, doesn’t that mean all science has been solved..?
Hurray! Let us all rejoice in our successes! 😀
Hang tight though – apparently, this is as good as it gets…

No need to look behind any curtains folks – this is all of it! 😉

So, of course; once all known-known classes of vulnerabilities have been accounted for, it’s case closed! No sort of TEMPEST-type technique or spooky quantum action (that isn’t already understood) could possibly ever come to light…

Why thank you O’ glorious PR reps of DARPA! How will we ever repay you???

z May 25, 2014 1:50 AM

Well, I just ran Firefox from a terminal and noticed an interesting error. Anyone know why googleads.g.doubleclick.net wants to access my keyboard API? I’ve seen a few other posts about it popping up around the internet within the last few weeks, and nobody seems to have an answer.

Buck May 25, 2014 2:19 AM

@z

You may be quite interested in seeing some of the relevant reports on the NSA’s QUANTUM program… (there’s https://www.schneier.com/blog/archives/2013/10/how_the_nsa_att.html for a start)

As Bruce rightly pointed out a few days ago ( https://www.schneier.com/blog/archives/2014/05/the_nsa_is_not_.html ):

QUANTUM is AirPwn with a seriously privileged position on the backbone

AirPwn being, of course, a freely available tool for testing your own networks while building your own real-world knowledge & skills: https://airpwn.sourceforge.net/Airpwn.html (no HTTPS available for this link… you’re best off searching for it yourself). But more importantly, you should assume that SSL provides zero true protection! Especially if you happen to be a sysadmin…

See also: MITM, squid, tcpdump, etc.

Benni May 25, 2014 2:27 AM

I have now read the reports of the three experts.

The highest court ruled some time ago that the G10 law did not violate the constitution, because the BND told the judges that the agency was merely able to collect satellite-based, wireless communications…

Nevertheless, the judges ruled that it can tap wired communications only for the purpose of preventing an armed attack.

Soon the G10 law was extended a bit, with help from our friends at the NSA. The remark on “wireless communication” was removed.

And now the judges feel somewhat betrayed. The ex-chief of the highest court says in his report:

“Before any intrusion into basic rights, the government must analyze the chance that there is really a danger. A secret modification of a computer system is only allowed if there are concrete signs of a danger to the federal republic. Merely suspecting that something dangerous could happen does not suffice….

The highest court previously ruled that the G10 law did not violate the constitution. This was because the surveillance was not without reason and had strong practical and technical limits. In this connection it was mentioned that only satellite-based communication is allowed to be tapped.”

Reading this, one senses that the judge is surprised the German government lied to him about the BND’s capabilities and then used his ruling to modify its laws and push forward the creation of the surveillance state.

Benni May 25, 2014 3:58 AM

According to the German government, its secret service can crack the encryption used to encrypt email!!!

Here a lawyer has filed a lawsuit against the BND copying all german emails:

http://www.spiegel.de/spiegel/vorab/anwalt-klagt-gegen-durchleuchtung-von-e-mails-durch-den-bnd-a-960203.html

The lawyer believes that copying all data traffic in Germany, searching all emails for words like “atom”, and finally forwarding emails containing these words to agents who read them personally is against the constitution.

This is funny for everyone who does atomic physics in Germany. At least the BND learns the newest research about atoms that way, and the NSA too…

In 2010, BND spies read 37 million emails from within Germany that popped up in their search algorithms.

But how does the BND do that?
Most email providers use SSL.
How can the BND read something that was encrypted and sent to Google Mail?

Here is a very interesting, though old, article on that:

http://www.spiegel.de/netzwelt/netzpolitik/regierung-haelt-details-der-e-mail-ueberwachung-geheim-a-834897.html

Members of parliament asked the government what the BND is doing here.

The government did not answer much. But what they did answer contains something funny:

http://www.andrej-hunko.de/start/download/doc_download/225-strategische-fernmeldeaufklaerung-durch-geheimdienste-des-bundes

Question: Is the technique used by BND capable of decrypting encryption like pgp or ssh at least partially and to analyze it?

Answer:

Yes, the technique employed by BND is able to decrypt encryption, depending on its strength.

In 1968, the German BND predicted the Soviet invasion of Czechoslovakia when the NSA was blind, because the BND could read Russian cryptography when the NSA could not…

http://en.wikipedia.org/wiki/Bundesnachrichtendienst#1960s

Mike the goat (horn equipped) May 25, 2014 5:35 AM

Benni: I assume they mean they can thwart TLS-secured sessions between client and server, although it wouldn’t surprise me if S/MIME could be circumvented in some way or another. The standard was a joke, hence its lack of popularity despite default integration with pretty much every MUA around (PGP requires a plugin in both Outlook and Thunderbird, for example; for the former especially, one has to assume that the MUA itself is potentially backdoored with a bit of help from the NSA’s buddies at Redmond, but assume it isn’t for the sake of this discourse).

S/MIME – ignoring potential vulnerabilities in implementation or even the RFC itself – relies upon the ludicrous and already majorly broken centralized CA model. As we have seen with the hack of DigiNotar, and with malware appearing with digitally signed drivers from supposedly reputable software companies (e.g. Realtek, where key disclosure by an employee was the most likely explanation for how that key leaked out), there are issues. But it is even more fundamental than that: you are trusting a third party to effectively guarantee the identity of other users, and unfortunately the certification authorities have been pretty lax with guarding such a huge responsibility. Case in point: GeoTrust’s former GeoRoot product, where someone with enough dollars was given an actual functional root cert – ideal for MITMing, and funnily enough this has actually been used for that purpose by organizations running HTTPS proxies, ostensibly to protect their organization from threats and to give their IDS and antivirus exposure to otherwise encrypted streams.

So the fact that the BND supposedly made such a seemingly bold claim doesn’t surprise me all that much.

That said – those who are using a known secure OS and copy of PGP/GPG (compiled from source) and encrypting their plaintexts to blobs before dumping them into their mail user agent are likely to be safe. PGP has been a thorn in the side of intelligence agencies since Zimmermann conceived it. Of course – and I know Nick or Clive will mention this especially – having a “secure PC” to run your encryption software on is easier said than done. We can probably safely assume that post-UEFI generation PCs are “born broken” in that regard. I believe, given what we now know about the reach of the NSA and FVEY’s intelligence programs, that they would surely have seen to it that PC hardware was effectively broken to facilitate government compromise. Given the increasing capabilities of “helper” ROMs and management controllers (iLO, IPMI, etc.), there is a veritable smorgasbord of ways that your PC could betray you – and potentially do so remotely. We know that even ten years ago they were engineering monitors to deliberately be ‘noisy’ to facilitate TEMPEST-style (man in a van) surveillance. LVDS – ubiquitous in laptop display controller technology – is an EMSEC nightmare. These engineering decisions were likely not accidents.

Man, I have digressed. I apologize.

Mike the goat (horn equipped) May 25, 2014 5:46 AM

Figureitout: all the tutorials in the world are not going to help until we can start building and distributing clean – and, importantly, verifiably clean – hardware. This is, of course, easier said than done, and why most of us are stuck using ancient hardware and effectively just crossing our fingers.

Of course the “clean machine” I speak of won’t exactly be a speed machine. I am thinking along the lines of a Z80 clone – easy enough to organize fab for, easy enough to verify, and not too many places for anything malicious to hide.

Clive Robinson May 25, 2014 5:58 AM

@Mike the Goat,

Is that a microwave horn you have attached 😉

Don’t apologise for knocking the point home; it appears the industry either has the memory of a goldfish, or for some reason wants people to forget the hard-won lessons of times past.

And personally I feel it’s the latter, for a whole variety of reasons, including absolving themselves of blame when they fail, accidentally or otherwise.

Thus “banging on” about it is beneficial, as it slowly opens people’s eyes and hopefully stops them sleepwalking into trouble.

Benni May 25, 2014 6:01 AM

@Mike, Well and there is another candidate:

The OpenSSL library’s developers are mainly Germans. The BND is a foreign intelligence service; for this reason, the company Crypto, which sold broken, BND-modified crypto hardware, had to be set up in Switzerland. And perhaps for the same reason it was convenient that one of the few OpenSSL developers is an Englishman.

Why the hell, if not for spying, should one implement an assembler function in OpenSSL that can jump to any code line, thereby breaking the usual C calling conventions?

The OpenSSL developers state they implemented this for Windows. But actually, it was compiled unconditionally into every i386 build…

Given that the BND has tried subverting crypto hardware before, why not crypto software…

http://freshbsd.org/commit/openbsd/f868fc6f39a2c45a6c2bab70addc92525d467904

So it turns out that libcrypto on i386 platforms, unconditionally compiles this
little gem called OPENSSL_indirect_call(), supposedly to be "handy under
Win32".

In my view, this is a free-win ROP entry point. Why try and return to libc
when you can return to libcrypto with an easy to use interface?

Better not give that much attack surface, and remove this undocumented
entry point.

ok beck@ tedu@

+0 -39  lib/libssl/src/crypto/x86cpuid.pl (1 file changed)

diff --git a/lib/libssl/src/crypto/x86cpuid.pl b/lib/libssl/src/crypto/x86cpuid.pl
index c7a57a3..169036d 100644
--- a/lib/libssl/src/crypto/x86cpuid.pl
+++ b/lib/libssl/src/crypto/x86cpuid.pl
@@ -257,45 +257,6 @@ for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
 	&ret	();
 &function_end_B("OPENSSL_atomic_add");
 
-# This function can become handy under Win32 in situations when
-# we don't know which calling convention, __stdcall or __cdecl(*),
-# indirect callee is using. In C it can be deployed as
-#
-#ifdef OPENSSL_CPUID_OBJ
-#	type OPENSSL_indirect_call(void *f,...);
-#	...
-#	OPENSSL_indirect_call(func,[up to $max arguments]);
-#endif
-#
-# (*)	it's designed to work even for __fastcall if number of
-#	arguments is 1 or 2!
-&function_begin_B("OPENSSL_indirect_call");
-	{
-	my ($max,$i)=(7,);	# $max has to be chosen as 4*n-1
-				# in order to preserve eventual
-				# stack alignment
-	&push	("ebp");
-	&mov	("ebp","esp");
-	&sub	("esp",$max*4);
-	&mov	("ecx",&DWP(12,"ebp"));
-	&mov	(&DWP(0,"esp"),"ecx");
-	&mov	("edx",&DWP(16,"ebp"));
-	&mov	(&DWP(4,"esp"),"edx");
-	for($i=2;$i<$max;$i++)
-		{
-		# Some copies will be redundant/bogus...
-		&mov	("eax",&DWP(12+$i*4,"ebp"));
-		&mov	(&DWP(0+$i*4,"esp"),"eax");
-		}
-	&call_ptr	(&DWP(8,"ebp"));	# make the call...
-	&mov	("esp","ebp");	# ... and just restore the stack pointer
-				# without paying attention to what we called,
-				# (__cdecl *func) or (__stdcall *one).
-	&pop	("ebp");
-	&ret	();
-	}
-&function_end_B("OPENSSL_indirect_call");
-
 
 &function_begin_B("OPENSSL_ia32_rdrand");
 	&mov	("ecx",8);
 	&set_label("loop");

Clive Robinson May 25, 2014 6:14 AM

@CallMeLateForSupper,

I’m glad it amused, and more importantly led your wife to suspect you were having a little more illicit pleasure than she thinks you should 🙂

I’m often reminded of two sage pieces of advice: the first from an unknown doctor who considered laughter a good cure for most maladies and safer than jogging; the second from the comedian Ken Dodd, “Laughter is a noise that comes from a hole in your face – anywhere else and you’d best see a doctor!”.

Mike the goat (horn equipped) May 25, 2014 6:14 AM

Benni: I have repeatedly called out the OpenSSL devs, both here and in other public forums, on the myriad issues I have with both the foundation’s lack of transparency and the code quality itself. I have come to the conclusion – and made the recommendation to a business that contracted me to review the source – that it isn’t to be trusted, and importantly need not be relied upon, especially when there are high-quality drop-in replacement SSL libraries, some of them up to ten times smaller whilst retaining most essential functions. When you are integrating into embedded systems that makes a difference, but more importantly it is far easier to audit something tiny (for example wolfSSL – CyaSSL’s embedded offering – is TWENTY times smaller yet still provides TLS 1.2 with a selection of the most popular ciphers).

Mr. Pragma May 25, 2014 6:38 AM

Mike, Nick P, Clive

Well, yes and no.

Yes, old processors are slow(er than today’s). But (a) the software was less bloated, too, and (b) performance can also be gained by multi-coring those “simple” cores. So that approach of yours doesn’t look that bad, considering that an OS, some important libraries, and some software (like servers) can be kept relatively small.

The major issue I see with that approach is all the supporting stuff around it (IO, busses, …), which usually consists of micro-{processors|controllers} too.
And there one cannot simply fall back to old chips: Ethernet, disks, etc. must be more or less up to today’s standards to be useful.
Sure, it can be done, but it’s a major project.

Re “bnd”:
Considering Germany’s position (basically a us-colony), if the bnd had any significant crypto-breaking capability, the nsa would have it too, the next day. Moreover, Germany is nowhere near leading in ICT, so such an assumption seems barely credible.

I also think that they can crack only (very) low-level encryption, if that. Quite probably it’s just the usual exploitation of their victims’ badly implemented or configured security.

General:

Security is about delaying. It’s not a binary secure/insecure thing; good security basically equates to delaying unwanted access long enough for it to be effectively denied.
So we should not too easily dismiss “x just delays …”.

And any reasonable OpSec begins with knowing what to protect, what to protect it from, and at what cost.
At Krebs on Security I just stumbled over a story about eBay having disabled copy/paste for passwords, and about that leading to shorter passwords being used. So the cost question is very pragmatic, and more important than is often recognized.

Bluntly speaking, transmission security is profoundly fucked anyway. Too many routers and devices, and the endpoints are almost always outside one’s control.
Rather than investing heavily in transmission security oneself (as opposed to designing and implementing sane standards), I feel that a reasonable average level is sufficient and adequate (so as to be one of the stronger elements, and not one of the weaker elements, in the chain).
I’d love to say that one should put more effort into securing the payload, but alas, that’s shockingly often not reasonable either, considering the endpoints as well as, at least usually, the PKI problem. What good does it do to encrypt the payload properly and securely if the public key can be played with by any number of parties, and if, for instance, the password database is easily crackable because someone felt (and many do!) that MD5 was damn good enough?

I see basically two practical approaches.

One is to fall back to non-ICT mode for some things. Example: have a dedicated account for electronic payment, and an old-style walk-to-the-bank-and-fill-in-a-form account for your paychecks and savings.

The other one is more political. Some guys who know what they are talking about, and some credible leaders like Bruce, should create a “standard” (in quotes because it wouldn’t be official) of reasonable crypto (incl. specs) and some kind of professionals’ guild, which would then hand out a nice trust logo, plus an online list of companies who have signed up to support it and what levels they support.
Such a logo could, for example, mandate that any member fully specify the level of security they offer/support. For instance “password store SHA256 or better”, or “TLS version X, using Y SSL library and supporting A, B, C standards”, “DNSSEC” and, importantly, “certificate provider Z” – both in the trust database and as a DNS TXT record using a standard string like “Certificate Provider: ” + PKI string, so that scripts can process it.

That’s, btw, a point I fail to understand anyway. It would be so simple to put one’s PKI/cert provider into the DNS, and that would be so useful, if only to raise a warning (a small sketch of the idea follows). But alas, once more the standards bodies are mucking around and losing years for little …
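
As a small sketch of that idea (the TXT-record convention here is hypothetical; the code uses the dnspython library plus Python’s ssl module): publish the expected certificate provider in DNS and warn when the served certificate’s issuer doesn’t match.

import ssl, socket
import dns.resolver                 # pip install dnspython

PREFIX = "Certificate Provider: "   # hypothetical standard string

def expected_provider(domain):
    # Look for the advertised provider in the domain's TXT records
    for rr in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith(PREFIX):
            return txt[len(PREFIX):]
    return None

def served_issuer(domain, port=443):
    # Ask the live server for its certificate and read the issuer org
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((domain, port)),
                         server_hostname=domain) as s:
        issuer = dict(pair[0] for pair in s.getpeercert()["issuer"])
        return issuer.get("organizationName")

domain = "example.com"              # placeholder
want, got = expected_provider(domain), served_issuer(domain)
if want and want != got:
    print(f"WARNING: DNS advertises {want!r} but server presented {got!r}")

Of course DNS itself can be played with, which is why the proposal above pairs this with DNSSEC.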

Clive Robinson May 25, 2014 7:10 AM

@Jacob,

The guys just went “by the book”.

Which is why they in effect leaked “state secrets”, and the gorilla in the Cabinet Office is to blame, as Pinky and Perky were probably “just following orders”.

This is because the secure destruction of computer equipment for national security reasons is usually done in secret or in a way that does not leak information about what the security forces know to be risk vectors.

The way Pinky and Perky went about it revealed a number of interesting things directly. If, however, they had got the Guardian staff to grind off all the chips, then those interesting things would not have been apparent to anyone who chose to look at the Guardian photos.

Hence information that was either uncommon knowledge or, more importantly, “not known” in the open community is now clearly visible to all. And this will probably cause enquiring minds to piece together information that would be classified to the highest levels, based on what can be seen from the Ed Snowden revelations that followed the Pinky and Perky show at the Guardian offices[1].

And to my mind this opens all sorts of questions which I hope the open community pick up on and investigate, and as I said before I’m quite surprised at the length of time it’s taken so far to get started. Hopefully at some point soon Bruce will use it in one of his articles and journalists and others will then pick up on it and get the ball rolling.

And perhaps Ross J. Anderson and others at his level will put it into the courses and books they use to train future security professionals.

[1] We have already seen this in action. If you think back to “BadBIOS”, the majority thought was “it’s all in his head”. However, some of us pointed out it was more than possible, and even showed why, but still the majority view held[2]. Then the Ed Snowden revelations suddenly changed a few people’s point of view, and enquiring minds were inspired and had made their own versions in short order. Some actually wrote papers and thus improved their academic standing quite easily. Now the majority view has changed, and it is accepted as an attack vector that has to be mitigated as part of standard security engineering practice.

[2] This is not the first time the general community has ignored people who have demonstrated “air-gap crossing technology” and discussed how it can be used. The usual brush-off is “it’s too complicated to be practical”, which is just the old “Not Invented Here” syndrome, a major reason security fails us, as history has often documented.

koita nehaloti May 25, 2014 7:48 AM

re: Hardware backdoors

There may be some relatively cheap ways to locate a backdoor handling area from a chip:

1. Put some very special custom EEG-like sensors on its surface to make rough measurements of electric fields, exercise every documented function, and if some area never activates, it is suspicious. If some area activates more on some arbitrary input, it may be a sign that the backdoor’s secret knock partially matches: the first 10 bits of a knock code would match on 1/1024 of inputs, and the first 20 bits on about one input in a million (see the small simulation below).

(EEG sensors are used in hospitals and in medical research. EEG sensor for chips would have different scale in size and voltages.)

2. The read/write head of a spinning hard disk could be repurposed to make a precise magnetic map of a chip, revealing the wiring and transistor locations. Even water has small interactions with magnetic fields, and it seems a good guess that semiconductors would interact more, with or without current flowing. It could help to first treat the chip with a beam of ions that stick in it. If the ions are protons or helium, they will drift away within seconds or days.

This mapping goes beyond security, to independent quality research that could predict which areas of a chip are more prone to physical errors that occur too rarely to show up in functional testing at the factory or by the user. The predictions are mostly about individual chips, and to some extent about chip models. If the chip is a multicore processor, then some OS might offer a setting that enables avoidance of the worst core. If the chip is RAM, some OS might allow avoiding some memory addresses. The avoidance might be applied only to certain critical programs.

Actually, physical errors do relate to security: there might be some vulnerability that gets enabled when one particular bit, out of billions, is flipped by extra-computational physical causes rather than by software.
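
The partial-knock matching rate in point 1 is easy to sanity-check numerically. A small simulation, purely illustrative (the 32-bit knock and the input model are my assumptions):

import random

KNOCK = random.getrandbits(32)      # the backdoor's secret knock
TRIALS = 1_000_000

def matches_first_k(value, k):
    # True if value agrees with KNOCK on its k low-order bits
    mask = (1 << k) - 1
    return (value & mask) == (KNOCK & mask)

for k in (10, 20):
    hits = sum(matches_first_k(random.getrandbits(32), k)
               for _ in range(TRIALS))
    print(f"first {k} bits: {hits} hits in {TRIALS} trials "
          f"(expected ~{TRIALS / 2**k:.1f})")

With a million random inputs you should see roughly 977 partial matches at 10 bits and about one at 20 bits, which is what would make the extra activation measurable in principle.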


If a chip has a backdoor area, and it is concentrated within a square or circle that does not handle anything else, then the backdoor can be disabled by drilling a hole with a small enough drill. An attempt to use the backdoor could still cause some random behavior and crash the computer. Using a narrow ion beam may be an easier and faster way to disable the backdoor, if one has such an ion cannon (which resembles a tube TV).

All this needs at least a clean glovebox, if not a clean room.

Alessandro May 25, 2014 8:15 AM

@ Nick & Clive

thanks for your answers — I don’t take issue with the definition of a covert channel, what baffles me is the “Two cooperating processes that simultaneously compete for a shared resource” bit.

In other words: if they weren’t cooperating processes, if they weren’t “competing for a shared resource”, would it not be a covert channel? My guess is that it would be, thus making the “two cooperating processes that simultaneously compete for a shared resource” definition redundant.

It’s probably a trick question, I guess, because I would have answered that, if done maliciously, it could be classified as a DDoS.

Benni May 25, 2014 9:05 AM

Snowden’s lawyer is working hard to get him to Germany.

http://www.spiegel.de/politik/ausland/edward-snowden-lotet-rueckkehr-in-die-usa-aus-a-971551.html

He has now announced that Snowden is negotiating with the United States about a return.

Well, if the parliamentary commission wants to question Snowden, it certainly cannot do so while he is in an American jail.

The lawyer finds it shocking that Germany’s interior minister called Snowden a criminal. (We now know why they did that, after the testimony from the constitutional-rights experts: the BND is involved in very similar illegal things as the NSA, and will probably now have to stop.) The lawyer writes that perhaps the engaged public in Germany can put in a word and say Snowden has done so much good that he deserves to be treated better…

The parliamentary commission must be supported by the German government by all means.

The former president of the highest court said in his testimony: “If you do not feel you are being informed quickly, completely, and correctly enough, then please file a lawsuit at the highest court.”

Up to now, the only point on which some members of the commission did not feel “informed enough” was the government’s report that it does not want to let Snowden enter the country.

Ströbele has already said that he wants to sue the government if it does not let Snowden come to Germany. Snowden’s lawyer has already asked the government what crimes Snowden is supposed to have committed in its opinion, and whether these were political crimes, which would make any extradition from Germany forbidden under German law…

The government has not answered yet, but it should by the beginning of June. Perhaps then Ströbele can finally sue the German government. Such matters are usually decided very quickly at the highest court.

yesme May 25, 2014 11:13 AM

@Thoth

US Gov just can’t help themselves. Somehow they have the urge to be jerks. And they desperately need an enemy. An enemy far away and invisible. A boogie man.

Unfortunately the policy of fear has become the norm.

And of course China is gonna react to this. I wouldn’t blame them.

(There is also the possibility that it is a distraction.)

Nick P May 25, 2014 11:15 AM

@ Alessandro

It is bad wording. As far as covert channels go, a timing channel can be considered competing for a shared resource, as both processes keep trying to use it and checking how successful they are. Object reuse doesn’t fit at all, so it’s knocked out entirely. An overt channel is just a normal communications channel, and processes don’t compete for those. Denial of service is when one or more clients flood a channel to deny service to other clients. Your defense here is that it virtually never involves just two processes: many more, and typically distributed, so questions often use different terminology (e.g. “clients”).

So, if it’s two processes competing, the most likely answer is a covert channel (timing or resource exhaustion). Denial of service is a possibility, although rare on modern networks with just two processes involved. The wording is horrible and could cost you points; at least you have something to fire back with. (A toy timing channel is sketched below.)
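
For intuition, here is a toy timing channel between two threads that compete for the CPU. It is a deliberately crude sketch (the bit period, workload size, and threshold are arbitrary choices of mine; a real channel needs calibration, synchronization, and error coding):

import threading, time

BIT_PERIOD = 0.05   # seconds per transmitted bit

def sender(bits):
    # Encode each bit as presence/absence of CPU contention
    for b in bits:
        end = time.perf_counter() + BIT_PERIOD
        if b:
            while time.perf_counter() < end:
                pass                  # busy-loop: compete for the CPU
        else:
            time.sleep(BIT_PERIOD)    # stay idle: no contention

def receiver(nbits, out):
    # Time a fixed workload each period; slower means contention, i.e. a 1
    samples = []
    for _ in range(nbits):
        t0 = time.perf_counter()
        x = 0
        for i in range(100_000):      # fixed reference workload
            x += i
        samples.append(time.perf_counter() - t0)
        rest = BIT_PERIOD - (time.perf_counter() - t0)
        if rest > 0:
            time.sleep(rest)          # pad out the rest of the bit period
    thresh = (min(samples) + max(samples)) / 2
    out.extend(1 if s > thresh else 0 for s in samples)

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = []
rx = threading.Thread(target=receiver, args=(len(message), received))
tx = threading.Thread(target=sender, args=(message,))
rx.start(); tx.start(); tx.join(); rx.join()
print("sent    ", message)
print("received", received)           # expect some noise without calibration

Note there is no overt communication at all: the receiver learns the bits purely by observing how slowly its own work runs.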

Clive Robinson May 25, 2014 11:17 AM

@Alessandro,

what baffles me is the “Two cooperating processes that simultaneously compete for a shared resource” bit.

Ahh, that should be easy to answer, but “usage & practice” make it a little fuzzy.

Covert channels are a subset of side channels. The currently accepted definition of covert channels is that they are non-obvious and deliberate: not obvious to an observer, not accidental, and not an unavoidable consequence of, say, the laws of physics. Thus, if deliberate, the sender and receiver both need to cooperate in some way for the information to get out and be usable to the attacker. If there were no cooperation, it would be just a side channel or some other subset thereof.

The obvious problem with the definition is that, if you find a hidden channel, is the sender a deliberate design or an accident? The fact that somebody has made use of it via some form of EmSec receiver does not make it a covert channel, just a channel being used as one…

Which is probably why Lampson’s original definition back in 1973 was a little looser, and would include channels where the sender was not deliberately designed to leak information but did so as a consequence of its desired behaviour (such as the system load). Further, the ’73 definition considered “covert” only as hidden from the security monitors in the system. So if it flashed the front-panel status light in Morse code – nice and easy for humans to read – then, obvious as it is to the observer, it was still “covert” if the system security monitor could not observe it…

Nick P May 25, 2014 2:23 PM

@ Clive Robinson

re covert channels and side channels

Thanks for bringing that up as I forgot to mention it. The existence of unintentional channels is important as well. Many covert timing channels that were actually exploited were entirely unintentional. Yet, they were channels nonetheless.

“The obvious problem with the definition is that, if you find a hidden channel is the sender a deliberate design or an accident… the fact that somebody has made use of it via some form of EmSec receiver does not make it a covert channel just that it is being used as one…”

That’s the problem I’ve always had with the definitions. I call all of them covert channels. The reason is that, deliberate or not, the mechanism and the result of the leak are the same to the attacker. Side channels are also generally covert, if only because people can’t really see them in production without plenty of effort. So, I stuck with covert channels.

“Further the 73 definition considered “covert” only as hidden from the security monitors in the system”

I like that definition. That’s pretty much how I’ve thought of it, as anything is secret until someone knows about it. Yet these channels still evade detection in practice because they’re unmonitored. So it’s probably the ideal definition.

@ Mike the Goat

Why has the horn been equipped? And why only one horn? Who broke the other one off!? Give me his name, location, and IP address. I promise nothing unnatural will happen to him.

re BND cracking encryption

I agree that the BND likely hit the crypto the same way the NSA does: at every aspect except the algorithm. I’ve always said cryptographers focus their brains entirely too much on the primitives. The brains are needed most in the crypto constructions that apply primitives in secure ways. I’m glad Bernstein et al. are taking the lead on that front with their NaCl project. We need more like that, especially in interactive protocols such as TLS.
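
For readers who haven’t seen it, the point of NaCl is that the construction, not the caller, picks and composes the primitives. A minimal sketch using the PyNaCl binding (assumes pip install pynacl):

from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()
bob = PrivateKey.generate()

tx = Box(alice, bob.public_key)      # sender's view of the channel
ct = tx.encrypt(b"attack at dawn")   # a random nonce is chosen for you

rx = Box(bob, alice.public_key)      # receiver's view
assert rx.decrypt(ct) == b"attack at dawn"

There is no cipher, mode, or MAC to (mis)select; the Box does authenticated encryption and nonce handling itself, which is exactly the kind of brains-in-the-construction approach I mean.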

Regarding S/MIME, I thought its concept and abstract design are OK. Naturally, I’m suspicious past that about RFCs, implementations, and how apps use it. That highly assured mail guards used a variant of it shows it can be made trustworthy enough for DOD/NSA. Thing is, though, that PGP has been built on, reviewed, etc. by contributors ranging from companies to cypherpunks. That leaves little reason to build on S/MIME. So, a highly assured version of PGP is preferable unless someone has a good reason not to do this.

re hardware

RobertT was our insider on that market. He was clear that, other than intercepting the design, it’s hard to subvert a chip at the other layers. And it should be even harder to subvert on most cutting edge process node technology. So, whatever is built should use verifiable tools, hand off design by courier to right people, be able to trust them to produce the mask, courier it to fab, and have fab produce the chip. Knocks out plenty of issues, but plenty still remain.

Issues like this are why Clive and I periodically bring up voter schemes of mutually untrusting chips. In this case, they would be made at different fabs that presumably are unlikely to cooperate on a subversion. Such a system would likely be a timesharing system rather than a real-time system. That’s why I switched one of my designs to fast timesharing. It expands options I can use for initial systems. One idea was making whole system a virtual machine ported to arbitrary chips and whose code can also be re-arranged internally. This means we’re not strictly relying on the chip itself. Scheme has its own issues.
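
The voting idea itself is simple; the hard parts are the untrusted I/O paths around it. A toy majority voter, my sketch only:

from collections import Counter

def vote(replies):
    # Accept an answer only when a strict majority of the
    # independently fabbed replicas agree on it.
    value, count = Counter(replies).most_common(1)[0]
    if count <= len(replies) // 2:
        raise RuntimeError("no majority: fail safe, flag divergence")
    return value

# e.g. three replicas, one subverted or faulty:
print(vote(["0xdead", "0xdead", "0xbeef"]))   # -> 0xdead

A disagreeing replica doesn’t tell you which chip is subverted, only that the replicas have diverged, which is itself the valuable signal.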

” I am thinking along the lines of a Z80 clone – easy enough to organize fan, easy enough to verify and not too many places for anything malicious to hide.”

I disagree. If we’re starting from scratch, build a simple RISC processor with tagging or capability addressing. SAFE and CHERI both show it takes little hardware to accomplish, while providing massive & flexible protection. So, a simple system with such a modification should be the goal. I’ve also added a dedicated I/O processor to my requirements to eliminate the effects of tons of interrupts, make tagging to/from devices seamless, and isolate their memory access. (A toy model of the tagging idea is sketched below.)
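
A toy software model of the tagging idea, far simpler than SAFE or CHERI and purely illustrative:

DATA, CODE = "data", "code"

class TagViolation(Exception):
    pass

class TaggedMemory:
    # Every word carries a tag; the "hardware" refuses any access
    # whose tag doesn't fit, so code/data confusion traps.
    def __init__(self, size):
        self.words = [(DATA, 0)] * size

    def store(self, addr, tag, value):
        self.words[addr] = (tag, value)

    def load_data(self, addr):
        tag, value = self.words[addr]
        if tag != DATA:
            raise TagViolation(f"data read of {tag} word at {addr}")
        return value

    def fetch(self, addr):
        tag, value = self.words[addr]
        if tag != CODE:
            raise TagViolation(f"jump into {tag} word at {addr}")
        return value

mem = TaggedMemory(16)
mem.store(0, CODE, "add r1, r2")
mem.store(1, DATA, 42)
print(mem.fetch(0))       # fine: executing a code word
try:
    mem.fetch(1)          # injected data can never be executed
except TagViolation as e:
    print("trapped:", e)

The point is that the check happens on every access with no developer involvement; that’s exactly the property we lose when protection is pushed above the hardware.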

Right choice of components would simplify this by keeping each component simple except for the RISC core. That’s inherently more complex than most things, yet could be shared across any components doing processing: main CPU, I/O coprocessors, crypto engines, app-specific coprocessors, etc. And the good news is OpenCores’ site already has most of the components people need. Examples of implementing tagging/capability units can be found in the open specs of the DARPA clean-slate efforts and in the big Capability-Based Computer Systems ebook online.

This is all easy to do for knowledgeable people if the target is FPGAs or ASICs. That small teams with little hardware experience are producing working prototypes under DARPA says plenty to that effect. If you’re using TTLs, etc., I’m not sure how easy it will be. I know the System/38 capability architecture was built out of IBM’s version of TTLs. That’s hopeful, as it was a brilliant architecture for its time with many good properties. The Burroughs tagged architecture (1961) is said to use “discrete transistor logic”, although I’m not sure what that implies about implementation on TTLs. And it was a mainframe.

These are much more complex than the simple tagged/capability architecture I mentioned above. That is necessarily more complex than a Z80 clone. Yet, if we don’t solve the POLA & code vs data problem at the hardware level, it’s pushed onto developers above the hardware. And we’ve seen what that’s led to in both proprietary and FOSS solutions, haven’t we? Whoever designs the next stack has to get this stuff right that one time. Otherwise, they’re expecting their users to get everything right every time.

@ Mr Pragma

Good call: all that firmware is the reason I decided against using the old machine I bought in any connected application. I told people here that these are best used behind an assured guard. Mike has a particularly clever one. Clive has his thing. My design is a cross between a network guard and NRL’s Pump, but with no DMA. And it and the air-gapped machine check each other.

Regardless, the problems only exist if the machine is connected. The machine can still be used for all kinds of stuff if it’s unconnected. The common case is to produce signed data on the machine, such as executables or documents, then put it onto write-once media to transfer to a connected machine. The guards and such allow them to safely receive data, albeit without guaranteeing the data will have no effect. At least that allows people using the same program (e.g. GPG) to communicate with each other over the Internet, but with manual steps. This would naturally be reserved for stuff that demands the extra protection.

Regarding usability, I think you would be surprised. One of my early attempts at usable, older-than-subversion hardware was a Mac laptop from around 2003. Most subversion efforts I’ve seen were 2004 and up, so I gambled on this. Most early Mac laptops didn’t have built-in WiFi, & the seller didn’t list it, so I assumed it didn’t have it. (Rule: never assume.) On first boot I discovered I was online, as the convenient Rendezvous system had turned on wireless, found an open network somewhere, and connected. So, that system is burned (sigh).

Yet, it did provide an opportunity. I decided to use it as an untrusted backup system in the event the main machine fails. I tested it on various sites, including YouTube, and it worked fine, if a bit laggy at times. There are Linuxes and software of all sorts (even VLC) for PPC, so the 2003 machine is beyond usable. There are people doing similar stuff on even older x86s with Linux distros designed for them.

Now, one can use even older hardware if he or she is willing to trade the Web away for the Internet. The modern Web takes tons of HTTP, Javascript, SSL, Flash, etc. The “Internet” came before it. It supported basic HTTP/HTML, email, IRC, Usenet, music, movies, pictures, business apps, client-server, P2P, and more. That’s quite usable, although the machine won’t participate in the apps/services of the masses.

And one can run an entire business on such systems if they use two separate machines for each user: trusted, limited system for business critical (and confidential) stuff; untrusted, Web-enabled system for the benefits it brings. Web would mainly be used for emails, research, etc. I’ve built things like this. I often used a KVM switch for end user convenience and included a guard to safely release information from business network to untrusted network. Designed right, the delay from the guard is 30s-1m. That’s bearable.

I also discovered that the concept can be pushed further back in time. Steve Jobs’ NEXTSTEP 3 demo shows a system that’s quite usable today if we’re focusing it on critical data. It has a GUI, multimedia, email, networking, ease-of-use features, minimal resource requirements, etc. The date is 1992. So, anyone willing to port GPG, some guard drivers, and an Oberon compiler to NEXTSTEP could work on a quite usable air-gapped machine. And anyone homebrewing a system with minimal resources has some inspiration as to what can be achieved with early-90s technology.

And if you don’t need graphics… well, I’m sure you can see where this is going. There’s plenty of options going further and further back. Just depends on what price people will pay for subversion resistance, verifiability, usability, etc. Lots of tradeoffs to leverage until more secure architecture is designed, produced, and (somewhat?) validated.

DB May 25, 2014 2:30 PM

@ Josh, Clive, and other Guardian hardware destruction conversation people:

All that “Pinky and Perky” revealed about “the secure destruction of computer equipment for national security reasons” is that any hardware chip with an EEPROM on it can be compromised. For example, by overwriting it with hostile data/code or by using it as a side channel for storing secret information on it for later extraction.

Should this really be surprising to security professionals? Not really, no. So this is hardly revealing secret information so much as confirming what we already know by common logic, regardless of how much governments may love to try to arrogantly classify common logic in a useless effort to keep it secret.

So you cannot derive that any specific attack code exists from what chips they destroyed, because NONE of this means that any specific chip is known to be used for such attacks, only that it COULD be done theoretically. That’s why they even killed the power inverter chip. They’re just covering all the bases. That’s what you do when you want to be thorough.

I think our real takeaway from this should be how much it highlights how woefully insecure our hardware system designs are. And highlighting this should be encouraged, because it encourages people to come up with and implement more secure designs instead.

Clive Robinson May 25, 2014 4:53 PM

@DB,

… is that any hardware chip with an EEPROM on it can be compromised.

Yes and it’s one of the things I’ve been pointing out for many years (not that many people listened).

The thing is, though: if I give you a computer board, do you know whether its chips have mutable memory in them or not?

The simple answer is that you don’t, even when you’ve signed an NDA with the manufacturer and been given a data sheet. In fact it’s known that some manufacturers make SoCs with function straps on them. That is, the actual silicon contains various designs that are “pinned out” but disabled by logic straps (think of a tristate latch on a bus with its /OE pin taken high; the circuit behind the tristate is invisible to the bus).

There are reasons to do this, one of which is the “heat death” issue. The size of transistors has shrunk dramatically, and they have increased in speed just as dramatically. The problem is that even though a transistor might be one hundredth the area it was, the increase in speed has stopped any power reduction; thus you have up to 100 times the power to be dissipated for the same chip area. That cannot be done, so to keep the speed the chip manufacturer ends up with a nearly empty chip unless he can fill it some way. There are two basic ways to fill it: the first is alternative functions, the second is flash memory, which dissipates hardly any power. So the actual silicon may have three or more chips routed out on it, and only at the very end of production does the chip’s functionality get set to whatever the manufacturer needs to fill orders. How they enable and disable functionality is usually proprietary.

Thus even as the person who purchased the chips, you really don’t know what’s really on the silicon. However, if you have the appropriate resources, you can find out. State-level actors have the resources, one way or another, to find out, whereas the rest of us don’t.

An example of “not knowing” is the Broadcom chip in the Raspberry Pi: since it came out, people have been trying all sorts of tricks to find out what extras there are on the silicon and whether they are reachable and usable. And bit by bit, extra functionality is being revealed every so often.

As I said, by only grinding off “some of the chips”, not all, they gave us information on them we might not otherwise be able to get. That is, Pinky & Perky were by no means thorough; they left many chips untouched and were selective instead. Thus their actions narrow our search criteria to their selectivity, which gives us an opportunity to find out more about their reasoning and knowledge.

Whilst I agree SLAs make secret things that are known to others, and thus it might appear an exercise in futility, it is still a breach of national security to make them known to the public at large, even if others have done so… A classic example of this is TEMPEST: on the electrical side you can, with basic textbooks and first principles, work out all the classified parts. That said, if I were to tell you something that was not obviously publicly known, then I would be looking at jail time should the state decide to prosecute (and although it has not happened to me, the UK Government has prosecuted others several times in my working life for “confirming the bleeding obvious”).

DB May 25, 2014 5:31 PM

@Clive

“Pinky&Perky were by no means thorough, they left many chips untouched”

There are different levels of looking at “thoroughness”… surely to be really thorough they’d just feed the entire computer into a super-fine-powder-making grinder? And to be REALLY REALLY thorough, maybe they’d follow that up with a nuke on top of the Guardian office where the powder is still floating in the cool basement air?? Ok… a bit extreme, but you can see my point (American final solutions usually mean blowing things up at some point). Obviously, I was originally talking about thoroughness in the sense of grinding off everything with EEPROM storage, regardless of whether it had ever been or could ever be used for nefarious purposes on that particular circuit board or not.

You have a good point about disabled features on chips that aren’t even listed on data sheets though. It would certainly be hard to find all those cases when singling out specific chips, unless you had a good knowledge of them all well beyond just data sheets.

Benni May 25, 2014 7:06 PM

In their testimony, the three law professors, experts in constitutional rights, said that if the NSA constructs some building in Germany, then the German government has the obligation to prevent it.

In practice, this turns out entirely differently. The NSA Dagger Complex, the host of the NSA cryptologic center, which according to Spiegel also hosts TAO members engaged in operations against Germany, was built with German tax money:

http://www.spiegel.de/netzwelt/netzpolitik/nsa-standort-dagger-komplex-deutschland-zahlt-858-000-euro-a-971177.html

Yes, German taxpayers involuntarily helped the German government create an NSA surveillance station.

This just shows how deep the interconnections between the German BND and the NSA really are.

Skeptical May 25, 2014 7:20 PM

The Wall St. Journal carries a few salient details about what the recent US indictment of five PLA personnel might signal.

When the indictment was issued, I read it as a warning signal that US policy on Chinese commercial espionage was shifting (in fact I think the shift was likely planned some time ago and simply delayed by the Snowden leaks). At least according to the sources used in the article, that seems to be the case.

A few quotes:

Monday’s indictment, in effect, is aimed at providing a foundation on which the U.S. government could build an array of punishments. It sets out evidence in detail – naming alleged actors and affected U.S. companies and organizations – that could be used to support additional penalties.

“Criminal charges can justify economic sanctions from our colleagues in the Treasury Department, sanctions that prevent criminals from engaging in financial transactions with U.S. entities and deny access to the U.S. financial system,” said John Carlin, the head of the Justice Department’s national security division, in a speech Wednesday at the Brookings Institution think tank. “They can facilitate diplomacy by the State Department.”

On the prosecutorial side, follow-on steps may include releasing more evidence about the hacking cases, or filing new charges in other hacking cases in which investigators have collected a critical mass of evidence, officials say.

Officials were mum on the nature of the additional evidence. But a person familiar with U.S. probes into Chinese hacking said investigators often collect video evidence of hackers.

“Some of these actors are not real good about turning off the Skype camera on their machines while they are working,” this person said.

A more controversial response advocated by some Federal Bureau of Investigation officials is to work with companies under cyber siege to feed bad information to hackers, said a person familiar with the discussions. The goal would be to cast doubt on the quality of the data being stolen, and in addition raise questions about information taken from other companies.

Keep in mind, a year ago when Mandiant released its report documenting the PLA unit used to conduct commercial espionage, the Chinese Government claimed it to be false and challenged the US to produce evidence that would hold in court. The US has taken that step.

There’s a clear narrative, and clear steps of escalation, in the US response to PRC commercial espionage.

I don’t view this signal as a bluff on the part of the US Government, which implies that PRC commercial espionage has reached a magnitude such that the US Government finds that the expected damage from economic and legal escalation is outweighed by the expected damage of a continuation of the status quo.

I also view increased cooperation in connection with the PRC commercial espionage threat by companies in the US and other targeted Western nations with their respective intelligence services as highly probable.

This also presents an opportunity for those focused on building (in both a policy and an engineering sense) a more secure internet and more widely available secure information systems. The threat of commercial espionage is of much higher priority to most companies (and to some extent consumers) than the possibility of the NSA eavesdropping. I would expect this approach to get far more traction than one focused on the NSA as a threat.

DB May 25, 2014 7:52 PM

@Skeptical

The threat of commercial espionage is of much higher priority … than the possibility of the NSA eavesdropping. I would expect this approach to get far more traction than one focused on the NSA as a threat.

I’d agree with this. Just make sure that we’re not creating systems that are backdoored for the NSA as part of our protecting against China; that would be equally unacceptable. Weakening a system “only” for the NSA makes it weaker for everyone: even in the best case, if you consider the NSA a “good actor” (lol), any secret key can still be stolen. Best to have no backdoor key at all.

Thoth May 25, 2014 9:00 PM

What we know from the Snowden leaks is what operations the US has run to spy on its own friends and foes; any other nation can do that, including China. If the Chinese caught an American spy, they would send that spy to be riddled with bullets. If the Americans find a Chinese spy, they issue weird policies like banning all Chinese from entering Defcon, and jail the Chinese spies.

It’s a nation-vs-nation spy game that has been there since antiquity. You capture a spy, you do whatever you want to the captive in accordance with military law.

Pulling a dragnet to ban all Chinese from entering Defcon, or something like that, is really hilarious coming from Washington, and China may soon have a counter-reaction (probably starting to harass foreigners more frequently to vent their anger, or the like). It’s “childish politics” gone bad.

If you capture a foreign spy, you may handle them according to military law and the international laws governing military actions, and not push your frustration onto the general populace.

One thing we can be sure of from these episodes, where innocent netizens are pawns in the hands of international giants: secure and open systems built with security as one of their basic parameters, and not as an afterthought like many other products, are essential.

Cyber-timmie strike force May 25, 2014 9:10 PM

Having made a few anodyne remarks, skeptical’s back with the NSA INGSOC. Here’s the party line now: blame the PRC for everything NSA is proven to do. Then NSA can try and dodge the blame for unavoidable countermeasures by claiming China is the threat, not them.

Note the ham-handed attempt to tell you what to think: the NSA is not the threat, no, no, no, industrial espionage is the threat. Tell that to Eliot Spitzer. Tell that to al-Awlaki’s teenage kid. Tell that to Jane Harman. Tell that to the little puffs of pink mist that used to be Abu Suhaib al Australi before the universal-jurisdiction war crime of his summary execution.

NSA, panty-sniffing cowards directing murder and torture from the safety of their sweaty cubicles.

https://firstlook.org/theintercept/article/2014/02/10/the-nsas-secret-role/
https://www.thebureauinvestigates.com/category/projects/drones/drones-yemen/

Mr. Pragma May 25, 2014 9:48 PM

Misunderstanding, Nick

While I did mention Ethernet, my point was more general. For any kind of actually useful system design one will need, for instance, storage, as well as some kind of interfaces (potentially both human-to-system and system-to-system).

So, there will be lots of “environment” issues like IO and supporting circuitry. You want multi-core? Welcome to the MMU (and potentially IOMMU). You want some basic interaction? Welcome to graphics, printer, keyboard. You want some kind of storage? Welcome to the amazing circus of storage IO. And so on.
And pretty much every single one of those subsystems has a microcontroller at its core, too.

I think we will need to basically redo the whole system stack, from chips up to, say, server software and, of course, everything in between.
And to make it funnier and a lot harder, we will need to keep at least a reasonable degree of compatibility with what we have right now.

Unlike some here (whom I value, don’t get me wrong) who strongly concentrate on maxima, with regard both to secure design and to the threats to handle, I’m more interested in pragmatic solutions.

Let me name a major evil thing: Verilog.

In my mind’s eye, the fact that there is a closed “standard” deeply integrated into today’s chip design-to-production cycle is a more dangerous, more problematic and more important threat than (and closely linked to!) any specific chip manipulation.

As I see it, there is only one way to make sure – verifiably – that any chip is “clean”, and that is to have an open standard and implementation(s) for describing chip logic, for managing designs, and for verifying them (credibly and trustworthily).
A certain type of gate, being the lowest-level “lego piece” in chip design, may be implemented in different ways using different technologies/processes, which would then be the “drivers” for that chip-design standard/language.

That’s important because, in the end, the most reasonable and assuring way to pose the “has it been manipulated?” question is to check whether the design has been congruently and consistently implemented through the whole process, up to the final chip.

Without such a solution (and with Verilog), that question basically remains a multi-layered (from designer to fab) lottery and trust game.

Sure, we do tests today, too. But we do – and can only do – tests of whether the chip works as specified, somewhat akin to testing whether a (potentially badly tainted and nsa-spywared) computer properly boots and, say, starts X and KDE, or the Apache server.
Similarly, we currently know that a given chip performs certain functionality, but we do not – and usually cannot – know what else it does.

Mr. Pragma May 25, 2014 11:09 PM

Thoth

Pulling a dragnet to ban all Chinese from entering Defcon or something like that is really hilarious …

Yep. My first thought was also “what a ridiculous bunch of brain dead and ignorant morons the americans are” (again, “the americans” meaning the usa and its agencies – not the single individuals).

Sure enough China could do something like defcon, too. But why should they (do publicity stunts)?

And how immensely smart to put some Chinese agency officers, who wouldn’t go to the usa even if invited with expenses paid, on a sanctions list.

There is something deeper behind all that, however. While the usa formerly called the shots, and could and did hurt painfully, today the usa is all but a once-powerful croc that is rapidly losing its teeth.
And, in fact, the Russian and Chinese responses to the usa’s wanton sanction bullying made the us attack more expensive and painful for the usa, and sadly its european colonies and vassal states, than for Russia or China.

Back to China. And now what? Do those eggheads in washington now put Huawei on a sanctions list? Oopsie, they’ve already de facto done that.
The result? Huawei is doing well, albeit fenced out of the us market, while cisco and the other us corporations take billion-heavy hits.

That’s where the american double-speak and hypocrisy break down. The washington thugs may succeed in “convincing” their european, japanese, etc. thug vassals – but the real people, the market, have gotten the message: Do not buy american!

Is this pure politics on my side? No! Actually, I’m convinced that avoiding us-american products is (among other things) one reasonable approach to enhancing security.

Clive Robinson May 25, 2014 11:52 PM

@Mr. Pragma,

With regard to using multiple microcontrollers/cores with IO running on other microcontrollers/cores: it’s the model I had assumed all along. It enables the processing microcontrollers/cores to have a very tiny kernel of just a few kilobytes. It’s also the model used by old “Big Iron”, so it is/was fairly well understood.

As for Verilog, I have a natural dislike of it, as it takes a programming-language metaphor. Thus it tends to suffer from “serial thinking” being applied to an inherently parallel process by many people who use it, and the results are usually inefficient. VHDL is better at avoiding that particular problem, but… it is inherently a hardware-focused system (surprise, surprise), which many people find difficult to get a grip on. Thus it appears that Verilog is preferred by those with a CompSci background, whilst VHDL is preferred by those with an engineering background. Not unexpectedly, some people are writing, or have written, “front ends” for VHDL, both to remove the drudge and even to provide high-level language conversion [1].

You pay your money and you take your choice… however, there does appear to be a bit of a cultural divide between Verilog and VHDL, which has other implications, such as finding employees proficient in them.

With regard to “pragmatic solutions”, I’m all for them if, and only if, they are on sound foundations, which few are these days.

One of the major reasons we are in the mess we are currently in is that historic pragmatic solutions, made due to constrained resources, developed a life of their own and evolved into the proverbial 600lb gorillas (i.e. C to C++). The result has been “fairy castles in the air”, where the underlying pragmatic assumptions remain locked in under the weight of “work-arounds” and “new paradigms”.

Many of our current systems, when viewed from a distance, resemble “a blindfolded elephant with a noose around its neck riding a unicycle”; from this distance the only real question that arises is: why has the inevitable not happened yet…

The future of computing is very clearly not purely imperative any longer. People have got to get to grips with their serial thinking and break out of its constraints. When you look at the designs of CPU chips, they are stacked full of tricks to support that serial thinking, which are, at the end of the day, a waste of resources.

[1] This frontend translation to VHDL work has been happening for a couple of decades as this 1996 document shows http://www.eda.org/rassp/documents/newsletter/html/96sep/news_11.html

Clive Robinson May 26, 2014 1:16 AM

@Mr. Pragma,

When thinking about US-Chinese relations, that old “follow the money” advice still holds true.

The US has a significant problem in that it has fallen into the trap of being “the trading currency”, which has artificially inflated the worth of the USD to as much as five times its real value as a currency. Unfortunately, to trade with the US, China has had to take on a significant percentage of the gross US debt in the form of currency-tied paper.

One of the major but unstated reasons for the Iraq war was that, tired of US sanctions, Saddam had approached several EU countries with a deal to sell Iraqi oil only in euros, in return for them lifting sanctions. Had this happened, the USD would no longer be the major reserve currency, the euro would, and traders would have revalued the USD down to a fraction of its value. Thus the US had to stop Saddam’s plan at all costs, and throwing a few trillion at a war to remove him was a good investment.

Further adding to US woes is what the Fed has been doing with other people’s gold: it has in effect stolen title to it and used it to further support the USD, which is why the Fed is not letting other countries have their gold back… The simple fact is that in recent times gold has been appearing as if by magic; that is, there are more gold certificates out there than there is mined gold, and people are asking questions that don’t get answered in a way that resolves this disparity[1]. It has not helped that some bullion turned out to have been adulterated with cheap tungsten slugs, which were difficult to spot with conventional non-destructive assaying techniques [2].

[1] http://www.gata.org/files/GATAFedResponse-09-17-2009.pdf

[2] http://www.zerohedge.com/news/2013-08-16/gold-or-tungsten-heres-how-know

koita nehaloti May 26, 2014 2:00 AM

More on detecting hardware backdoors:

Direct a not-so-precise electron beam at every part of the chip, one area at a time, while it’s working. This should cause random errors in the targeted area. If there is one area that can be targeted without causing any errors, then that is a backdoor-handling area. It is also possible that the electron beam will accidentally trigger the backdoor.

Does this sound about right? How about the previous post?

What are the chances that a backdoor is on one area vs spread all over in tiny pieces?

yesme May 26, 2014 2:44 AM

@Clive Robinson

The problems of both C and Unix have been known for a long time now. C++ isn’t the answer to C, and Linux / *BSD isn’t the answer either.

I think that is safe to say.

But what is the real problem? AFAIK it is competition.

  • Competing standards. Think webm/h264 and ODF/OOXML. This results in more libs (when ready), which require more maintenance and more LOC.
  • Competing products. We have MS Windows, OSX, *BSD, Linux, QNX, Chrome OS, Android and lots more. None of them are compatible with each other (maybe via POSIX, but that is broken in itself too).

In short, the answer is IMO cooperation instead of competition. Although that answer is quite simple, the implementation isn’t. It affects the business model of the companies involved. We all know how companies are tied to their business model. They only change that model when there are no alternatives left.

There are other answers / solutions, such as governmental Open Source projects. But I think these are even harder to realise, at full scale, than the one I mentioned. A funny detail is that Brazil is doing rather well with Open Source, and the EU also has some interesting projects.

Iain Moffat May 26, 2014 4:00 AM

@ Nick: regarding ‘the Burrough’s tagged architecture (1961) is said to use “discrete transistor logic,” although I’m not sure what that implies about implementation on TTL’s. And it was a mainframe.’ Sure that wasn’t diode-transistor logic? TTL is descended from early diode-transistor logic using some optimisations only possible in a single piece of silicon (in particular bipolar transistors with multiple emitters). Certainly any design that can be abstracted as gates and flip-flops can be re-implemented in TTL or even CMOS. What is more problematic is to directly transcribe early computer circuitry that was easy to build in the 1950s or 60s with a full range of discrete components including inductors and large capacitors (not to mention electro-mechanical elements) into modern silicon where a designer really only has transistors, resistors and small capacitors on chip.

Mr. Pragma May 26, 2014 5:34 AM

@ Clive Robinson

First: by “pragmatic” I certainly did not mean the (typically american) attitude of “somehow makeshifting something (more or less) working”.

What I meant was that theory and research and striving for perfection are important, but that I personally strongly favour to also implement some of the many things learned, either by research or by painful experience.

As for processor architecture, I widely agree. I do not, however, think that keeping controllers outside the processor is a religious issue or the only right way to do things, simply because having everything outside creates problems and/or penalties, too.

Re Verilog, I think we need a restart. We have learned very much in the last 20 years, and it should be applied. My approach would be to break everything down into reasonably small blocks, represented as “active objects” which could be verified, tested, simulated, etc., and for which every chip maker/fab would have “drivers” for any used technology, with those drivers also “filling in” some real-world values for the block specs. I know that some parts of this are already reality, but unfortunately those designs are usually closed and are/have been driven by producer and vendor priorities (which, to a degree, is understandable because they profit from vendor lock-in).
At the latest since we learned about nsa & co., there is a clear and unconditional must: we MUST be able to verify that the chip end product is closely congruent with the design spec.

Ad “us-Chinese”:

I agree except for one thing: no, the us have not “fallen into the trap”. The us (well, its politicians and bankers, who happen(ed) to be also the ones representing and running the us, so it’s fair to say “the us”) has willfully and repeatedly designed, created, and implemented the us$ monopoly. And they have knowingly and willfully declared their readiness to fight wars for that, and they have done so.
That was not a trap. And if the usa gets strangled and choked by its frankenstein monster, it deserves it.

@ yesme

I fully agree.

But: competition is created by something, usually capitalism. And indeed pretty much every standards body is plagued by corporate interests, often up to the point of being completely perverted.
Unfortunately this has been particularly grave because the usa was for some 5 or 7 decades the economic power, and the usa is, beneath a thin layer of democracy bla bla painting, brutally and ruthlessly ultracapitalistically driven. That translated into a gazillion inconsistent and, of course, competing “standards”, many of which were driven more by corporate interests than by reason, let alone the common good.

We should, however, for the sake of fairness also mention that there are different scenarios and needs to be addressed. A desktop OS is something quite different from, say, an embedded RTOS. Quite some OSs also arose out of a desire not to fall completely victim to corporate interests (think windows or unix (at&t)).

Unfortunately I don’t see too much hope. The “american century” (what an arrogant and gross idiocy in the first place!) comes to an end, but I don’t expect europe or China to do that much better in terms of standards for the common good.

One issue, being at that, that I immensely detest and that is so typically american (“big is good!”) is IPv6. For one, IPv4 would still last for quite some time if the us entities that ignorantly hold large /8 address spaces were stripped of those spaces.
More importantly though, why not IPv5a (5 exists, albeit basically unused), i.e. 64-bit addresses? They’d be way easier to implement, they’d happen to nicely fit modern processors (64 bit), and they’d be fu**ing sufficiently large. And, being at that, we should to a very large degree stick to proven and established technologies and protocols rather than insanely turning a major part of networking upside down by, for instance, replacing arp with an abomination that has but one and only one raison d’etre: marketing, new products, lots of sales, hurray.

Mike the goat May 26, 2014 5:55 AM

Mr. Pragma: Exactly. A lot can be done with older processors, and it is surprising how much functionality you can achieve with highly antiquated hardware. I have an old SPARC unit which I use as a secure machine (with an atmel-based data diode) which houses my keys and other private data. Yes, supporting ICs are a potential source of trouble, absolutely. Re your trust logo idea – it certainly has some merit but would be difficult to implement in a meaningful way. DNSSEC is a disaster and likely is a good example of the kind of destruction NSA shills have done in IETF/internet standards working groups.

yesme: the US gov’t is going to find themselves increasingly isolated, like that nasty kid in grade school who ate their lunch at a deserted table in an otherwise bustling cafeteria.

Nick: the decision to equip the horn was not an easy one, but it had to be done. I am sure you are aware that all horn equipment must meet or exceed EAL6. There are two horns, but only one is equipped at this point due to a funding shortfall. The plan was to have the firmware of the second horn authored by a different (and mutually untrusting) group. Re PGP – absolutely – a highly assured version of PGP would be a boon for everyone. There is a lot of flexibility in OpenPGP, and this may be both a blessing and a curse; I imagine a ‘trustedPGP’ type project would have to limit the selection of ciphers and cull some of the lesser used features to reduce the size of the code base and make auditing and/or shoring things up more practical. Re chips – I spoke of the Z80 as it is dead simple, still in production, and there are multiple vendors in multiple countries, but as I suspect you are inferring the cons of using an 8-bit processor are numerous, especially if such a slow chip is going to be doing crypto. I can just imagine key generation! I agree that a RISCy processor would be a better and more flexible choice.

yesme May 26, 2014 6:26 AM

@Mr. Pragma

I know cooperation is not gonna happen in Corporate America. Too bad, but that’s the way it is.

Talking about IPv6. How do I say it… too much isn’t good. I would even go further than what you said: I would go 48 bits, and when that is all used (….) we make it 64 bit. Just 3 blocks of 4 hexadecimal characters, separated with a dot like we have now with IPv4, and no abbreviation possibilities. That would make it much easier to parse, both for man and machine. Here is an example.

protocol://0123.4567.89ab:port/path

Btw, IPv6 also has a nice surprise, and that’s the port number notation ( :80 ).
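
For what it’s worth, a fixed-width, no-abbreviation format like that is indeed trivial to parse. Here is a minimal sketch in Python of a parser for yesme’s hypothetical notation (the format is his proposal from this comment, not any real standard):

```python
import re

# Parser sketch for the proposed fixed-width 48-bit notation:
#   protocol://0123.4567.89ab:port/path
# Three dotted groups of exactly 4 hex digits, no abbreviations.
ADDR_RE = re.compile(
    r"^(?P<proto>[a-z][a-z0-9+.-]*)://"                  # scheme
    r"(?P<addr>[0-9a-f]{4}\.[0-9a-f]{4}\.[0-9a-f]{4})"   # 3 x 16 bits
    r"(?::(?P<port>\d{1,5}))?"                           # optional port
    r"(?P<path>/.*)?$"                                   # optional path
)

def parse(url: str):
    m = ADDR_RE.match(url)
    if not m:
        raise ValueError("not a valid 48-bit address URL")
    addr = int(m["addr"].replace(".", ""), 16)   # one straight hex parse
    port = int(m["port"]) if m["port"] else None
    return m["proto"], addr, port, m["path"] or "/"

print(parse("protocol://0123.4567.89ab:80/path"))
# ('protocol', 1250999896491, 80, '/path')   (address == 0x0123456789ab)
```

The fixed width is what buys the simplicity: there is exactly one textual form per address, so no normalization pass is needed before comparing or hashing addresses.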

Mike the goat May 26, 2014 6:55 AM

Mr. Pragma re IPv6: agreed. Forcing some of these large organizations with /8s to give up some of their address space would effectively mitigate the problem. Re-assessing some of the reserved space may also yield a solution. Having 10.0.0.0/8 in RFC1918 probably made sense at the time, when address space was plentiful, but nowadays it appears to be a huge waste when 192.168.0.0/16 seems sufficient for all but the largest internal networks. Dedicating an entire class A to loopback was another massive space waster.

Skeptical May 26, 2014 7:45 AM

A few remarks in response, each separately, to comments by

cyber-timmie,
Thoth,
DB

@Cyber-timmie: Why not use your usual persona/pseudonym? It is as transparent here as it is elsewhere. Do you wish to keep the other one reasonably “clean” (here, at least) for serious discussion, while you vent your childish spleen in this one? Out of respect for your desire to separate the two, no doubt carefully protected by a switch of tor exit nodes (why bother shifting the VPN exit into tor though), I’ll not mention the certain other persona/pseudonym, but if it’s transparent to me, it’s transparent to anyone else familiar with your postings across sites.

@Thoth: You’re focusing on a tiny piece of a much larger picture. Ars magnified that tiny piece for obvious reasons (it’s the juiciest and easiest angle for its readers). It was mentioned in the context of an official speculating about the range of things the US is considering. That range extends from numerous trivial things, none of which might matter much individually but which taken together may have an impact (and are speculative), to legal actions, sanctions and trade implications. Ars selected the one item of speculation they thought was juiciest to their readers and ran with it, but in doing so they distorted an accurate view of the policy now on the table.

For instance, to look at the higher end of the spectrum, the Chinese may be able to bully Bloomberg News into killing stories about the wealth of Chinese Government officials (they did), and may be able to expel New York Times reporters from China in order to prevent them from writing additional such stories (they did), and are obviously able and willing to do far worse to their own citizens who dare to look into such matters (they do).

But the US and its allies are not bad at financial surveillance, and the Chinese elite are extremely wary about keeping their money in China (they prefer to send much of it out, for obvious reasons).

Sanctions targeted at particular high level officials (we’re not at that level – yet) would have bite.

Then there is also the matter of the State Owned Enterprises which were benefiting from the commercial espionage, and the leadership of those SOEs who could likely be held responsible legally. More sanctions, with huge bite.

So don’t be distracted by a decision of a media outlet to focus on one tiny item of speculation by an official who was attempting to discuss the vast range of options likely on the table for US policymakers.

I’ve said it before, and I’ll say it again: we’re seeing a shift in US policy towards PRC commercial espionage, and the pressure will be ratcheted slowly, carefully, until the PRC re-engages to help resolve the issue.

If the PRC wants to simply play tit for tat as that pressure is ratcheted, without addressing the underlying problem, they will be very unhappy with where that strategy leads them. The US objective here is very simple and very legitimate: no more commercial espionage. And the US business community has shifted from lobbying the US Government to not make waves with the PRC over the issue, lest they upset business, to begging for action by the US Government.

@DB: The argument you state (and that Schneier has given many times) is exactly the argument which I think would be most effective (namely, that to secure the internet and information systems against commercial espionage, and criminal intrusions, we need truly secure networks and systems, and that means security for everyone). Mind you, I’m not totally convinced by it (yet), since I do not think it’s important that lawful surveillance be possible to implement, but there’s no doubt that such an argument would be far more powerful.

Mr. Pragma May 26, 2014 8:21 AM

yesme and Mike the goat

Of course, from a technical point of view I see the beauty of 48-bit addresses in that a 64-bit register could hold an ip/port pair. But I’m not sure that’d be sellable, and it’s not really that important an advantage to have 48 rather than 64 bit addresses.

And one would not have to decide. One could have 1 bit, say bit 47 (i.e. the lsb), as a flag for whether the last 16 bits are a port (“short notation”) or the lower 16 bits of an ip, with a port to follow in another 16-bit chunk.

One could combine this to make even more sense, or be more practical, by turning it upside down and saying that any current ip4 address automatically (by definition and standard) continues to be a valid IP5a address with the first 32 bits being 0, where bit 31 being 0 means that the following 32 bits are a full IP address (i.e. without “embedded” port), whereas bit 31 = 1 means that the next 16 bits are the lower part of a 48-bit IP5a address and the last 16 bits are the “embedded” port number.

Sounds maybe weird but look:

IP4 address 100.101.102.103 would become IP5a 0.0.0.0.100.101.102.103 (which is a full 64-bit IP5a address),
whereas IP5a 100.100.100.1.100.101.0.80 would be the 48-bit IP5a address 100.100.100.1.100.101 with port 80 (alternatively writable as ‘100.100.100.1.100.101:80’).

The beauty, as you certainly already saw, would be easy-peasy migration. Every current IP4 address could continue to be valid with just one very minor change (prepending 0.0.0.0 in front).

At the same time, one could offer the IP4 waste pigs the /24 network they are already used to, yet take away all but 1 of their addresses. Actually, they wouldn’t even need to change their internal networks; a border router would simply translate between IP5a (ext) and IP4 (int).

Same goes for ISPs.

From there on, everyone could comfortably and slowly switch to IP5a without problems.

Going further, one could even take one /8 IP4 range, say X/8, and use X. … as a marker indicating to stubborn routers that X.n.n.n is not an IP4 address but the first part of an IP5a address, if for some migration time, say 2 years, one handed out only X.n.n.n.m.m.m.m IP5a addresses. Just in case someone got his border routing wrong and happily continued to use IP4 addresses; such an ISP could immediately know that he should convert that IP4 address to an IP5a address (and possibly send that client a reminder).

Why? Because IPv6 is a strongly unbeloved child (for many reasons) and a monstrosity, and because ease of migration is the go-or-die issue for new standards of high practical and global impact.
And because “my” IP5a makes sense. Plain simple.
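
To make the migration story concrete, here is a literal-minded decoder sketch in Python for the 64-bit “IP5a” layout exactly as described above (IP5a is Mr. Pragma’s own hypothetical proposal, not a standard; the flag-bit reading is one interpretation of his description):

```python
# 64-bit "IP5a" value: if the least significant bit of the upper 32 bits
# ("bit 31") is 0, the lower 32 bits are a full (IPv4-style) address;
# if it is 1, the upper 32 bits plus the next 16 bits form a 48-bit
# address and the final 16 bits are an embedded port number.

def decode_ip5a(value: int):
    hi, lo = value >> 32, value & 0xFFFFFFFF
    if hi & 1:                              # flag set: 48-bit addr + port
        addr = (hi << 16) | (lo >> 16)
        return addr, lo & 0xFFFF
    return lo, None                         # full 32-bit address, no port

def dotted(addr: int, nbytes: int) -> str:
    return ".".join(str((addr >> (8 * i)) & 0xFF)
                    for i in reversed(range(nbytes)))

# IP4 address 100.101.102.103 as IP5a 0.0.0.0.100.101.102.103:
a, p = decode_ip5a(0x0000000064656667)
print(dotted(a, 4), p)    # 100.101.102.103 None

# IP5a 100.100.100.1.100.101.0.80 -> 48-bit address plus port 80:
a, p = decode_ip5a(0x6464640164650050)
print(dotted(a, 6), p)    # 100.100.100.1.100.101 80
```

One quirk of this reading: the flag doubles as the low bit of the fourth address byte, so addresses whose fourth byte is even could never carry an embedded port. The example above works because that byte happens to be 1.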

Wrath of the cybertimmies May 26, 2014 9:22 AM

oooh, now here’s skeptical with the lame try at cyber-intimidation! He must of used the terrifying Argus-eyed power of the totalitarian state he controls to catch me in my black socks with homely hookers at the Emperor’s Club the other night, I am ruined!!!11!

FAIL. Stick with the priggish sniffy condescension and avoiding all mention of NSA complicity in murder, torture, and sleazy kompromat.

Skeptical May 26, 2014 10:18 AM

A few responses to comments by

Clive,
Pragma,
Mike the goat

@Clive: One of the major but unstated reasons for the Iraq war was that, tired of US sanctions, Saddam had approached several EU countries with a deal to sell Iraqi oil only in Euros, in return for them lifting sanctions. Had this happened, the US$ would no longer have been the major reserve currency, the Euro would, and traders would have revalued the US$ down to a fraction of its value. Thus the US had to stop Saddam’s plan at all costs, and throwing a few trillion at a war to remove him was a good investment.

I say this respectfully (I sometimes exaggerate my ignorance of certain technical matters, and I appreciate your remarks in that vein; I also frequently find your remarks on international relations to be insightful or stimulating), and I am speaking only of the theory you put forward:

That is complete nonsense. Hussein shifting to selling Iraqi oil for Euros would have no more effect on the status of the USD as a reserve currency (or the EUR as an alternative) than would Hussein’s adoption of a Dvorak keyboard layout (assume for the hypothetical that Iraq’s native tongue is US English!) threaten the status of Qwerty.

The reasons for what in the US is frequently called the Iraq War are complex, but currency plays absolutely no role in it. I can say this unequivocally, without any reservation or doubt. I would be happy to address any questions or doubts that you may have about this.

Here is a paper by Barry Eichengreen, who is a very well respected economist at UC Berkeley, on the history of reserve currencies. It’s a quick read, and you don’t need to understand the details to understand the overarching themes.

Further adding to US woes is what the Fed has been doing with other people’s gold: it has in effect stolen title to it and used it to further support the US$, which is why the Fed is not letting other countries have their gold back… The simple fact is that in recent times gold has been appearing as if by magic; that is, there are more gold certificates out there than there is mined gold, and people who ask questions don’t get answers that resolve this disparity [1]. It has not helped that some bullion turned out to have been adulterated with cheap tungsten slugs, which were difficult to spot with conventional non-destructive assaying techniques [2].

I want to reiterate my remark earlier about respect, and to emphasize that the following remarks are only with respect to the two basic claims in the paragraph:

that is complete nonsense.

First I need to emphasize: gold is not important to currency valuation (thank God) for major currencies. Gold bugs habitually, and with self-interest, push very hard the line that it has “intrinsic value”, and they try very hard to hype any stories about it. One of the blogs you cite, Zero Hedge, is notorious for doing this (and God knows how much money he’s lost as gold has plummeted from its highs a few years ago; it reached the point several months ago where certain gold bugs in Congress, like Ron Paul, attempted to exempt some gold transactions from taxes).

Second, the Federal Reserve hasn’t stolen anyone’s gold, and no central bank anywhere has claimed that it did. There were a few stories in the press arising from accusations by a German parliamentary committee (or members of it, anyway) that the Bundesbank did not follow all of the audit recommendations of a separate agency. The Bundesbank answered them fully, expressed its full and complete satisfaction with, and confidence in, the Federal Reserve’s custodianship of its gold held in New York, and resolved any disagreements with the other agency. It provided a status report on the transport of some of its holdings from New York and Paris in early January, and has granted several interviews to the press on the subject. It is very much a non-story, though one hyped by gold bugs.

Again, Clive, I want to stress that I’m being dismissive of the arguments themselves with respect to Iraq and the dollar’s status as reserve currency and with respect to gold and the Federal Reserve. I always respect and value your views and analysis.

With respect to China, its holdings of USD assets like Treasuries are irrelevant to the issue of commercial espionage. The PRC needs to keep the RMB artificially cheap for the sake of its exports, and selling its USD holdings en masse would make that much more difficult to accomplish. It would also, even as a threat, have no effect on the US, as the Federal Reserve could quite easily absorb Treasuries equivalent to what the PRC could sell (and the Treasuries market is absolutely massive; off the top of my head, I’d guess that the PRC’s entire inventory of Treasuries amounts to one or two weeks’ worth of trading volume). Such a sale could also trigger a loss of confidence in the PRC financially, which could have disastrous effects on the PRC’s economy.

@Pragma: My first thought was also “what a ridiculous bunch of brain dead and ignorant morons the americans are” (again, “the americans” meaning the usa and its agencies – not the single individuals).

You will never approach a sound analysis of international relations with such an attitude.

And, in fact, the Russian and Chines responses to usas wanton sanction bullying made the us attack more expensive and painful to usa and sadly its europeen colonies and vasall states than to Russia or China.

European nations like France and Germany are US “colonies and vasall [sic] states” eh?

US/EU/NATO opposition to Russia’s incursion into Crimea and Eastern Ukraine is “wanton bullying” eh?

“more expensive and painful to usa” eh?

Let me cut through the bluster with some numbers. More capital exited Russia in the first 3 months of 2014 than exited Russia in all of last year. Estimates for Russia’s growth in 2014 sank from around 2% to around 0% because of its actions in Ukraine and international (US + EU) reaction.

The effect on growth estimates for the US and EU has been zero.

Fortunately, in the face of sanctions and resistance, as the outlook for the Russian economy worsened and the counter-formation effects of his actions crystallized (“counter-formation” refers to counter-balancing by one group of nations when another, in some manner, causes a perception of a loss of some amount of security), Putin has backed off his evident plans for Donetsk (influence operations will continue of course, there and across Ukraine, but not at the same level). That should push estimates for Russia’s 2014 growth back to 1 or 2%. Gazprom’s recent deal with China should help at the margins, particularly if the reports of up-front payments by China, for Russia to invest in necessary infrastructure, are close to accurate.

Unlike some here (whom I value, don’t get wrong) who strongly concentrate on maxima, with regard both to secure design and to threats to handle, I’m more interested in pragmatic solutions.

I am of course ignorant of all things technical (I wouldn’t know a p-n junction from a double check valve), but it occurs to me that research on “maxima” can end up producing innovations that eventually flow into what you call the “pragmatic.” To give a very imperfect but sufficiently illuminating example, some of the security features operationalized in more maxima-leaning systems like OpenBSD have found their way into Windows (some as standard features, others if the user adds EMET to the system).

I can think of several companies, and a few government organizations, that would have an interest in funding some of the open hardware projects tilting towards the “maxima” side (provided the projects were brought to a certain stage of maturity). Moreover, as consumers seek to network more devices, “maxima” solutions at a low-level may become quite squarely “pragmatic.”

@Mike the goat: the US gov’t is going to find themselves increasingly isolated, like that nasty kid in grade school who ate their lunch at a deserted table in an otherwise bustling cafeteria.

I think you’re overweighting the Ars headline on the Vegas conferences and underweighting more important factors. Whether the US grants visas to Chinese nationals to attend those two particular conferences is minor (also unlikely to become policy, frankly).

Mr. Pragma May 26, 2014 11:31 AM

Skeptical (May 26, 2014 10:18 AM)

That you tell a lot of propaganda bullshit and paint whatever usa does in nice — if bent and wrong — colours isn’t new.

That you, in fact, consider the us$ “floating” a great thing, something that most reasonable beings, incl. americans, consider a gross evil, does not surprise me (nor do I intend to discuss such matters with you).

Let it suffice to mention that it is of doubtful wisdom to exclude some globally important, major and skillful party from an event that publishes its material globally anyway, and to so provoke that party, which suffers no loss through your decision, into not disclosing its own security-relevant findings.
That’s akin to a manager saying “You may see all my bookkeeping on the internet but you may not look directly at it in my office” and in return getting … uhm … nothing, and no peek at the other party’s papers. Brilliant move!

That you, however, elaborate on matters you yourself admit not to know about (praised be your rare honesty) is a new climax, if on the negative scale.

First you brilliantly repeat what I said in the first place, albeit in other words (I did say that research, incl. the search for maxima, is positive and important).
Obviously it didn’t occur to you to understand the point. Let me put it in a way you might understand:
Let us assume that your password’s stored MD5 hash at X (say Amazon, a bank, …) is, with high probability, comfortably crackable or even already cracked (very low security). Let us further assume that some well known, considerably better and more secure methods (say AES128) are available to secure your password/transaction/etc.
Now, would you prefer a) to have some cryptologists research 1024-bit very-very-high security, which might become widely available in, say, 3 years, or would you rather b) want a reasonably secure and proven solution right now?

Oh and btw: on what basis do you offer OpenBSD as a “maximum leaning” example for security?
A) There is (justified) discussion about how far OpenBSD is more about security theater and PR than about real security (hint: other, less security-noisy BSDs offer security features OpenBSD doesn’t, while offering basically the same security features OpenBSD does).
B) OpenBSD does not even claim to do (any major) security research. They merely state that they try hard to have a secure and safe implementation.

It seems you don’t know about the development of solutions either. Forget your “provided the projects were brought to a certain stage of maturity”.
If a project has reached a “certain stage of maturity” (where “certain stage” means something very different depending on whom you talk to), it hardly needs any state funding anymore. That funding is needed at an early stage.
And btw, we were talking about complex chips here, not about some hobby board. You know, the kind of stuff that is so immensely demanding that someone doing it with a bare 7-digit amount of $ is considered a brilliant hero, because usually those developments are in the 25+ mio $ area.

So I suggest you stay with your usa propaganda and keep away from technical and professional issues.

And now, deeply impressed, I’ll shake in fear together with all the Chinese security guys who may not come to us security conferences. Just to please you I will even try to cry. (In other words: nuland yourself!)

Clive Robinson May 26, 2014 12:04 PM

@Mike the Goat,

With regard to 10.0.0.0/8, I know of two large national networks that use it for their national private network. One has several thousand sites and multiple connections through NAT&PAT firewalls to various public networks and one semi-public network.

Without 10.0.0.0/8 it would be difficult for the organisation to have a national network that functions.

So yes it’s needed for various reasons by certainly more than one national organisation.

Figureitout May 26, 2014 12:09 PM

Mike the goat (the one-horned goat)
–Well of course, it is A LOT easier said than done. Sometimes I think about these things stepping way back… how can we ever trust a worldwide network, or test the output of a chip, when you can’t see into it b/c it’s too cloudy and small? At least people are questioning it, testing it, and thinking about it.

What’s the main thing you always hear about in security? The “professionals” always cry about sh*tty implementations. So I think tutorials going step-by-step thru implementations are a good thing, ignoring the fact that the hardware they run on may insert something or hide something, killing the security.

As to Nick P’s disagreement on the choice of chip, like I said before…why not both and more diversity? I’m patiently waiting for what Nick P’s design will be, but mine will be much simpler and still subject to the usual attacks that need extreme resources to repel. Since Nick P’s going for this new “memory tagging” scheme, it’s new and yet to be tested to see just how much security it actually adds or how easy it can be subverted (and then lead to a worse state, false confidence).

It would be nice if attackers detailed their attacks for me when I release it, though. You would need to be close to me to attack it, as there will be no TCP/IP stack (better watch out if I identify you); my main concern is transferring code and using existing tools, and thus picking up hidden vulnerabilities and some nasty code. Also, it’s convenient that Bruce hosts a Twofish implementation in Z80 ASM. B/c what’s more fun than implementing a crypto algorithm? Implementing one in ASM. It was done by a student as a learning experience, so I’m not expecting it to be perfect at all.

Found 2 interesting links. The first is interesting discussion on a “security thru obscurity” question on obfuscating microcode to render some pre-made attacks null.

http://security.stackexchange.com/questions/29730/processor-microcode-manipulation-to-change-opcodes

Next is an individual I would like to bring to the blog on these hardware security topics, if he isn’t here already (if anyone reading knows him, would you kindly give him a nudge here). He seems to have extensive experience with many types of attacks that I would like to design to mitigate as much as possible, or at least detect. He mentions that one of his first computing experiences involved a Z80, so I’d like to hear his take on that chip choice. One of the attacks I want to mitigate is the venerable “fault-injection” attack, which just sounds evil to me, and which he says needs much more research.

http://www.cl.cam.ac.uk/~sps32/

Mr. Pragma May 26, 2014 12:23 PM

Alex (May 26, 2014 11:57 AM)

Well, that depends obviously on the chip (gate technology, complexity, etc.), on the means available, and on the party trying it. If, for instance, the party is a state and the chip is designed and produced in that state’s jurisdiction, it’s way more feasible than if the party is some John Smith.

Generally speaking though, today’s highly complex chips with nm structures and many internal layers can quite probably be considered unverifiable for virtually any party.

In order to really make sense of, and properly respond to, that question, many details, some of them not even chip-related (e.g. in what environment the chip will be used), must be specified.

Moderator May 26, 2014 12:33 PM

“Cyber-timmie,” some time ago you were asked by other commenters to quit changing your name so much, because it makes conversations annoying to follow. I’m now going to insist: pick one name for this blog and stick to it, or I’m going to start removing your comments. I don’t care if it’s a name you use anywhere else or not, just so long as it’s consistent here.

Note the name you pick may not be an insult to, parody of, or attack on any commenter. This goes for everyone — I’m tired of people using the name field for schoolground name-calling games. It lowers the level of debate, and it’s really not as clever as a few of you seem to think it is. Comments posted under parody/attack names are subject to removal too, and the removal may come without notice, especially if your name is so obnoxious that I don’t care to type it.

Finally — and I’m back to you, Cyber-timmie, though a few others should take note as well — you need to work on your tone if you want to keep commenting here. The kind of over-the-top ranting comments you’ve been leaving here are not impressive or persuasive. Frankly, they just come across as childish. You have important matters to talk about, so talk about them in a way that befits their importance. As it is, you’re hurting your own case more than anything else.

Alex May 26, 2014 12:38 PM

Thanks Mr Pragma, just what I thought: not even state resources can detect several tens of backdoor gates/connections among a few billion… well, unless every Chinese citizen checks a gate….

To Moderator May 26, 2014 1:05 PM

I am also against radical posts, but I would be less concerned about name changing and more focused on NSA agents turning the comments into a pseudo-ultra-high-tech-elevated-philosophical discussion, meaning pages of text without any message, in order to mask the real useful content that used to be here.
It is an obvious attempt to pollute the comments with irrelevant, meaningless text. Feel free to ban me, I also tend to change names.

Mr. Pragma May 26, 2014 1:40 PM

Alex

Hmm, well, kind of …

One cannot stress enough that “security” is not a thing one can buy. It’s not even, as is often said, simply a process. It is, to stick with the process idiom, a process in a defined context/environment with defined vectors (of potential attack, of interests, …).

Maybe an example with reference to your question helps.

Yes, almost nobody can check with any reasonable level of certainty that a chip is “clean”. And don’t worry, I won’t ride the “one should always assume the bad case in security” line (although there is much to it).

But for some, possibly even many, scenarios a “clean chip” is not a conditio sine qua non for security. One example might be you (or some state agency) needing reliably working encryption.
Now, obviously tainting/weakening crypto is a strongly desirable and therefore quite probable attack on chips (and there are known cases).

But there are other ways than having a 100% clean chip, verified to be free of such tainting. One could, for instance, use different (typically 3) chips, say a xeon, an ultrasparc Tx, and a QorIQ P2xxx, have them perform exactly the same algorithm (which is comparably easy to verify), and then compare the results, or actually some millions of results.

The mitigation vector (sorry for my bad english, but I hope I can be understood) is basically based on the assumption that possibly each of the chips’ encryption engines is tainted, but that each one is tainted in (at least slightly) different ways, which different architectures almost guarantee.
A pseudo random generator is a good example, because tainted pseudo randomness is hard to detect and quite valuable to attackers.

This approach is quite feasible even for 3rd world states, both technically and financially, while verifying a halfway modern chip is certainly not. But, of course, that approach works only because the client has quite well defined his sensitivity (in our example: crypto).
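
A sketch of such an N-version cross-check harness (Python; the two “backends” below are placeholders that both call hashlib, whereas in a real deployment each callable would drive a different chip/architecture, e.g. the xeon, ultrasparc and QorIQ mentioned above):

```python
import hashlib
from typing import Callable, Dict, Iterable

def cross_check(backends: Dict[str, Callable[[bytes], bytes]],
                test_vectors: Iterable[bytes]) -> bool:
    """Feed every vector to every backend; flag any disagreement."""
    ok = True
    for tv in test_vectors:
        results = {name: fn(tv) for name, fn in backends.items()}
        if len(set(results.values())) != 1:     # backends diverge
            print("DIVERGENCE on input", tv.hex(), results)
            ok = False
    return ok

backends = {
    "chipA": lambda data: hashlib.sha256(data).digest(),  # stand-in
    "chipB": lambda data: hashlib.sha256(data).digest(),  # stand-in
}
vectors = (i.to_bytes(8, "big") for i in range(10_000))
print("all backends agree:", cross_check(backends, vectors))
```

Note that this only catches taints that show up as differing outputs on the tested inputs; a backdoor triggered by a rare input pattern would need enormous (or cleverly chosen) vector sets to surface.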

Other examples are of a more general nature and, for instance, ask the question of how the adversary will carry away/transmit the sensitive data he has gathered. Here the mitigation approach might be to deny that by relatively simple means and control of the environment (e.g. blocking relevant frequency ranges with adequate faraday cages and/or by coupling a battery, along with some properly tuned (nF/pF) capacitors, into the power loop to the computer). You get the idea …

You may as well use those considerations to judge security advisors/vendors. Good ones will differentiate at least to some reasonable degree, and really good ones will make a detailed analysis of your precise needs, sensitive points/data/processes, potential realistic attackers and attack vectors, etc. Lousy ones, like e.g. AV snake oil sellers, will rather tend to sell all-in-one “we protect everything from everything” packages or “solutions”.

Finally a somewhat funny example: if not commercial factors but security is the priority, you might gain considerable advantage simply by a) not using built-in security instructions (PRGs, AES, etc.) and b) having non-optimized routines with slight unnecessary code portions, so as to avoid the relevant code being auto-detected (“Ah, OK, that’s the PRG”) and then attacked/tainted/…

A good combination of proper analysis, know-how, creativity, and thinking outside the box is not only available (and useful) to the bad guys … 😉

Alex May 26, 2014 1:53 PM

How about having a really simple (checkable) machine or component to perform the encryption, instead of the latest 20 nanometer, billion-gate processor? This could be a fair solution.

Jackson May 26, 2014 3:40 PM

Have I missed a post with a code of conduct for the blog? Probably we should have the code before we enforce it; otherwise any action comes off as arbitrary and biased.

I personally try to change my tone and moniker every time I change my IP address, although not in the middle of a particular post. Well, once. And it was sarcastic and I thought it was funny and I think it made the point more clearly than repetitive posts back and forth with someone who would never acknowledge that air is important to life if denying it served his purpose.

Comedy, parody and sarcasm ARE argument. I don’t think one can fairly claim otherwise. Is humor more childish than continual pompous bloviating? I doubt it. There are many ways to be childish and pointless.

Are we back to the days when Lenny Bruce was censored with exactly the same arguments? Why not? The whole world does seem to be running in reverse and if we are serious enough maybe we’ll be back to world war soon.

Moderator May 26, 2014 4:06 PM

There are many ways to be childish and pointless.

Indeed, and people constantly come up with new ones, making it difficult to enumerate them all. But changing your name in mid conversation is especially annoying — there’ve been complaints about this from people not even involved in the arguments — so I’m now asking people not to do it, even if you think you’ve found one time when it would be really funny. I’m confident this won’t turn Bruce’s comments section into a dry, academic debate.

Wael May 26, 2014 5:30 PM

@Nick P, @Mike The Goat,

It might be an app that signs stuff with your private key.– Nick P

And

which houses my keys and other private data. — Mike The Goat

You reminded me of something… Suppose Alice has a symmetric key. Bob sends Alice some number, say RN1. Alice encrypts RN1 with her symmetric key, K, then sends the output, C1, to Bob. At another time, Bob sends RN2 to Alice; she encrypts it with her symmetric key, resulting in C2, which she sends to Bob. Can Bob, who doesn’t know Alice’s key, infer with any degree of confidence that C1 and C2 were encrypted with the same key? It’s a form of chosen-plaintext attack, but the object isn’t to extract the key, rather to see if the same key is used.
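
A small experiment around that question (a sketch, assuming Alice encrypts a single 16-byte block with raw AES-ECB; the mode is an assumption, not stated above; requires the third-party cryptography package):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def alice_respond(key: bytes, challenge: bytes) -> bytes:
    """Alice's oracle: deterministic single-block encryption."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(challenge) + enc.finalize()

key = os.urandom(16)
rn1, rn2 = os.urandom(16), os.urandom(16)

# Two *different* challenges: for an ideal cipher the pair (C1, C2)
# alone tells Bob essentially nothing about whether one key was used.
c1, c2 = alice_respond(key, rn1), alice_respond(key, rn2)
print(c1 != c2)                        # True

# But if Bob may *replay* RN1, determinism makes key stability visible:
print(alice_respond(key, rn1) == c1)   # True iff Alice kept the same key
```

So with distinct challenges and a sound block cipher, Bob should gain no confidence either way; but the moment the scheme is deterministic and Bob can repeat a challenge, he learns whether the key changed.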

@Jackson,

Comedy, parody and sarcasm ARE argument. I don’t think one can fairly claim otherwise.

I agree with that sentence, given the style isn’t used excessively.

I personally try to change my tone and moniker every time I change my IP address,

I never understood why people would change their handles. But you bring up a good point: why do you change your name every time your IP changes? Security? In that case, you might want to consider configuring a server that hands you IP addresses along with a unique sock puppet, call it DSCP (Dynamic Sockpuppet Configuration Protocol), lest you forget to change your moniker :)

Nick P May 26, 2014 8:50 PM

@ Iain Moffat

It could have been. Trips me out that they had so many more options in how to build hardware. Of course, given the difference between hardware then and hardware now, fewer options was probably the better choice. Older stuff was more interesting, though, as you learn what they built and how they built it.

@ Mike the Goat

“the decision to equip the horn was not an easy one, but it had to be done.”

Maybe I’m just slow from all the work I’ve been doing recently [unrelated to security projects], but I’m not sure what the horn represents. My initial impression was that it signals being especially serious about a comment’s content. Your elaborating on it makes me think you’re talking about a guard, your SPARC machine, or a combo of them. That it comes on and off adds a layer of “huh” on top of that. I’ll move on to the other stuff you said, haha.

“The plan was to have the firmware of the second horn authored by a different (and mutually untrusting) group. ”

I’d have done it if I had firmware expertise. You can always use something like Coreboot or an open source release of Open Firmware. The former has a trusted boot capability, while the latter was used by Sun and has open versions online. Test it to see if it works. If it does, trim out everything you don’t need. Then, verify what it has.

“I imagine a ‘trustedPGP’ type project would have to limit the selection of ciphers and cull some of the lesser used features to reduce the size of the code base and make auditing and/or shoring things up more practical. ”

That’s what I was thinking. The NaCl and Ethos projects were taking this approach with good results so far. Keeping it simple with a secure default that works for most use cases will increase adoption as well as assurance. Developers prefer things they can use correctly with little thought.

“but as I suspect you are inferring the cons of using an 8-bit processor are numerous, especially if such a slow chip is going to be doing crypto. I can just imagine key generation! I agree that a RISCy processor would be a better and more flexible choice.”

The main drawback is that it has all the key architectural weaknesses that lead to code injection, while having less performance for mitigation. It’s about the worst situation a modern coder can be in as far as making something correct, usable, and secure. Hence my focus on chips whose assembler bakes in safety, security, etc. with minimal impact on other metrics. The tagged (i.e. typed) and capability-based systems seem ideal. Those with memory authentication/encryption, segmentation, randomized ISAs, etc. are the next bet. A JX- or SPIN-style system that runs most of the OS and apps within a safe runtime is the next bet after that. An obscure OS & chip without persistent storage connected through a guard is… probably not secure against targeted attack, and is a last resort. (Hilariously enough, it was also our first option.)

Note: The DARPA SAFE team has extended their tagging unit concept into a Programmable Unit for Metadata Processing. Paper.

So, that’s my look at it. Not to mention we have open processor cores and academic ones that might be free/cheap to license. The Amber ARM v2, Plasma MIPS, JOP, and OpenRISC processors all come to mind. The Plasma site runs on Plasma, actually. OpenRISC is the fastest, I think. The VAMP DLX processor was formally verified and runs at 10MHz in its basic configuration. I’m sure the tech I mentioned can be added to open designs like these, as most of it was already implemented on similar chips by small academic teams. It just takes people who know what they’re doing.

Btw, I do appreciate getting some credit on your blog for the comment signature schemes. 🙂

@ all re IPv6

It was a botched implementation, but at least they tried to do something. It’s so hard to implement a big change like that, so I’d rather it be a nearly future-proof solution. The 48-bit idea might have been sufficient if the address were only used for identifying machines or users. The reason 64-bit is favored is the “Internet of Things” and other stuff that might use a ridiculous number of names. Additionally, it fits in the processors, as others have noted. Although the use of customized network processors sort of defeated the need for that, as they changed processors to fit the problem rather than the other way around.

I’d also look into changing the scheme from the prefixed scheme to another one. The scheme they use makes problems like organizations buying up class A’s easier. That issue shouldn’t even exist. It might be as simple as changing it so that IP’s solely represent how to get the packet to a specific device. Then, devices are issued individual IP’s, with prefixes being nothing but router aids. Nobody buys a whole class of IP’s: just the individual ones they need. Then, a length increase to 48 or 64 bits prevents running out. The logical identity behind an IP is handled by another protocol(s), facilitating persistence across networks & certain security designs.

@ Figureitout

“So I think tutorials going step-by-step thru implementations are a good thing, ignoring the fact that the hardware they run on may insert something or hide something, killing the security.”

Exactly.

“I’m patiently waiting for what Nick P’s design will be, but mine will be much simpler and still subject to the usual attacks that need extreme resources to repel. ”

As I said, there are several in parallel with different uses. The ones for desktops just leverage processors similar to CHERI or SAFE, with an added IO coprocessor in the most basic form. This prevents huge classes of errors, the biggest being data running as code. Only a select few components need to be trusted for that, versus… a ridiculous amount of code in most systems. Building a usable, secure system on hardware prone to attacks is actually tons harder than building more secure hardware and leveraging it. If anything, I’m doing it to save myself work and so amateurs can build software without code injections popping up all over the place.

“Since Nick P’s going for this new “memory tagging” scheme, it’s new and yet to be tested to see just how much security it actually adds or how easy it can be subverted (and then lead to a worse state, false confidence).”

It goes back to the 60’s. The concept was used in numerous mainframes, prototypes in production, and secure system designs. That I recall, none were ever compromised by an attack at that level. The argument for their security is simple: so long as types and rules are defined right, it will probably work without bypasses, as it’s enforced on every instruction. That’s the key advantage of these architectures.

The IBM i series is the only survivor in the market and has a good security track record as far as code injections are concerned, despite not doing it truly in hardware anymore. Most attacks on them are due to configuration errors or application weaknesses. They’re also highly reliable, fast, support key standards/techs, and mostly manage themselves. So, doing architecture well benefits more than security, and one of the best business machines in all traits uses a typed object architecture. That said, I’m pushing this as evidence that these architectures work, rather than as what to design. It’s too complicated for a small, amateur effort. The other stuff I mentioned wasn’t, though.

“Found 2 interesting links. The first is interesting discussion on a “security thru obscurity” question on obfuscating microcode to render some pre-made attacks null.”

I’ve posted that solution here before. The people answering were smart about hardware, but not the topic area. I posted an answer of my own. Thanks for the link and the opportunity: a student might read it somewhere, then produce what was described, to many people’s benefit.

“He seems to have extensive experience with many types of attacks that I would like to design to mitigate as much as possible, or at least detect.”

It’s Bruce’s blog, and he’s for anyone contributing intelligent discussion. I like his stance. So, bring him. He can like us, hate us, anything in between… who cares, so long as he might contribute something useful to the discussions and the field. (eg the reason I’m here)

The guy you linked to seems pretty awesome. The only concern I’d have is Clive scaring him off with a rant about him developing fault-injection attacks before most others, maybe being ripped off by Anderson, etc. He brings it up a lot. Anyway, the fact that Skorobogatov is one of the main guys in that field at Cambridge makes that a risk. Yet Clive also regularly gives them credit for good work and comments on Anderson’s blog (lightbluetouchpaper). So, who knows. We’ll let Clive tell us whether or not he’d make a big deal out of it if Skorobogatov joins us in discussions on those topics.

I’ll add that fault-injection needs plenty more research. However, most of the stuff he described can be beaten in typical systems by using methods such as filters and EMSEC shields. These systems’ main benefit is to reduce risk when the enemy is physically right on the systems, esp. in the case of insiders (access to the computer room) or easily stolen property (eg smartcards). Chips that might have secure, immutable software could still be beaten with attacks like he described. And his field has produced plenty of useful results on the defence side. So, it’s worth more investigation.

@ Alex

“How about having a really simple (checkable) machine or component to perform the encryption, instead of the latest 20 nanometer, billion-gate processor? This could be a fair solution.”

It doesn’t work. A guy here specializing in chip creation, verification and security laughed it off. He pointed out that tools that verify larger structures can’t even see the smaller ones. The simplest attack is swapping one chip for another with identical external properties, if the attacker thinks your chip won’t be checked. The more advanced way is to embed the tiny logic in the larger chip during manufacture. If you’re not checking it at nanoscale, how would you even know it happened, much less prevent it? And ensuring non-subverted chip-making equipment was similarly impractical, as it too could be swapped or manipulated to do this stuff.

His meme for this was that once a silicon capability has been invented, you can’t uninvent it for your verification needs. If it can be used against you, it will.

@ Skeptical

“we need truly secure networks and systems, and that means security for everyone). Mind you, I’m not totally convinced by it (yet), since I do not think it’s important that lawful surveillance be possible to implement, but there’s no doubt that such an argument would be far more powerful.”

That’s a nice concession. I proposed here a high assurance lawful intercept system that protected both ends’ needs, while restricting LEO access to just what’s specified by a warrant. The NSA is opposed to such a design, as they want “collect it all” surveillance. Their true coercion powers are unknown at this point. However, the scheme should work for any targeted order. I even added NSA-sponsored tech & evaluation by an NSA-approved lab to my scheme, as a preemption of any claim the Feds make to a court about the scheme being untrustworthy. Icing on the cake.

So, I’ll compromise, and so will many Americans. “Secure us against everyone else and safeguard us a bit against the US govt,” we might say. Purists will argue, but lawful intercept is the law. Noncompliant services will be shut down. Additionally, TLA’s are already accessing many machines and accounts of people wanting privacy. I think it would be better to get something secure with a highly-assured, auditable backdoor than something full of backdoors & weaknesses one doesn’t know about, where the only assurance is that they’ll be exploited.

However, the NSA wants full info, control, and the ability to do it covertly with ease. So, they won’t go for it. Their hand has to be forced by lawmakers. Of course, this doesn’t stop privacy advocates from working on non-LI tech in parallel. People will just use it at their own legal risk. Services wanting to be here & stay in business can use a method that’s safer in court.

Nick P May 26, 2014 8:59 PM

@ Wael

re key reuse

There’s a lot of research showing it destroys stream ciphers and OTPs. There were issues with it in public key crypto that led to countermeasures. I’m not sure about block ciphers. I’d ask a cryptographer who was involved in actual breaking efforts, such as the AES competition. (Maybe Bruce will shed light.) Be sure to be clear that you’re not talking about one-time pads or stream ciphers. You want to know about block ciphers, with the mode being a common one.

Note: they also have an initialization vector that must change much like stream ciphers’ keys.

My policy has been “Why take the risk?” Additionally, there are tricks like generating temporary keys using master keys combined with public data (eg a counter or nonce). So, even without generation or exchange of random numbers for sessions, one can get pretty far with simple mechanisms, so long as a pre-shared master secret exists.
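
A minimal sketch of that master-key-plus-public-data trick, using HMAC-SHA256 as the PRF (stdlib only; the key names and label strings are illustrative, not any particular protocol):

```python
import hmac, hashlib, os

def session_key(master: bytes, nonce: bytes, label: bytes) -> bytes:
    # Label gives domain separation: distinct purposes, distinct keys.
    return hmac.new(master, label + nonce, hashlib.sha256).digest()

master = os.urandom(32)     # the pre-shared master secret
nonce = os.urandom(16)      # public, can be sent in the clear

k_enc = session_key(master, nonce, b"enc")
k_mac = session_key(master, nonce, b"mac")
assert k_enc != k_mac       # same inputs, different labels
```

Both sides compute the same keys from the shared master plus the public nonce, so no per-session exchange of random secrets is needed.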

Wael May 26, 2014 9:19 PM

@ Nick P,

Spot on! The IV is not confidential, but it should not be static; it should be used as a nonce. The reason I asked the question is that I have seen implementations where the initialization vector is static, plus the clear text to be encrypted is less than or equal to the block size…

so long as a pre-shared master secret exists.

It does not exist, that’s the thing…
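
To illustrate the static-IV failure mode Wael describes: CBC with a fixed IV on a message of at most one block is fully deterministic, so equal plaintexts give equal ciphertexts and an eavesdropper learns about repeats. A sketch (requires the third-party cryptography package; the message text is arbitrary):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, iv: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(block) + enc.finalize()

key, static_iv = os.urandom(16), os.urandom(16)
pt = b"sixteen byte msg"                     # exactly one AES block

print(cbc_encrypt(key, static_iv, pt) ==
      cbc_encrypt(key, static_iv, pt))       # True: repeats are visible

print(cbc_encrypt(key, os.urandom(16), pt) ==
      cbc_encrypt(key, os.urandom(16), pt))  # False (overwhelmingly)
```

A fresh random (or otherwise never-repeating) IV per message restores the hiding, which is exactly the “use it as a nonce” point above.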

Thoth May 26, 2014 11:43 PM

Regarding lawful backdoors, it’s a dream to build a highly secured and auditable backdoor where both the user of the device or system being monitored and the systems and/or people monitoring can audit and ensure the safety and accountability of their actions within the scope of the law.

One weakness that technology cannot change is over-arching laws. Government agencies want unrestricted access in most cases to facilitate their investigations, and probably to hoard up some clues for future use. Laws need to keep up with technology to keep power in check.

What is worrying is that if you give agencies a tiny door to work with, they will find ways to expand the door, and soon the door doesn’t exist anymore, in an attempt to “know it all”. This does not apply simply to the NSA/FBI/CIA/DOD… any agency around the world with a budget and with such access would use (and may abuse) it.

On the conservative side, I personally feel that it’s best to not make any doors at all. Agencies should do their best at HUMINT operations, trying to gain the confidence of their targets and getting their targets to willingly divulge secrets the good old human espionage way. SIGINT has made it easier to collect data because of how technology is made, in a “trust everyone” manner; security was not part of the design until later on. All it requires is agents in the back offices of AT&T or whatever Internet backbone centers, connecting their cables and filtering. HUMINT has devolved and become less relevant because we are mostly online and SIGINT is much cheaper and safer than HUMINT (where your agents might face physical harm); thus HUMINT may have lost a good amount of its edge in the Internet era, in my view, but if properly implemented it can still be effective (social engineering).

Should we backdoor systems and devices for legal intercept? I feel laws and agencies are not ready yet. They are just relying on SIGINT to make work easier, as HUMINT is more tedious/expensive/dangerous.

Anura May 27, 2014 1:15 AM

re: IPv6

I think 128-bit makes sense over 64-bit, and I don’t think a larger address space results in significantly increased complexity. The problem is not just how many addresses you use, but how to allocate them. It takes a lot less effort to figure out how to allocate 128-bit addresses than 64-bit addresses. The thing is, you need extra overhead at multiple levels, just for organizational purposes.

At the root level, you want to be able to segregate blocks into different purposes – private address space, loopback, infrastructure, etc. and you need overhead to make sure that’s always going to be enough for each purpose, and then you also want enough overhead in unallocated blocks so that those can be reserved for future use as well. For the regional authorities, they need enough address space to organize their blocks into different organizational purposes as needed. The ISPs need to have blocks that they can organize, and end-use organizations need to be able to organize the addresses they are assigned. Even home users want to be able to have multiple devices, and possibly multiple services on each device, each with their own unique IP address without having to resort to NAT.

At each layer, you want significant overhead so that you not only have room to add more devices, but also have most of your addresses unallocated just so you have room to allocate blocks for future purposes that didn’t exist when you first built out your network. This saves you from significant effort in reorganizing your network in the future due to growth (assuming you weren’t stupid about it). This infrastructure will be around for decades, and who knows how we are going to be allocating addresses in 50 years.

Now, is assigning a /56 or /64 to each home excessive? Absolutely. Personally I would go for /96s for homes, /80s for small organizations, and /64s for large organizations, with /40 to /56 for ISPs, and any larger blocks used for reserved purposes or allocated to regional authorities.
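As a rough sanity check of those numbers (a sketch only; the prefix lengths are the ones suggested above, not any registry policy):

    # How many child blocks each tier of the suggested hierarchy contains.
    def children(parent_prefix, child_prefix):
        return 2 ** (child_prefix - parent_prefix)

    print(children(40, 96))    # /96 homes per /40 ISP block: 2**56
    print(children(96, 128))   # addresses inside one /96 home: 2**32
    print(children(64, 128))   # for comparison, addresses in a /64: 2**64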

yesme May 27, 2014 1:50 AM

Re IPv6

With IPv4 we managed rather well for ~20 years with only 32 bits. Unless the internet of things really takes off and everyone has millions of devices around, I think 64 bits is still more than enough.

And if I were in charge (…) I would get rid of the port numbering system.

Plan 9 showed us that even distributed computing works very well without them.

Clive Robinson May 27, 2014 2:24 AM

@Nick P,

Your posts are getting longer these days, whilst mine are slowly getting shorter; is it time to pass the crown? 🙂

My annoyance over active fault injection is not with people doing academic research, but with the academic research NOT done over the past thirty years.

Back in the 1980s things were considerably simpler because circuits were simpler, larger and ran at speeds well within the spec of test equipment a single research budget could afford. In those thirty years private/government researchers have been busy.

Now I don't know who first used the technique; I certainly thought it up independently. Running into a major "don't talk wall" (back in the days when I still had some respect for the MIs and Gov, prior to Maggie Thatcher) did not help, as I'm at heart a problem solver, not an abuser of vulnerabilities. In the 90s it was obvious that many of the secret TEMPEST requirements were being put into EMC requirements, in the process closing down many "old faithfuls" of the security services.

Academic interest kicked off with smart cards: it was obvious to those who could see or tinker that they had real security problems, and the discovery of key-leaking side channels in the power supply kindled academic interest, which resulted in a splash over DPA. I emailed the person whose name appeared on the paper and outlined using RF carriers not only as a way to get the same information out of the chips as DPA, but without having to connect to the circuit and from a reasonable distance; I also mentioned it was a two-way process by which you could generate fault conditions. BUT then someone decided to get broad patents in the US and dropped heavy-handed hints to the research community that he had rights, and almost overnight research interest waned… and that was the way it stayed. Until fairly recently, when there was a hopeful blip over at the Cambridge Labs, where the entropy of a TRNG was reduced from 2^32 to around 2^7 by squirting an unmodulated carrier at it.

If you look back over this blog you will see that RobertT was aware of the same sorts of tricks, having independently discovered them as well back in the day.

Further, my own gentle enquiries with the likes of Tony Sale and others who had worked in Gov research make me think that injecting EM carriers into equipment to get signals out, or to inject faults, was known as far back as the 50s/60s and possibly a lot earlier due to RADAR research side effects. The principle of injection locking was certainly well known to BBC engineers back then, as it was used in PAL to solve one of the NTSC issues.

As I've said all along, I want researchers in the academic area to work on it, as some of us old timers are either dying off or still gagged by either Govs or Corps.

Mr. Pragma May 27, 2014 2:33 AM

Nick P (May 26, 2014 8:50 PM)

Sorry, but while I value and agree with most of your points of view, I feel that you've gotten the IPv6 issue quite wrong.

Actually, as I've shown, changing to a larger scheme is not difficult, if done right and sensibly.
It is IPv6 throwing out and changing much more than the address pool size, doing so without real need, and doing it in sometimes weird ways and new notations, that makes the IPv6 change difficult.

I agree with those who dislike potentially too-tight limits, but going way beyond anything reasonably imaginable, into numbers that dwarf any imaginable human population or device count by dozens of orders of magnitude, is just nonsensical and adds a major burden without gaining anything.
In short, IPv6 has no raison d'être unless we expect single atoms in other galaxies to need an IP, and we are willing to pay a hefty price, also in terms of risks (as e.g. router-discovery-related attacks have shown).

But I'm opposing IPv6 also because the whole "a gazillion IPs isn't enough, we need umpti-gazillions for IoT and whatnot" argument goes against everything we've learned about networks. A network is not about gazillions of items without order, but about having items grouped, and those groups grouped and ordered again, too.
It's just nonsensical to say that Jane might need 3 billion IPs for all the toasters, sensors, TV sets, etc, etc, and at the same time teach Jane that all those things should be properly grouped into VLANs, diverse router groups, etc.

Even if all or most of Jane's IoT thingies needed internet connections (which would very likely open cans of worms), or, God forbid, all her thingies were directly reachable from the internet, she would not need millions or billions of public, routable IPs.
With decades of solid and established experience we can say with confidence that Jane could, and almost certainly should, meet her needs with private, non-routed IPs for the major part and only a few public IPs.

Clive Robinson May 27, 2014 3:51 AM

@Mr. Pragma, Nick P,

Whilst I understand there is some real weirdness behind IPv6, it is important to realise that the address size matters.

The hard part is working out what is sufficient. IPv4's 32 bits are clearly insufficient, made worse by the original class rules. Likewise 128 bits is clearly too much when you consider the Internet of Things (IoT), where the majority of hosts will be inside a 1 USD SoC.

That raises the question of lower and upper bounds. A simple lower bound would be 40 bits: for a world population of less than 8 billion, that gives each individual at least 128 backbone host addresses.

However this does not allow for subnetting issues, where the smallest subnet is going to be four addresses of which only two are usable. Then with other "class", "region" and "organisational" routing issues you are looking at adding upwards of 30 bits, which is a problem in as much as it's over the convenient 64-bit power-of-two size.
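The arithmetic behind that estimate is easy to check (a sketch only; the ~30-bit routing overhead is the figure assumed above):

    # Lower-bound bit budget sketched above.
    population = 8_000_000_000
    print(2**40 // population)   # ~137 backbone host addresses per person
    print(40 + 30)               # with ~30 bits of routing overhead: 70 bits,
                                 # already past the convenient 64-bit size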

However this is based on the quaint notion of one address per host, unique across the entire network, which in reality never happens and is thus unnecessary, as can be seen from NAT and the private network classes.

There is also an elephant in the room which mobile communications has brought to the fore, even though it is not talked about. Whatever solution is chosen, it has to allow for all hosts to move completely freely in space, at any time or speed, without the need to change addresses as they go. As far as I am aware this is still an open issue, which makes routing issues more important than scaling; trying to decide on the number of address bits first is a little like putting the cart before the horse.

Mr. Pragma May 27, 2014 5:49 AM

Clive Robinson (May 27, 2014 3:51 AM)

With all well-deserved respect, I feel that you just repeated the cardinal error of the ill-advised IPv6 people by mingling concepts.

Client and routing questions have little to do with IP addresses. IP addresses are about unique identifiers which support grouping ("nets and subnets") and routing.

To offer a very simplistic analogue, web sites have long had the client/locality problem, and it was solved by mechanisms like user IDs and logins.
There are better and less complex mechanisms available than completely fu**ing up IP addressing and forcing related but separate issues like client locality into IP addressing.

The problem we experience is not a lack of client-locality solutions. Those zillions of mobiles are happily working today and could continue to do so for many, many decades.

The problem for which a solution was needed, and that is the very raison d'être for thinking about IPn, is that we're close to having the pool of IPv4 addresses depleted.

So the priority (or even the only) question to answer is "How do we make more IP addresses available?" (preferably with a sensible reserve for future needs).

As there are billions of users and devices out there in productive and active use, some of them vital, a second priority immediately jumps at us: whatever we come up with had better not disturb or disrupt what is currently in use.

As for the adequate size: we will certainly not equip every single atom in our galaxy with an IP, hence 128 bits is grossly oversized, while 64 bits, allowing over two billion IPs for each and every human alive, seems plentiful. Considering that bits (in hardware and processing terms) don't come for free, it seems advisable to choose an IP pool size that is generous and provides plenty of reserve, yet is not unnecessarily oversized. Ergo: 64 bits.

Unfortunately we have still not achieved a good level of security with the current, comparatively simple IPv4 networks (incl. routing, etc.). Why on earth should we introduce new groups of problems and vulnerabilities by generously mingling problem domains and having IP addresses take care of problems like client location or router discovery?

Sorry, but IPv6 demonstrates a gross lack of logic and reason and is simply lousy engineering. The major interests actually served by it are business interests, black hats, and the cyber mafia.

Mr. Pragma May 27, 2014 8:49 AM

Dear @Moderator(s)

While I understand your dislike of nickname games (which I dislike, too), I'm frankly more disturbed by the many spam messages on this blog.

The comments of "social gamers" may or may not be interesting per se, but spam is definitely distracting and unwanted.

I would like to see swifter reactions to spam from you.

Benni May 27, 2014 9:03 AM

Interview with Edward Snowden in the German press. He says he is eager to come to Germany:

http://goo.gl/Kvyomn He notes that a main part of his work at the NSA had to do with analyzing German communications.

Snowden apparently led a group for information collection and analysis which massively collected information on Germans. He says that at the NSA he was considered one of the best analysts and counterespionage experts they had.

The German secret service reportedly works similarly to the NSA, collecting billions of emails, and has full access to NSA's XKeyscore program.

Snowden says that the BND is still keeping secret facts that would embarrass the general public…

He notes that
“Digital traces tell an analyst where you live, how fast you drive, whom you vote for, who you love, and if I connect the data, I can find out when you went to bed and with whom”

The headline reads Snowden: “I know when you went to bed, and with whom!”

koita nehaloti May 27, 2014 2:12 PM

Commenters here do not seem to know about one piece of 1930s tech that is very relevant for discussions about verifying chips:

https://en.wikipedia.org/wiki/Electron_microscope

50 picometers of resolution is more than enough for current chips; a much worse and cheaper microscope can be used.

Putting the chip in an acid bath of precise concentration for a precise time will reveal any layer needed. The microscope mapping can be done over 100 layers by using the acid bath 100 times if necessary, with different acids and temperatures for different layers of the chip.

It is possible that time on those microscopes can be rented for a reasonable price. If not, people have built them from common parts.

Also, using the magnetic head of a hard disk with custom electronics, or something similar, might allow a small group to reach 10-nanometer accuracy with a mapping device that could map a chip in a day, a month or a year.

The result is either a grayscale picture or a color picture with 2 or more color channels from different measurements, layers and treatments. Converting that to a layout flowchart needs a program somewhat like those that convert pictures of text to text files (OCR).

Then the chip can be simulated. Let's say the chip's backdoor has a secret-knock checker that gets a copy of everything the square-root function gets, and some sequence of numbers is the secret knock. An automated backdoor finder can spot that knock checker, because no normal input to that part ever leads to any change in chip output.
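A toy version of that automated finder might look like the sketch below. Everything here is hypothetical: the "chip" is a made-up function with a hidden knock-checker, and real netlist analysis is far more involved. The idea is simply that random, normal-looking inputs essentially never trip the knock, so a node that never influences the output is suspicious:

    # Toy "suspicious logic" probe: does the knock input ever change the
    # output under random stimulus? (All names and values are made up.)
    import random

    def chip(x, y, knock):
        armed = (knock == 0xDEADBEEF)   # hidden knock-checker
        out = x * y                     # the chip's normal function
        if armed:
            out ^= 1                    # backdoor effect, never seen normally
        return out

    influenced = False
    for _ in range(100_000):
        x, y = random.randrange(2**16), random.randrange(2**16)
        knock = random.randrange(2**32)            # "normal" input traffic
        if chip(x, y, knock) != chip(x, y, 0):
            influenced = True
    # Almost surely False: the knock node never affected the output,
    # which is exactly what flags it for closer inspection.
    print("knock input ever changed output:", influenced)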

Nick P May 27, 2014 2:33 PM

I decided to take a closer look at the IPv6 situation. I figured some of my points might be off as I haven't used it in years. My first mistake is that I thought it was 64-bit addresses: it's 128-bit. That's indeed ridiculous, because 64-bit is vastly more than enough. It's hard to believe anything over 64-bit is necessary given that supercomputing vendors believe 64-bit will be enough for them for decades.

So, let’s look at potential benefits to be fair. The Wikipedia article list these:

  1. Address space issues for 32-bit. The main IPv6 benefit here is that it makes end-to-end reachability possible without worrying about issues such as NAT traversal. NAT approaches simply cause too many headaches to justify: there are entire libraries and protocols designed just to deal with this. One little change in IP and all that can start going away. So, I'm for a change that eliminates both address exhaustion and NAT as a necessity. I'm including ports; they cause headaches for me.
  2. Multicasting instead of broadcasting. This is useful and many IPv4 networks are using hacked solutions to achieve it. That’s usually a sign that the protocol should be changed. I’m not saying it’s necessary: just convenient for many IP networks.
  3. Stateless address autoconfiguration. Automatic host configuration. So far has been an added layer on top of IPv4 layer, although it’s an IP-specific concern. Putting a form of it into IP itself makes sense. My own preference is to isolate such stuff for security/reliability purposes. Most networking stacks don’t care about my preferences, though.
  4. Network-layer security. Building security into the network layer rather than as an optional layer on top has benefits. Many think it's the only sensible thing to do. Others say it's unnecessary, or just hate IPsec. I have no comment except to say that putting protection features into IP by default makes a lot of sense for many reasons.
  5. Packet processing by routers. Simplified header, no fragmentation, and no checksum. These should improve router efficiency.

  6. Mobile IPv6 routing is better than mobile IPv4 routing. There’s a lot of academic research and commercial work in this area. It would be nice to have a standard in most devices that solves this problem. Although, this is lower priority to me.

  7. Options extensibility. Reduce odds of future protocol rewrite. I’ve always pushed for making protocols easier to upgrade. The alternative is hacked solutions on top, bottom and all around them. HTTP & FTP come to mind. So, extensibility is a necessity.

  8. Jumbograms. Improve performance over certain links. A convenience feature for the big pipes I imagine.

So, quite a few benefits for endpoints and middlemen. The biggest problem here, as Mr Pragma notes, is that they're practically throwing away all the old stuff and forcing a complete redesign of the network. The best criticism of that is by Bernstein:

http://cr.yp.to/djbdns/ipv6mess.html

Bernstein, Pragma, and I all agree that the IPv6 solution is harder to adopt than an IPv4 replacement should be. I could easily imagine extensions to IPv4 that extend address space, enable efficient routing, etc. without causing these problems. The main goal should’ve been compatibility, solving most critical problems, and forced yet non-disruptive migration. IPv6 doesn’t meet these requirements. Instead, it forces an all or nothing game on the Internet that few are willing to play.

That’s why it’s a failure.

@ Mr Pragma

There were alternatives that tried to redo things in a way that would make distributed computing much easier. Tanenbaum et al’s Globe project is a nice idea. There were even papers showing how it could do Web stuff better than the web, along with distributed applications. And it didn’t need IPv6 to accomplish such things. I say if someone wants to throw much of the Internet out they should (like Globe team) offer a compelling reason to do it. Otherwise, they should just fix it while changing as little of it as possible.

Skeptical May 27, 2014 2:51 PM

@Pragma: That you tell a lot of propaganda bullshit and paint whatever usa does in nice — if bent and wrong — colours isn’t new.

This is merely more ad hominem and not a response to the analysis I gave in my comment. Can you be more specific and substantive as to what you disagreed with?

That you, in fact, consider the us$ to be “floating” a great thing, something that by most reasonable beings, incl. americans, is considered a gross evil, does not surprise me (nor do I intend to discuss such matters with you).

Whether a currency is pegged to gold and whether it is floating are not quite the same thing. Here’s Keynes on gold and Krugman on exchange rates.

I don’t know of any polling data on what most Americans think about floating exchange rates, though Ron Paul erroneously claimed that a majority supported a return to a gold standard in 2012.

Let it suffice to mention that it is of doubtful wisdom to exclude some globally important major and skillful party from an event […]

Quite possibly, yes, though there are dimensions to such a move from a political/PR vantage which you may not be assessing, and which complicate an evaluation of the wisdom of doing so (or the wisdom of merely mentioning the possibility).

As I said, the official being quoted was speculating on the range of things that the US is considering, and I doubt that particular item would be adopted. Even if the US were to undertake it, it would form a tiny component of the US policy response. It’s not something that in itself has much appreciable effect.

That you, however, elaborate on matters you yourself admit to not know about (praised be your rare honesty) is a new climax, if on the negative scale.

I always positively value such praise.

First you brilliantly repeat what I said in the first place, albeit in other words (I did say that research incl. the search for maxima is positive and important).
Obviously it didn’t occur to you to understand the point. Let me put it in a way you might understand:

I also appreciate your help in understanding what you write. You’ll note that I didn’t say I disagreed with you. Instead I highlighted the relationship between what you termed as “maxima” and “pragmatic.”

Let us assume that chances are that your X password (say Amazon, bank, …) stored MD5 hash (very low security) is with a high probability comfortably crackable or even already cracked. Let us further assume that there are some well known considerably better and more secure methods (say AES128) available to secure your password/transaction/etc.
Now, would you prefer a) to have some cryptologists to research 1024 bit very very high security, which might become widely available in, say 3 years, or would you rather b) want a reasonably secure and proven solution right now?

I would say the question illustrates the fallacy of false dichotomy and that it ignores the relationship between research and current capabilities (in that what cryptographers do at time t in (a) of your question will affect what is considered “reasonably secure” at time t+i in (b) of your question, where i has a large range of possible values).

Put differently, my answer is “both.” Presumably this is your answer as well.

Oh and btw: On what basis do you offer OpenBSD as a “maximum leaning” example for security?

I carefully qualified the example I gave, Pragma, noting it to be imperfect but sufficient to illuminate the point, and describing OpenBSD as “more” maxima-leaning in comparison to Windows at earlier points in time.

A) there is — justified — discussion inhowfar OpenBSD is to a large degree more about security theater and PR than about real security (hint: other less security noisy BSDs offer security features OpenBSD doesn’t while offering basically the same sec. features OpenBSD offers).

Fortunately my example did not require anyone to wade into that type of discussion.

B) OpenBSD does not even arrogate to it to be in (any major) security research. They merely state that they try hard to have a secure and safe implementation.

Fortunately my example did not claim this either!

It seems you don’t know about development of solutions neither. Forget your “provided the projects were brought to a certain stage of maturity”

I’m always happy to learn more about such things.

If a project has reached a “certain stage of maturity” (where “certain stage” is very different depending on whom you talk to) it hardly needs any state funding anymore. That funding is needed at an early stage.

Pragma, a project needs to reach a certain stage of maturity to have a viable chance of receiving funding, by which I mean that it must have progressed beyond the level of comments here to any one of several things depending on the nature of the funding being sought (and I mean funding in a broad sense as well). In context I would have thought my meaning to be clear, but communication is always imperfect.

And btw. we were talking about complex chips here, not about some hobby board. You know, the kind of stuff that is so immensely demanding that someone doing it with a bare 7 digit amount of $ he is considered a brilliant hero because usually those developments are in the 25+ mio $ area.

I’d heard that some research efforts can fall within those areas.

So I suggest you stay with your usa propaganda and keep away from technical and professional issues.

I always give these suggestions the attention that they deserve.

And now I’ll deeply impressed shake in fear together with all the Chinese security guys who may not come to us security conferences. Just to please you I will even try to cry. (In other words: nuland yourself!).

As discussed in other comments, US policy towards PRC commercial espionage seems to have shifted to a path with clear steps of escalation (see the article quoted here). What happens in connection with certain visas is almost entirely irrelevant to that path.

Since you introduced Nuland by way of suggesting I do something anatomically awkward, let me use her to illustrate: suggesting the US policy shift that has occurred is limited to the denial of visas is a bit like suggesting that US funding of organizations in Ukraine is limited to the provision of cupcakes to protesters.

I’m also not sure you’re recognizing what information this US policy shift conveys about possible opportunities for those interested in developing information security policies (in a broad sense), research, and “solutions.”

As one noted researcher wrote, a person should always look sharply for Fortune, for though she is blind, yet she is not invisible.

Of course, this researcher apparently died in enormous debt. But you see the point.

Skeptical May 27, 2014 5:05 PM

The Federal Trade Commission released a report today on data brokers:

http://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

The legislative recommendations include

(1) transparency to consumers as to whether their data is held, as to the data itself, and as to the sources of the data, and
(2) suppression or opt-out powers for consumers.

Slowly moving in the right direction.

Mr. Pragma May 27, 2014 5:11 PM

Nick P

wikipedia also states “IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion.”

Which, considering that IPv6 grossly fails to achieve any significant uptake, even though the current IPv4 depletion situation would strongly suggest that any alternative that was not completely insane, abominable and moronic would be more than welcome, clearly points to IPv6 having failed its mission, and lousily so.

But then, I don’t like wikipedia for anything than a (very) curse first overview. To prove me right wikipedia a little further down states “Every device on the Internet is assigned an IP address for identification and location definition” and such confirms that my dislike is well based (“identification” is correct, “location” is IPv6 hype and not correct).

Now to the points …

The "address space issues" are basically little more than IPv6 bullshit bingo. One major real problem, network renumbering, can be solved with 64-bit addresses, too. And again wikipedia, as a strong supporter of IPv6, uses BS arguments and blows the problem up.

Similarly, NAT is often badly blown up as a problem (funnily, the same game was played when the browser guys, for their very own set of reasons, brought half the house and the kitchen sink into browsers and HTTP, claiming NAT and firewalls were a problem, blah blah).
NAT is exactly the right and sensible thing for John and Jane Smith, particularly with all their IoT thingies and toasters with WiFi. The real solution would be to fix ftp and some other things, and anyway that is not IP related.
Finally, the header-field arguments seem doubtful to me. For one, the major part is related to routing magic (bullshit), mobile thingies (bullshit), and IPsec functionality (bullshit). Moreover, the headers are certainly not more efficient than IPv4 headers. Hint: look at the "next header" field …
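For the curious, the "next header" point is that IPv6 extension headers form a chain that must be walked before the transport payload is found. A toy walk (field layout per RFC 8200; the fragment header, which has a fixed length, is glossed over here):

    # Toy walk of an IPv6 extension-header chain (RFC 8200 layout).
    # Each extension header begins with (next_header, hdr_ext_len); a router
    # or firewall must parse the whole chain just to find the payload.
    EXTENSION = {0, 43, 60}   # hop-by-hop, routing, destination options

    def find_payload(first_next_header, packet):
        nh, off = first_next_header, 0
        while nh in EXTENSION:
            nh = packet[off]                      # next header type
            off += (packet[off + 1] + 1) * 8      # length in 8-octet units
        return nh, off                            # e.g. nh == 6 means TCP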

IP addresses are about identification are about identification are about identification. Period. Ignoring that simple fact is one major reason for IPv6 to lousily fail.

“Multicasting” – 64-bit addresses would offer the same. For the price of a handful of persons not having billions of addresses but just millions. I’m stepping forward to take that pain upon myself (“Pardon me? You have only about 20 million IP addresses at your home? Poor fella! How can you live with that?”)

“Stateless address autoconfiguration” – is, as soon as one puts the hype aside, bullshit to the square, a pain in the ass, and a major invitation to black hats.

Pardon me for not going on but this gets boring. Suffice it to say that if there is a major pain for millions and millions of people and those people don’t accept the cure one offers then maybe the “cure” is worse than the pain.

Anura May 27, 2014 5:14 PM

@NickP

I decided to take a closer look at the IPv6 situation. I figured some of my points might be off as I haven't used it in years. My first mistake is that I thought it was 64-bit addresses: it's 128-bit. That's indeed ridiculous because 64-bit is vastly more than enough. It's hard to believe anything over 64-bit is necessary given that supercomputing vendors believe 64-bit will be enough for *them* for decades.

Like I said, I think we should have overkill. It's not about how many addresses we need; it's about making sure that at every organizational layer we have enough overhead, while allowing us to eliminate NAT and possibly even the port concept. This is why I like giving each resident a 32-bit address space: it can allow a unique address for each connection, with overhead to spare. So with IPv6, you can assign each resident a /96, and give each device a /108. If you go this route, you can see that a 64-bit address space is insufficient. Instead, we are assigning residents /64s or /56s, which is just a waste, and that stupidity will probably cause management problems somewhere down the line as we overallocate to the point where we have more than enough addresses, but the space is so fragmented that they can't actually be used easily.

If it were up to me, I would have planned the system so that we don't expect more than 1% of the total address space to be assigned for any purpose, whether to regional authorities or reserved use, over the next 50 years. Reserve 00::/16 and FF::/16 for special purposes (private addresses, IPv4 compatibility, etc.), then assign the next 6 /16s to regional authorities, then assign the rest only as needed, leaving about .01% of all addresses assigned for any purpose to start.

The real problem, as has been stated, is the incompatibility, and for that it doesn't matter if it is 48, 64, 96, or 128 bits. The protocol should have been designed to handle this gracefully. Although I don't know how you could allow IPv4 to send to an IPv6 address, we should have been able to get the whole system using IPv6 with an IPv4 compatibility address space, allowing the translation from IPv4 to IPv6 to be transparent. Devices could have connected to their local network using IPv4 or IPv6, with the translation happening at some intermediate level until everything supported IPv6; at that point, it's simply a matter of beginning to use IPv6 addresses. It's still a massive undertaking, but better than the situation we have right now, where you can configure IPv6 but it probably won't actually be used, so good luck getting people to adopt it.
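For what it's worth, the spec does define an IPv4 compatibility corner of the address space: IPv4-mapped IPv6 addresses (::ffff:a.b.c.d). A quick sketch with Python's standard ipaddress module:

    # IPv4-mapped IPv6 addresses: representing v4 endpoints in the v6 space.
    import ipaddress

    v4 = ipaddress.IPv4Address("203.0.113.7")
    mapped = ipaddress.IPv6Address(f"::ffff:{v4}")
    print(mapped)              # ::ffff:cb00:7107
    print(mapped.ipv4_mapped)  # 203.0.113.7, recoverable from the v6 form

Mapped addresses only help a v6 socket talk to v4 hosts, though; they do not let a v4-only host reach a native v6 address, which is the hard direction described above.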

Anura May 27, 2014 5:24 PM

"Reserve 00::/16 and FF::/16 for special purposes" should read "Reserve 0000::/16 and FFFF::/16 for special purposes".

Wesley Parish May 27, 2014 6:47 PM

Some time ago we had a discussion on using 3D printers to make printed circuits and the like. Slashdot now refers us to Rabbit Pro, who are allegedly trying to do this very thing. /me wonders how much of our discussion wound up in their prototypes. Perhaps we should ask them.

Meanwhile, Eben Moglen has a very interesting discussion on the value of privacy for the preservation of democracy and human values as opposed to totalitarian values.

Clive Robinson May 27, 2014 9:33 PM

@Wesley Parish,

Speaking of 3D printers, we are obviously at the start of the summer “Silly Season” for news…

According to the lunchtime news today, a UK couple have invented a way to "print fruit" with a 3D printer, and the newscaster made some joke about bananas off the page…

Well, it's not really true. What it actually is, is a way to use fruit juice in the 3D printer jet with a gelatin solution that gels when the two come into contact. Thus the printer prints out a fruit-shaped jelly[1], not a fruit… so more uninformed journalistic hype 😉

[1] One of the annoying problems in life is the differing UK/USA use of food terms and cooking measures. In the UK we have various forms of preserves, including cordials, cheeses, conserves and jams, which vary from liquid to solid. The preserving agent is sugar, and the solidifying or gelling agent is pectin from the fruit or fruit juice [2]. In the UK we also have a range of dessert items that are not preserves even though they are made with sugar; they are meant to be eaten that day or within a few days (prior to the invention of domestic refrigeration[3]). These include curds and jellies, which use other proteins, usually from animal products (whites of eggs, boiled pigs' feet and skins, etc.) to provide the gelling agent, of which gelatine is most often used (pigs' trotters not being a popular dish these days 😉 However what the UK calls "jam" the US calls "jelly", and this causes much confusion; I don't know what the US calls what the UK calls "jelly" (which appears frequently at young children's parties, for the bright colours, wobble factor and occasionally ballistic integrity factor ;-).

[2] Apple and lemon juice, if obtained the old way, are very rich sources of pectin and are thus staples in preserve making. However, if you use the stuff from cartons you buy in supermarkets, your preserve may not set. The reason is that some industrial juice extraction processes use an enzyme to break down the pectin and other parts of the fruit pulp to get extra fluid content. These enzymes are "processing agents", not "additives" or "preserving agents", and thus often don't get a mention on the packaging, which makes it difficult to decide whether a juice is suitable for jam making or not.

[3] We now have something called "fridge jam", which is basically a way to use up soft fruits that have gone a bit too soft. Rather than chucking them into landfill where they can cause problems, you simply put them in a pan with some apple juice, bring to the boil, reduce to a simmer for a few minutes to break the fruit down, then slowly add sugar of about half the fruit's weight. Carefully bring it back to the boil for a minute or so, then pour it into a clean glass jar or ceramic dish. It will taste better than shop-bought jam/jelly and will last for a couple of weeks in the fridge or several months in the freezer; it's more liquid and thus can be used as a sauce for/in ice cream etc. I've never tried pouring it on squid, but sweet chilli sauce is made a similar way, as are some other sweet sauces made from fruits we consider vegetables (tomatoes, cucumbers etc).

Anura May 27, 2014 10:14 PM

What you call jelly in the UK, we call jello (a generic form of the brand name – Jell-O) or gelatin in the US. We use jam, jelly, and preserves interchangeably.

Printing actual food from the raw materials, much like cold fusion, is perpetually 20 years away.

Buck May 27, 2014 10:52 PM

In the States, for some reason, you'll be far more likely to find that bright wobbly 'treat' in a hospital than at any children's parties…

Figureitout May 27, 2014 11:16 PM

Nick P
As I said, there are several in parallel with different uses.
–I'm just waiting then; as I'm sure you know, designing it is infinitely harder than building it. Wonder what kind of bugs you'd get. I'll keep being an annoying thorn until you can report some nice progress, sorry. Then I'll think about how to hack it. :p

Yes, I noticed your posts on the possibility of a microcode obfuscation trick, but how to do it is still mostly a hidden endeavor.

I'll email Dr. Skorobogatov w/in the week, see what he has to say. Bruce or the MOD hasn't told me to go to hell yet; and it sounds like Clive can keep his "toys in the pram".

Mr. Pragma RE: spam
–The blog has more to deal w/ than spam of course; sometimes the MOD removes it w/in seconds…

koita nehaloti
–You'd need a homemade fume hood and some careful reading before you decap chips w/ nitric acid or whatever. Making a decent microscope is a project in and of itself, and the good ones aren't cheap. You'd also have to look in the PCB using radiation b/c a hidden antenna is also a concern. Lastly, what gets me is that it's still "eeny-meeny-miny-moe": you have to trust that 2 or more chips are from the exact same place, made exactly the same and not swapped, when choosing which one you rip open and which one you use… My dad showed me some little signal analysis boards (I need to get the exact name again) to use on a spectrum analyser. Would like to try that and look into it some more.

Actually, Ken Shirriff (legit hacker with nice/very useful blog posts) recently RE'd a TL431: he literally just split it apart w/ pliers, took a basic microscope to the die, and got a pretty nice picture of most of the guts; he's still doing a more thorough decapping and RE to compare.

http://www.righto.com/2014/05/reverse-engineering-tl431-most-common.html

Still, if I saw people really taking the reins in this area, that’d be awesome. I hope to try and use a microscope at my school before I’m done.

Wael May 27, 2014 11:23 PM

One of the annoying problems in life is the differing UK/USA use of food terms and cooking

I find that interesting! It can actually be rather funny…

BE: Smart = handsome; AE: Smart = intelligent
BE: Biscuit = AE: Cookie (in the US a biscuit is a kind of bread)
BE: Grocery = AE: Store (in the UK, a store is storage)
BE: Rubber = AE: Eraser (in AE, a rubber is a prophylactic)
BE: Mad = AE: Crazy (in AE, mad means angry, i.e. BE: cross)
BE: Lift = AE: Elevator
BE: Flat = AE: Apartment

The last one, I have a story on: A colleague of a colleague of mine went to the UK for a few months' work. When he arrived at the airport, he asked one of the attendants for a dolly. This is the conversation that took place:
Attendant: Good god, man! You just got off the plane!
Colleague: Yes! I did!
Attendant: You can't be that desperate!
Colleague: What do you mean? I need a dolly — a hand truck for my language, they're rather heavy!
Attendant: Oh! You mean a trolley!!!
Colleague: Yes! What's that all about?
Attendant: Dolly means prostitute in the UK 🙂
Colleague: Oh!

For more check these out…
http://en.wikipedia.org/wiki/Comparison_of_American_and_British_English
http://effingpot.com/food.shtml
For some reason, I like these two definitions:

Nick – To nick is to steal. If you nick something you might well get nicked.

Suss – If you heard someone saying they had you sussed they would mean that they had you figured out! If you were going to suss out something it would mean the same thing.

Nick P May 27, 2014 11:35 PM

@ Wael

Oh you’re funny. I thought to Nick was to enlighten. Guess I’m not well-versed in British. 😛

Anura May 27, 2014 11:45 PM

I got very confused when a girl in my class asked me if I had a rubber, and then asked me to knock her up over the weekend.

Wael May 27, 2014 11:51 PM

@Figureitout,

Actually, Ken Shirriff (legit hacker and nice/very useful blog posts) recently RE’d a TL431 and he actually just literally split it apart w/ pliers and took a basic microscope to the die and got a pretty nice picture of most of the guts; still using a more thorough decapping and RE to compare…

Nice work! OK, so he RE'd a chip with eleven transistors. How long would it take him to RE the GK110 Kepler, with over seven billion transistors?

Clive Robinson May 28, 2014 12:00 AM

@Anura,

Dropping IP addresses and port numbers and replacing them with what are in effect unique circuit numbers is one thing that does make sense for connection-oriented traffic such as TCP, but not for datagram-oriented traffic such as UDP; not that UDP represents much of Internet traffic (and it could be replaced with TCP with little trouble).

When talking about IP size, it does make sense to think of the three parts (network identifier, host identifier and port identifier) as one unique number. However there are some issues with doing this (port numbers change).

However it won't remove the need for NAT and PAT. Although originally designed as a stop-gap measure to preserve IP numbers and reduce costs for some ISPs, they have quickly become useful for security and for large-site issues such as load balancing and configuration management.

It is also going to cause problems with mobile devices. The simple fact is that we have buried our heads in the sand over mobile devices and made hand-waving gestures about them, including the very mistaken idea that because mobile phone protocols work it's not an issue.

It is an issue, it's going to get a lot worse before it gets better, and neither IPv4 nor IPv6 is up to the job.

To give an idea of what the issue is: mobile devices come broadly in two types, those local to one or more routers under the device owner's control, and those connected to routers in wide area networks not under the device owner's control. Thus your laptop connected via your home wifi is the former, but via a mobile dongle the latter. But what about the likes of your Bluetooth headset? It's local to your laptop, which might be on your wifi, or your work's wifi, or an airport wifi, or using a mobile phone provider from a train. In the Internet of Things the headset would have its own IP address etc. Whilst the user is more or less static via a wifi connection, some of the problems can be resolved (even under IPv4) by treating the devices as actually static, provided they are in effect NATed and get a different IP address for each network they connect to. But that only solves the issue of "user-originated outbound traffic", not the problem of inbound traffic from other users somewhere else in the cloud. Thus the current solution is like having a mobile phone that can only make calls, never receive them, which does not make for a working network.

The solution for inbound traffic is to have a "known place of contact" on the network: in essence a database that knows your current contact details. But it has a major issue, which is the currency of "location" and status information; that is, it has to know what your current IP address is and whether the laptop is on or off. That might be manageable for static use via wifi, but it is not going to work too well for mobile use on a train.

Back in the 1980s, when mobile phones started to be designed, there was a problem: previous mobile designs for car phones did not scale, for various reasons. One of these issues was location and status. The simple solution was a central database; the phone would send out a beacon every few minutes to say it was still listening, and this was used to update the database. The problem is that the more frequent the updates, the more network bandwidth they take up, and such engineering traffic can easily exceed consumer traffic and swamp the database. Decreasing the update rate eases these problems but gives rise to others, specifically with inbound traffic: basically the time delay to check that the mobile is where the database says, and in the state the database says. Users are not happy with waiting more than 30 seconds and then being dropped or diverted to voice mail.

And it's this trade-off between connect time and network traffic that is the real killer. The reason it can be made to work with mobile phones is complicated, but it fundamentally requires a hierarchical system that is controlled not by the user but by the system provider, with the user accepting a half-minute connect delay and an unreliable connection. The Internet is designed to be neither hierarchical nor centrally controlled, and most apps cannot deal with a thirty-second connect time or an unreliable connection; thus the current mobile phone solution is not going to work on the Internet at the IP level and above for mobile devices.
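A toy model of that beacon-rate trade-off (a sketch only; it assumes nothing about real mobile-network internals): the registry knows a device's location only as of its last beacon, so longer beacon intervals save engineering traffic but make inbound routing more likely to use stale data.

    # Toy location registry: beacon interval vs. stale inbound routing.
    import random

    def stale_fraction(beacon_interval, move_interval, trials=100_000):
        stale = 0
        for _ in range(trials):
            since_beacon = random.uniform(0, beacon_interval)
            since_move = random.uniform(0, move_interval)
            if since_move < since_beacon:   # moved after its last beacon
                stale += 1
        return stale / trials

    for minutes in (1, 5, 30):              # beacon every 1, 5, 30 minutes
        print(minutes, stale_fraction(minutes, move_interval=60))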

Then there is a secondary issue: under the IoT, that headset would need to be dealt with as well as the laptop, and the user is going to want it to work with both their laptop and their mobile phone. This is going to cause a host of new issues, as well as adding to the location and status issues.

As far as I'm aware nobody has come up with an agreed and workable solution to the problem, and we know application developers don't have a clue about it, nor do those who write the libraries and some modern programming languages.

Neither IPv4 nor IPv6 can deal with these issues no matter how much you mangle them, so until they are solved, arguing between 128 and 64 bits is a little like arguing over the shade of red to paint the fire truck when the town is already burning.

I can easily see why 64 bits might not be enough, and I can also see (as with house numbers) why 10 bits would be sufficient for all non-routing or leaf hosts. The problem of "too many bits" really only applies to embedded systems that are small-form-factor and running on batteries, which is currently not an issue but will become one if the IoT visionaries have their way.

Personally I think we should drop the idea that "all hosts are equal" and look at hosts as "nodes" and "leaves"; that is, those that route traffic are nodes, and those that don't, and generally run the app, are leaves or devices. This does not preclude a device such as a laptop being both, but it would require root-and-branch surgery to many OSs, and that is going to have to happen anyway.

As I've said on many occasions with regard to crypto functions, you need a proper framework of standards, not a series of kludges forming an inverse pyramid of decreasing stability.

All of which means we need an interim solution, one that could quite easily be built from IPv4, NAT and PAT, but for one minor issue: the IPv4 address space has become a commodity market where address blocks are bought, sold and leased. There is a lot of money tied up in this market and quite a few jobs; although it is a faux market, controlling or killing it is fraught with political problems which nobody appears to want to resolve, judging by the last ITU gathering. And no matter how much naysaying goes on, unless the political, mobile and IoT problems are addressed correctly, we will be having this or a similar conversation in five years' time. Personally I would forget NIST, the IETF and the UN; they don't individually have the credibility or experience to solve the problem. I would look at how the various EU engineering standards organisations solve such issues: they went through the various baptisms of fire and now know how to forge working standards that solve both the technical and political problems associated with devolved markets.

Wael May 28, 2014 12:02 AM

@Nick P,

Guess I’m not well-versed in British…

Well, we are in the same boat, and apparently we’re both seasick! Spell checker changed “Luggage” to “Language” and I didn’t catch it…

yesme May 28, 2014 1:33 AM

@Mr. Pragma

“IP addresses are about identification are about identification are about identification. Period.”

Bravo. You are exactly describing what’s wrong with committee design.

“Ignoring that simple fact is one major reason for IPv6 to lousily fail.”

I tend to go with Anura for this one. IPv6 doesn’t have anything to offer (well, not until it starts to hurt economically). So if you want to push IPv6 you can do two things:

1) Make the transformation painless.
2) Design all new IP-related protocols or improvements for IPv6 only (think IPsec). That way IPv6 has real and noticeable benefits.

I think that's why IPv6 has failed so far. But I do think IPv6 will be the dominant one within 5 years, despite its shortcomings.

Clive Robinson May 28, 2014 2:55 AM

OFF Topic :

China is upping the ante over US cyber relations,

http://qz.com/213398/the-escalating-us-china-spying-war-is-mckinseys-loss-and-huaweis-gain/

Basically, China is banning the use of contractors; it's not clear if this is just in the ICT sector, or broader to include financial services, or completely across the board. Whichever it is does not really matter: it's going to hit home on the US and the other 5-Eyes nations. Meanwhile it would appear that China, at least, is seeing an increase in revenue from Europe and other places where NSA-implanted or otherwise adulterated US kit/software/services have been shunned.

And to make the pain worse, China is dragging Cisco into the cyber-spying issue despite their protestations of innocence,

http://www.nytimes.com/2014/05/28/business/international/china-pulls-cisco-into-dispute-on-cyberspying.html

But there may be worse to come: according to anonymous sources via Bloomberg, China has told its banks to pull the plug on IBM equipment due to potential financial espionage worries,

http://www.reuters.com/article/2014/05/27/us-ibm-china-idUSKBN0E70S620140527

If true, then Chinese banks may well see a lot of offshore business head their way, as there are suspicions that the NSA has been supplying information on the financial activities of organisations and other governments to various US agencies, who have used it to get fines etc…

But it appears that the cracking and cyber-extortion brigade are still doing their thing, in this case with Apple's iPhones,

http://www.telegraph.co.uk/technology/apple/10857715/iPhones-frozen-by-hackers-demanding-ransom.html

It’s nice to see a bit of reality still happening in the cyber world, and for some twice as nice as it’s the fanbois getting a portion of the action 😉

Christian May 28, 2014 4:00 AM

http://www.theguardian.com/technology/2014/may/27/-sp-privacy-under-attack-nsa-files-revealed-new-threats-democracy

This looks like a must read.
Essay by Professor Eben Moglen.

“Last century we desperately fought and died against systems in which the state listened to every telephone conversation”

“We’ve lost the ability to read anonymously. Without anonymity in reading there is no freedom of mind, there’s literally slavery”

“The people of the United States are not ready to abandon our role as a beacon of liberty to the world. We are not prepared to go instead into the business of spreading the procedures of totalitarianism”

Mr. Pragma May 28, 2014 4:19 AM

Clive, as expected, got it about right.

Ports, of course, can be considered just another part of identification. They are important and useful, but as "inner node/leaf addresses".
So there are 3 parts: the network, the nodes/leaves, and the "internal leaves", typically associated with services.

Ideas to get rid of ports basically just shift the tag from "port" to "ip address". And this is wrong, also conceptually, because an IP address identifies an element within a network, but it does not, and should not, care about that element's details, somewhat analogous to how the name server for a level must not care about the details of name servers at lower levels. This, btw., is not a purely technical question but one with social, political and legal dimensions, too (e.g. "my server" as opposed to "the network"). That's also the reason the different notation, ':' vs. '.', makes sense; it actually marks a boundary.

And it carries major practical implications, too. Right now, we have pretty much everything from the network stack up to gazillions of pieces of software based on the partitioning assumption "IP address ~ this network element/node/leaf" and "port ~ service".

In other words, pretty much everything the IPv6 "let's change about everything (but let's at the same time ignore the original and urgent problem)" idiocratic gang succeeds in pushing through will wreak havoc with often critical infrastructure.

As for NAT (and relatives): that's a great thing and an important one. And as if to prove gross inconsistency and a grave lack of reasoning capability, the IPv6 idiocracy would once more like to break important infrastructure for the oh-so-joyous fun of basically just shifting tags. So, rather than addressing something as "a.b.c.d:p" (and having a router/firewall do its job within the proper authority) they would like to write "a:b:c:d:e:f:g:h" to "make things better".

It becomes more and more clear to me that a major first step to solving the IPx issue would and should be to build spacious and well-protected mental asylums for the IPv6 idiocracy gang, and to then let proper and responsible professionals find a useful, practical, and acceptable solution for the real world out there.

Ad “mobile”:

The “logic” is broken in the first place. IPv6 and pre IPv6. Playing funny games with IPv6 a) is addressing the wrong problem and b) of course “solving” it in the wrong way, too.

For starters, usually it's not about devices but about people. All the funny IPv6 mobile gadgets are worth little as soon as you change your device. And, frankly, which server cares about your device? The web service you connect to doesn't; it cares about the user. And so does the mail server. And pretty much every other server, too.

"But," I hear IPv6 proponents say, "what if I happen to change cell zones just after sending a request and get another IP?" The answer is simple: that's an IP housekeeping/management problem for your mobile provider. And fu**ing NO, the world does NOT need a completely new, problem-ridden, nightmare IP implementation to solve that problem. And while we're at it: no, we do not need a new IP implementation for the case where you have a flat tire, or your mom would like those cuuute doggy pix in higher resolution (I mention that for prevention reasons; you never know with the IPv6 idiocracy …).

And all that because some (typically US-American) corporations, agencies, and universities feel entitled to millions of IPv4 addresses, so as to have their printers and coffee machines internet-reachable by public IP.

You want a simple and efficient interim solution to gain some years, in order to reset and start over reasoning about a sensible IPvX?

Reclaim each and every class A and B range not held for ISP (and related) purposes, and hand those organisations a class C range instead, if they can reasonably justify a need for it; else give them a /28. There are schools all over the world that do not even have that. Simple as that.
If you want to do something good on top, allocate IP ranges roughly corresponding to the number of humans in each geographical area.

Mike the goat (horn equipped) May 28, 2014 4:50 AM

Re IPv6: it is a catastrophe, and there are myriad reasons why, more than a decade on, we haven't all migrated. If address space is the primary driving issue, then we could just extend the address field from four to eight octets.

My IP v4.1 proposal is to take RFC791 and make the following tiny changes:

IHL from 4 to 8 bits.
ToS eliminated; use the options field if necessary for packet ToS marking.
TTL from 8 to 7 bits. A max TTL of 128 is more than sufficient for today's internet.
Version header from 4 bits to 2. IPv4.1 packets will stick out like a proverbial sore thumb anyway, and I doubt we will have so many revisions that we require an entire 4 bit version field.
Checksum from 16 to 8 bit; this is more than enough space.
SRC and DST from 32 to 64 bits.

So after these small optimizations we have managed to increase the header by only 49 bits (a quick tally is sketched below).

While we are at it, let's mandate a minimum path MTU and do away with the DF option. If a router receives an oversized packet it should rewrite it into smaller fragments AND send an ICMP response to the sender advising them of the max MTU of the link. Only one response will be issued, to prevent flooding; if the peer still decides to send oversized packets, then the router must continue to do the fragmentation on the peer's behalf.

Anyway, that is my proposal.
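A quick check of the bit budget for this proposal (a sketch; the field sizes are just the ones listed above, compared against RFC 791, and nothing here is a standard):

    # Bit budget: hypothetical "IPv4.1" header fields vs. RFC 791 originals.
    rfc791 = {"version": 4, "ihl": 4, "tos": 8, "ttl": 8,
              "checksum": 16, "src": 32, "dst": 32}
    v4_1   = {"version": 2, "ihl": 8, "tos": 0, "ttl": 7,
              "checksum": 8, "src": 64, "dst": 64}
    print(sum(v4_1.values()) - sum(rfc791.values()))   # 49 bits added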

Clive Robinson May 28, 2014 7:04 AM

OFF Topic :

Due to other “news noise” I almost forgot about this,

http://www.theguardian.com/world/2014/may/08/japanese-man-arrested-guns-3d-printer

The 27-year-old appears to have had his own 3D printer and printed out five different guns from designs downloaded from the internet.

The important point to note is that apparently two of them are now known to have worked and were capable of killing or injuring other people, without having any metal parts in the guns.

This raises the question of whether such high-density plastic polymers show up on X-ray or microwave-radar-based scanners. In the unlikely event that they don't, there may be renewed questions about barrier security and how to implement new or modified mitigation procedures.

yesme May 28, 2014 8:09 AM

@Clive Robinson

As a mechanical engineer I happen to know quite a lot about plastics (well, more than the average person I guess).

There is a good reason why the barrels of conventional weapons are made out of high strength steel. It is simply the best material.

Plastics are bad construction/tooling/weapon-barrel materials, especially the commodity thermoplastics used in most 3D printers (shown in the picture); reinforced epoxy is a lot better. Another problem is heat: these plastics melt at around ~300 degrees Celsius or below.

The barrel probably needs to be replaced after every shot.

Besides that, the accuracy of 3D printers is nowhere near the finishing requirements for barrels. For the finishing you need extra equipment (a lathe), but even then I think it's impossible to reach the required tolerances.

Plastics also react more to environmental changes such as heat and moisture, which is devastating to the tolerances. So the barrel needs to be oversized, and that results in lower accuracy and impact force.

The ammo is probably still ordinary ammo that you have to buy in a gun shop, and it is detectable with a metal detector.

To me it’s a gimmick. I think you can roughly compare it with the flintlock pistol. Lethal but very clumsy.

If you plan to do something nasty with this kind of equipment, you better think again and get the real stuff.

Clive Robinson May 28, 2014 9:00 AM

@yesme,

Whilst you and I know that there are many things wrong with these plastic guns, the vast majority don't, and that's the problem.

If a terrorist or criminal walks into a place with one and finds or manufactures an excuse to shoot one person or object without the gun visibly breaking down, then virtually nobody who sees it will believe that the gun is not capable of working sufficiently accurately or lethally another dozen times, with them among the fatalities.

Also, I know from experience when younger that a suitable tree branch, when drilled out and bound, will, with a “mouse trap” firing mechanism, repeatedly fire .22 rimfire with sufficient accuracy to hit tin cans at twenty to thirty feet, and certainly with enough to kill squirrels when set up as a top-of-fence trap.

So yes I have concerns that such a weapon will be effective if used in the right way at the right time.

Clive Robinson May 28, 2014 9:16 AM

@Nick P,

It might be of interest to you that the latest version of Steel Bank Common Lisp has been ported to ARM under Linux,

http://www.sbcl.org/news.html?1.2.0

Which means there is a reasonable chance of getting it to run on the likes of the Raspberry Pi.

Nick P May 28, 2014 11:33 AM

@ Clive Robinson

Interesting. I looked up the platforms it’s on. The Linux version supports quite a few architectures. Funny that the Alpha architecture was supported long before ARM. Two recent projects, a certifying compiler and a tagging engine, also used Alpha ISA for their work. Goes to show that there’s something about those Alpha processors that makes people just not want to let go of them.

Tyco Bass May 28, 2014 12:10 PM

Re: We’ve lost the ability to read anonymously.

Buy a physical book with cash, throw away the “shoplifting” chip stuck inside (if any)–read anonymously.

DB May 28, 2014 2:10 PM

IPv6, disaster as it is in many ways, is not as catastrophic in practice as it is made out to be. All popular modern operating systems have shipped for years with a dual-stack IPv4/IPv6 setup that pretty much works (early adopters should expect some bugs, but they are going away). Many popular ISPs offer IPv6 via 6RD to early adopters (which requires minor configuration on your router, assuming you have a newish one; and all cheap home routers are crap in the first place, so beware), and within a few years I’m sure they will offer real native dual-stack IPv4/IPv6 at their routers by default. Current adoption rates range from 3% to 30% depending on where and how you measure, but all measurements show a sharp exponential growth curve over the last couple of years. Using IPv6 today is the “slownet” compared to IPv4, but that will change over time as adoption grows, since it’s only natural to optimize your network for what’s currently most popular.

I’ve been using it at home with my cable internet provider for years now; once set up, it just works. The biggest pain was my router/firewall setup, because I was building my own from scratch out of all open-source software.

none May 28, 2014 3:14 PM

As asked by the moderator (though you could have copied that message here, moderator 🙂):

Truecrypt.org has some news. It says that TrueCrypt is discontinued due to Microsoft ending support for Windows XP, and it explains how to migrate your data to BitLocker (!!!)

All previous versions on SourceForge no longer exist.

This happened more or less two hours ago. Little news on the net so far.

There’s a “7.2 version”, much smaller, that allows you to mount previous volumes but doesn’t allow creation of new ones.

The sig of this 7.2 version seems to be made by the owner, it matches previous versions.

Some comments are only available on 4chan, but most are useless or don’t give more information than the official TrueCrypt web page.

Petrobras May 28, 2014 5:13 PM

@Alex: “How about having a really simple (check-able) machines or component to perform encryption”

Please look up old comments threads on schneier.com in which I (Petrobras) was active. Especially look up keywords “3D” and “fab”.

@Wesley Parish: “Some time ago we had a discussion on using 3d printers to make printed circuits, and the like. Slashdot refers us now to [rabbitproto.com].”

Thank you very much Wesley for that link.

“Resistance in Ohms: (19.77 * length/width) + 12” for a connector with a thickness of 50 microns, according to their documentation: http://www.bareconductive.com/wp-content/uploads/2014/03/2014.ApplicationNotes_ElectricPaint.pdf

Let’s say a transistor’s input has a resistance of 45 kOhms and you drive it with 4.5 volts: that makes 0.1 mA. So, to be able to drive it from a 5-volt power source, you should not waste more than 0.5 V in the connector; at 0.1 mA that allows at most 5000 ohms in the connector, so its length/width ratio is at most (5000 − 12)/19.77, which is roughly 250.

Good news: 250 is doable, although 2500 would have been better.
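
(Replaying that estimate in a few lines of Python; the two constants come from the app note above, and the 45 kOhm input with a 5 V supply are my assumptions from this post:)

    # Max length/width ratio for a 50-micron-thick electric-paint trace,
    # given we can afford to drop 0.5 V across it at the input current.
    R_PER_SQUARE = 19.77   # ohms per length/width unit, from the app note
    R_CONTACT = 12.0       # fixed offset in ohms, from the app note

    i = 4.5 / 45_000                    # 0.1 mA into the transistor input
    r_max = (5.0 - 4.5) / i             # 5000 ohms allowed in the trace
    ratio = (r_max - R_CONTACT) / R_PER_SQUARE
    print(f"max length/width ratio: {ratio:.0f}")   # ~252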

You may be able to build the StrangeCPU processor linked in https://www.schneier.com/blog/archives/2014/04/friday_squid_bl_422.html#c5686391 with a 3D printer, onto a set of standard fast PNP transistors.

You may even use two tampered 3D printers to print one untampered processor, as discussed in posts that a Google search for 3D+petrobras on schneier.com will turn up.

So this comes to 4000 bucks to make one untampered processor. Quite nice, although the circuit of StrangeCPU has not yet been released as far as I know.

You may also make your own 6502 processor for the same price; see its circuit on Visual6502.org. But without authorization, this is not legal 🙁

There may be other suggested processors with public circuits in the other threads where I was active.

Figureitout May 28, 2014 5:51 PM

@Wael
“so he RE’d a chip with eleven transistors.”
Lol, yeah… child’s play compared to the unrealistic feat of doing a real chip, but it still makes the subject approachable, and you can switch one thing with one transistor. And how many chips have you sussed? I don’t get that chip-making process; fascinating that they actually work and we can’t even physically check them…

Figureitout May 28, 2014 6:14 PM

And regarding those “spectrum analysis test boards”, they look kind of like this, but much simpler (just 2 coax connectors on a copper board that’s been scraped to create an “open wire” between the coax connectors):

http://w7zoi.net/mixed-bag/mixed_bag.html

http://www.qsl.net/n9zia/scotty/clock.html

It’s meant to test components (not chips per se) that you just solder between the coax connectors, which you obviously still need for a PC, and you can verify some of the specs of the components (you still have to trust your test equipment… gah). If I get a chance to try it I’ll describe it better, as I don’t have a spectrum analyzer at home.

koita nehaloti May 28, 2014 7:00 PM

1. I want a proprietary software house with over 100 employees and global reach to use encryption. I have a good chance of getting their attention by writing to their discussion forum. But encryption is low priority for them and for most users, so it has to be easy. I have heard that the interfaces of OpenSSL and LibreSSL are difficult. Is there a wrapper library that makes using them easier, or would writing one be easy for those who know OpenSSL? What should I link to? I would like to link to a recommendation by Bruce. We should all consider urging the makers of our favorite software to use encryption where it fits.

Firstly, the wrapper should free the programmer from choosing algorithms and making tradeoffs. Maybe just 2 security levels: medium security and high security.

2. Does C++11 have cryptographically secure randomness functions in its “random” header? (I won’t risk the page formatting with “smaller than” and “larger than” signs.)

Nick P May 28, 2014 8:54 PM

What do people think of the truecrypt site? This is a pretty interesting statement:

“The development of TrueCrypt was ended in 5/2014 after Microsoft terminated support of Windows XP.”

They don’t show anything of their old site. Just migration instructions and a statement that it’s not secure. And an implication that it was funded by or depended on Microsoft. The first thing I thought was that they might have been hacked, with something put on the site to drive people away from TrueCrypt. Then again, it might be exactly what it appears to be. Strange, strange.

Thoth May 28, 2014 9:08 PM

I suspect the recent ruse, where the current TC webpage points users to a Government-compromised Microsoft BitLocker and to a TC version 7.2 while obviously telling users to stay away from TC at all costs, might come from one of two directions:

1.) Government BlackOps operation.
2.) Some annoying hacker trying to prove themselves.

For those who are following my blog’s Truecrypt tutorial, please read my latest post (www.thothtech.blogspot.com/2014/05/truecrypt-compromisation.html) and do not panic.

Anura May 28, 2014 9:18 PM

I almost wonder if they were planning to get that working by April first and it was just too large of an undertaking. It doesn’t really make sense; I mean, I can see the project shutting down, but that it has anything to do with Windows XP support ending is just… odd. Especially during a time when it is being audited for the first time.

I see four possibilities:

1) The devs have no more use for it, don’t want to spend so much effort maintaining it just for other people.
2) The financial backers have no more use for it since Windows has bitlocker built in
3) The message is not legit
4) A TLA got involved after seeing rising usage after the Snowden revelations

For numbers 1 and 2, why abandon the project like this? Why not release it under a more permissive license? For number 3, if it’s not legit, why the silence from the devs? For number 4, this doesn’t make TrueCrypt unusable, and it’s just going to make something else pop up; there doesn’t seem to be much reward in it.

Personally, I’d like to see some new alternatives come up, designed so that the cryptography is contained in a small, easily verifiable component, while all the usability and virtual disk stuff is completely separate.

moo May 28, 2014 10:05 PM

About TrueCrypt shutting down:
https://www.techdirt.com/articles/20140528/14144527380/truecrypt-page-says-its-not-secure-all-development-stopped.shtml

Someone noted that the signing keys had been changed just four hours before the new version was uploaded; it seems sensible to not trust ANY binaries from that site until the circumstances are better understood; the new version could be compromised. The whole thing smells fishy.

The two most plausible-sounding theories I’ve seen so far are (1) that they received a National Security Letter (or several) and fell on their sword in the same manner as Lavabit, or (2) that the site has been hacked, whether as a prank or for a nefarious purpose.

Thoth May 28, 2014 11:02 PM

In retrospect, here is what the TrueCrypt developers could have done to prevent a single point of failure:

1.) Sign their code and release it to multiple mirror sites. I don’t see TrueCrypt devoting any resources to mirroring their work.

2.) Protect the TrueCrypt website with HTTPS by default, which it is not.

What we can do now:

1.) If you are interested, building your own binaries from older versions of the code would help to settle your mind.

2.) Mass panic and knee-jerk moves away from TrueCrypt might have been the exact intention (if this was done by Government BlackOps or some malicious people). The version of TrueCrypt reviewed by Matt Green should be used as the baseline.

If malicious hackers or wannabe-famous hackers are behind the attacks on TrueCrypt, they did a good job exposing the weak state of web security of TrueCrypt’s website, and of SourceForge, since I believe TC’s website is hosted on SF.net. But the downside is that these hackers have made a mess of everything: trust in security products that people have relied upon now falls into nothingness.

If the attackers are state sponsored or Governmental BlackOps, then they have yet again decided to undermine the privacy and trust of the Internet. I have been talking to many people recently, and those whom I have known for years to be unconcerned with privacy are now very concerned about what is happening and whom to trust. Someone I know (who is usually not concerned with cyber security and privacy) told me that the Internet has failed to deliver a secure online shopping and transaction platform, and I agreed, feeling bad that for all the hard work that has gone into trying to provide a secure Internet, all it takes is for Governments to undermine these very efforts. I hope all Governments (including China, USA, Britain…) realize how badly they have damaged the trust in information we once had.

Jacob May 28, 2014 11:02 PM

The Win XP excuse is totally lame. So is the recommendation to go with BitLocker: I think that (at least with Vista/Win7) you need the Ultimate edition to have it, which is a less-than-common configuration.

Adding another guess to the mix: the crypto audit is active (or about to become so), and if TC has been subverted, the audit may bring this to light. Much safer to burn the house down now and run away.

Thoth May 28, 2014 11:15 PM

A plausible-deniability file container/system/format I have been playing around with for some time:

The BMICS idea is to have collections of datasets in the form of blocks. If you encrypt the blocks differently, using different keys and algorithms (the format does not seek to control any algorithm; it only specifies a layout), and something breaks, you don’t have all your eggs in one basket. One good thing BMICS may offer is that the more variety of data you have, the more plausible deniability you have in hand if you are forced to decrypt: you can claim that the blocks are random or do not belong to you, hence you don’t have the keys and you don’t know the cipher algorithm. BMICS does not care about and does not record the cipher algorithms, key lengths, or techniques used to secure the data. It simply provides a format.

The one bad thing about BMICS is that to decrypt your data you need to test all the blocks linearly, and if you have a huge dataset, that is going to slow you down pretty badly.
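
To make that linear scan concrete, here is a minimal sketch of “test all the blocks with the key you hold” (my own illustration, not code from the BMICS spec; it assumes the PyNaCl library as a stand-in for whatever cipher a block actually uses):

    # Trial-decrypt every block with one key; blocks sealed under other keys
    # (or blocks of pure noise) fail authentication and are skipped. Cost is
    # O(number of blocks) per key, which is the slowdown described above.
    import nacl.secret
    import nacl.exceptions

    def open_my_blocks(key, blocks):
        box = nacl.secret.SecretBox(key)
        mine = []
        for blob in blocks:
            try:
                mine.append(box.decrypt(blob))
            except nacl.exceptions.CryptoError:
                pass  # not ours, or filler; indistinguishable by design
        return mine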

The BMICS specification has undergone two modifications (mod 1 and mod 2), which means you have to read both the mod 1 and mod 2 specifications to understand the changes.

I have not found time to complete the implementation of the source code.

Link: http://sourceforge.net/p/bmics/code/ci/master/tree/

Jacob May 28, 2014 11:20 PM

Interesting tidbit from @jeffrogomez on Twitter:

“In 2011, the NSA bought iOS and OS X developer accounts from Apple:
https://muckrock.s3.amazonaws.com/foia_files/39609228865161.pdf

Received via FOIA.”

The good thing here is that Apple did not provide the NSA access to modify their codebase, so the NSA had to go through the “Approved Dev” channel.

Apple products are clearly suspect now – who knows which modules are subverted by this?

Yesterday I saw some pre-program video highlights of NBC’s Snowden interview, where a journalist talked about his first meeting with Snowden in his Hong Kong hotel room; when one of the guys pulled out an iPhone, Snowden paled and asked the guy to take it outside immediately.

Zonzo May 28, 2014 11:23 PM

Here’s where I AGAIN point out that someone needs to find Sarah Dean and find out why she stopped developing FreeOTFE so abruptly.

With the Lavabit scenario in mind, it seems to me we can never assume anything less than enemy action.

TKS May 28, 2014 11:52 PM

If the NSA/FBI/whoever had forced me to make TC less secure or to disclose private keys, and I weren’t allowed to tell the public, I would do it exactly this way.

Archer Ordell May 28, 2014 11:55 PM

“You turn on the TV, and you see very bland interviews. Journalists in the United States are very cozy with power, very close to those in power. They laugh with them. They go to the [White House] correspondents’ dinner with them. They have lunch together. They marry each other. They’re way too close to each other. I think as journalists we have to keep our distance from power.”

“I’m not seeing tough questions asked on American television,” he added later. “I’m not seeing those correspondents that would question those in power. It’s like a club. We are not asking the tough questions.”

http://www.mediaite.com/online/jorge-ramos-reporters-cozy-with-power-act-like-theyre-in-a-club/

Thoth May 29, 2014 12:34 AM

@Archer Ordell

We have been accustomed to thinking that the Powers That Be are unquestionable and should not be questioned. Our families taught us that questioning our parents or elders is bad and punishable. How many of you can confidently say you were brought up in an environment where you could get away with questioning the Powers At Home without receiving a good spanking, and where, more than that, the Powers At Home would calmly and logically explain things with great patience and tolerance?

We do occasionally attempt to question the Powers That Be, and they give us neither an easy answer nor an easy time. They might attempt to punish us for questioning as well.

I wonder what would drive society and individuals to evolve beyond this constrained mindset of blind obedience to the Powers That Be?

Jacob May 29, 2014 1:14 AM

@Zonzo

It is hard to find any sound logical explanation for all of this. If Sarah Dean had to close shop and disappear due to pressure from a TLA, one would expect the same pressure to have been applied to TC sooner rather than later, not waiting 2-3 years to do the same.

Clive Robinson May 29, 2014 2:08 AM

@Figureitout,

Not quite sure what you are looking for re the spec-an.

The simplest circuit that works is a varicap “capacitively tuned quarter-wave” line resonator with a simple detector diode. It’s a PCB with a stripline quarter-wave line, with a tee-off at the fifty-ohm point close to the ground end of the line, which is the RF input. A second tee-off, usually around the hundred-ohm point, feeds a detector diode with a smallish-value cap to ground; this is the detector output to your display. The varicap is at the hot end of the quarter-wave line and has a small-form-factor resistor from it to your sweep input voltage.

A decade or two ago you would use an o’scope with an external sweep input as the display and an audio sawtooth generator to supply the sweep input to both the varicap and the o’scope. These days you’d use a cheap PC audio card both to generate the sawtooth and to act as the detector display on the PC screen.

You can improve the design by replacing the low-Q PCB quarter-wave line with lengths of semi-rigid coax, which have a much higher Q. You can then build much higher-Q circuits by making your lines out of home-made resonators of copper tubing.

Whilst it’s very simple to build, the tuning range is usually only about 30% of the resonator’s natural frequency, and the centre frequency is not linear with sawtooth voltage. However, if you have a scanning receiver with an IF output connector, making one to sweep across the IF makes it a useful diagnostic tool when bug hunting etc. If you use an SDR package on a PC you’d be surprised at just how much you can do for a couple of USD. For experimenting with the idea, don’t use expensive varicap and detector diodes; just start with a couple of 1N4148 signal diodes, which work quite well as both varicap and detector. If you want more varicap range and are working at a low frequency, LEDs work surprisingly well. If you are mechanically adept with your hands, you can build a prototype dead-bug style on a scrap PCB off-cut, using the component leads to make the resonator. And if you don’t have scrap PCB off-cuts, have a look around a craft or hardware store for adhesive-backed copper foil, especially the stuff for those who make stained-glass panels and windows. From what you say, your dad probably has some or knows where to get it easily and cheaply.
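
If you want to try the PC audio card route, here’s a minimal Python sketch of the sawtooth sweep generator (assuming the numpy and sounddevice packages; pick your own sweep rate, and remember a sound card’s output is AC coupled, so keep the sweep inside the audio band):

    # Continuous sawtooth on the sound card's line out: it drives the varicap
    # through the series resistor and doubles as the X sweep for the display.
    import numpy as np
    import sounddevice as sd

    FS = 48000        # sample rate in Hz
    SWEEP_HZ = 20     # sweeps per second; slow enough for the detector cap
    t = np.arange(FS) / FS
    saw = 2.0 * (t * SWEEP_HZ % 1.0) - 1.0     # one second of -1..+1 sawtooth
    sd.play(np.tile(saw, 10).astype(np.float32), FS, blocking=True)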

Zonzo May 29, 2014 2:17 AM

@Jacob: the simplest explanation is that the TrueCrypt devs were VERY careful about their anonymity. I’d lay odds that this was compromised by the recent audit process… and that once the government trunkmonkeys knew where and how to find them, the pressure was applied. Speculation, sure. But in a world with secret courts and NSLs… it cannot be ruled out.

Verifier May 29, 2014 2:25 AM

A significant problem is the lack of knowledge regarding the developers. Given the nature of anonymous development, many of us can only assume the TrueCrypt PGP keys are the sole method of identity verification. I can personally attest to having verified the new 7.2 binaries + source code + sigs against an old, verified TC signing key, and they are good. In this light, if the key was indeed stolen, we may have no way of authentically communicating with the developers.

Moving forward, I think it is necessary to take only new source code from the TC developers, review the differences, and compile it ourselves. Trusting signed binaries from 7.2 onwards should be right out.

Furthermore, the new source code should be compiled and then compared to the compiled 7.1a source code.

Also, the 7.2 binaries and the 7.1a-and-earlier binaries should be compared and reverse engineered.
Were they compiled by the developers, with any distinguishing differences?
Was 7.2 compiled with obfuscation?
What do we observe when watching a machine with 7.2 installed (on various OSes)?
—–> What does a transparent Wireshark device show us?
—–> Are there any covert channels / long-term data exfiltration methods it is using?

etc.

Petrobras May 29, 2014 2:47 AM

Here are my checksums of the TrueCrypt sources, just found in my /usr/portage/app-crypt/truecrypt/Manifest, which I attest is six months old:

For additional safety, look for the same file in any Gentoo-based livecd you happen to own.

—–BEGIN PGP SIGNED MESSAGE—–
Hash: SHA256

AUX execstack-fix.diff 1955 SHA256 227c8e0bb04bd5f6915fc2570fbcbf1cca704b4b818bc5de283653197309a5fb SHA512 c64f9255303a521b4e531ebea574befe80a9f193c9aa42fd9cb552e56d087815ca161b50b593e7c3ede10a65c67dc36d0447dbffb0f4d4614f181a95759c2f79 WHIRLPOOL d1bcd4e09d7fccb2b10bdf53e21bff7da8dbce7b361c24ae6e8a9ab3fe772e9896d98068bfc7b6962186c010c956b73aad7da6d62fdc819205f66983888687ec
AUX makefile-archdetect.diff 266 SHA256 aa201bb7c93852c814d71c963b1d416d62aa2d1e685f9f5149b1388dca9ae883 SHA512 68102ff27708df26b1844ed969d98d05f18782164ca349138a304ff69730560c544783c4544067eec90965e41e7fab8bb8eff4885787593840f44f7f0011769d WHIRLPOOL 0d11b8bfd85532067973aa07bd4f70489b3b22f6fb33c9c699b5ef30e504e57e60bbf8820bd414640d6f0943fb83b8528d61757bb77aacd66c4670ef4f3a3f22
AUX truecrypt-7.1a-build.patch 598 SHA256 4c5219697cdb3ebf7e8ca86af6b18f2af191eaaa4f44daa18d849dd9cb0ffa7e SHA512 ade2d8a387e947c72fe1191323628fb3dbcb9b7b07faa51091d9029a966ccc926484dda0ea73766487da9c89fed0d2bc3d07c2213bd5f3df72162baf0be431eb WHIRLPOOL 3fdb3f05e336f8c150261bdfa1461c1ccfa3cad775e1877a0a17501c42d3154b81be4c20f793b4b89c970e54f0e7b3a369a73c0fe61dc563ec52ef31c784f98e
AUX truecrypt-stop.sh 308 SHA256 243a9d1041b291e12ce2065959838f0cfe01484bffac7915991ebeb90d2ccd2c SHA512 d524fd0eb957ce6ea72590b6ddd0501f911e0ec4abdc4a9add34c021b6fa8bd65747c2dd9fd19bea8c093ec0df5d4a418f44c770e1e2dfaab6fede21de9b061d WHIRLPOOL 1829a4ffb28fa127ec5944492205c16c564e1fbb8874bc497e61576edf015c74b4c47f0792e94b6c05c5b251f058879ff76ee5a71572d566b53fd4c165538a7d
AUX truecrypt.init 729 SHA256 6530577c5f86800a7d92a76b927538006a27f57cf517c6f2bdb793cebaa70b59 SHA512 0bb428457c5c5f5a5f5979291edbdba5fea6461e8a8103deab484b52746f677fd6e337650770eff8e52f7656d22436348d3589f60a17d2ec560be38d4b5028e3 WHIRLPOOL 3e3489fdf1cc4101e3ecb4a75f7f50d7d83fb89c43da72dca8daa8f7fd36901faf4ad1494a1e67d17f3d251d1b4e92383acb6a29d286c2ae0fde021c93b82960
DIST truecrypt-7.1a-pkcs11.h 43544 SHA256 662d39cec5a0063c8aacc430d4fcba4b31b80a174f1e824dcc359f1c1420bc2c SHA512 bffce5344383a07c4313c30ee1cb0ed7063a749527521bd964263deb5d951cc181acf9c4386bcd2ad44d40be35b3a08d56b1404730b4994e43760db71649ef3c WHIRLPOOL bc5bfbcf711f8d8d8f13625bd2ea98b195662ca458b9186a6a1a01f51897ba1e9a488b91ddfec97af7fdbb770637db27b5c124ba5fa376a415c512af3c6aac74
DIST truecrypt-7.1a.tar.gz 1949303 SHA256 e6214e911d0bbededba274a2f8f8d7b3f6f6951e20f1c3a598fc7a23af81c8dc SHA512 b5e766023168015cb91bfd85c9e2621055dd98408215e02704775861b5070c5a0234a00c64c1bf7faa34e6d0b51ac71cd36169dd7a6f84d7a34ad0cfa304796a WHIRLPOOL 5e7f4360746a30639aea96eaf4deac268289c111c0efa96f50487527f04064992c26ad4c8ae0fd565d80e77f0ce8add82b03930d877fe5adedc8a733b482fe38
EBUILD truecrypt-7.1a.ebuild 3585 SHA256 5687150463cf78fb9940265bd77d7590ad5d5f0295bc362b8aa23865c65ca62b SHA512 9309df2b10fdfd77c2bcc43edd711f6cb2b468113bfdb7c82d17acac32ca1d8b153b5323ce16646af21f0b4985d7f3287e05e9c78ceeb3a59e8c7f3929a1d1a3 WHIRLPOOL 1de171c79e5df37c3e2009e7158ddb319d38db8883190382f5dd3732cd67340c8e8f169aecaeb81f6ebecf089bc04cb6baddf9614ba5a41388143804ffa10a70
MISC ChangeLog 14610 SHA256 c9dbe9d35daa21f30945514eba4c2eec89b73bf71cc861d399ab22bbc8ec935c SHA512 770ee71fbbcd8311d4a57a20e813118d6a61ee8fac7fed24fc657b13ce69c53fe327d1c7ac0acc98ee541b9f4b3a31111b5feb763a0b3749926dfd6d27993f54 WHIRLPOOL a63dffb44f97178cf994fd98353d3027a875ccf626f17e5a35475f9c90a533c2f7a0a4675018a89fcc7b3d19cb5c4dfdae0ee6eab9f411a5f001c1119fa2fd43
MISC metadata.xml 234 SHA256 4e7347aba3326c96cd540feeedd3052c6b178de7e404f293a98dd8be2b7c2b06 SHA512 04df79eacf39f0c271dbc3045f628c799becedae4a255e03bcd2f1a5b707e78058c4b60e4c62254e6a3a17744f1f8211adbf0083bde88cb1e5457aa2ffe47e4f WHIRLPOOL 5aed2639847f6d36c7ee30d701830de96628d596534ac593175e66954a55c05b3a4d6cc3a4181d7994cfd11fdb869b8ee55850c5a206cfc18bab5b14f03e3bae
—–BEGIN PGP SIGNATURE—–
Version: GnuPG v2.0.22 (GNU/Linux)

iF4EAREIAAYFAlKkz0kACgkQXYk9GL8g3FEulQEAp4fr6E+ATta0/HafmXqpJyOo
GZktcFQAfePORGjUJr4A/33lDaHF40hfiQmVLDMmVz/Puda+OPx0q2suyq+NK1a9
=XWuD
—–END PGP SIGNATURE—–

Clive Robinson May 29, 2014 3:02 AM

@Zonzo, Jacob,

The usual reason most single-handed free software projects stop is a change in paid employment. The second is a change in the developer’s personal life. Then there are other reasons, such as lack of feedback, bad feedback, and people no longer thinking “what’s in it for me”.

There is also the “amateur artist exhibit” effect: it is fairly well known that many good amateur artists stop after their first exhibition. They are not in it to make a living, so the exhibition acts as the transition from aspiring to professional, and they feel they have achieved a pinnacle rather than the first rung on the ladder.

Jacob May 29, 2014 3:16 AM

@Clive,

I would expect someone who has put major effort into writing software pro bono and established a sizable user community, and who then either succumbs to life’s tribulations or loses interest, to announce this in an orderly way, and even to declare his creation full GPL or public domain and let the community, if it so desires, continue the project so that his/her legacy lives on for a long time.

But to skip town in the middle of the night? There must be some other reason.

J May 29, 2014 3:27 AM

A couple more possibilities:
1) The TrueCrypt audit found a bug that compromises all containers and is too big to be patched in a reasonable amount of time; the developers are encouraging people to stop using TrueCrypt before the bug becomes public.

2) TrueCrypt was a secret Microsoft project: they wanted a prototype for their future encryption software but didn’t want their technical support deluged with calls from users playing with it and losing their passwords. Now that XP support has ended, the project has been superseded and the message on the TrueCrypt site is truthful.

I’ll admit the National Security Letter scenario seems the most likely.

Petrobras May 29, 2014 3:44 AM

ftp://ftp.archlinux .org/other/tc/truecrypt-7.1a.tar.gz just served me the file with the right sha512sum and sha256sum.
Another link that google gives me right now:

http://mirror1.ku.ac .th/archlinux/other/tc/truecrypt-7.1a.tar.gz
26 other links that Google also gives me; I compress them with the bash-style notation {choice 1,choice 2} so as not to pollute schneier.com too much:
ftp://{u-text.net,mirrors.nix.org.ua}/linux/archlinux/other/tc/truecrypt-7.1a.tar.gz
ftp://{209.85.41.143,ftp.sjtu.edu.cn/pub/.mirror6/ftp.archlinux.org,ftp.archlinux.org,dion.freedback.com,ftp.csie.chu.edu.tw/ArchLinux,{mirror.calvin.edu,abs.calvin.edu}/ftp2/arch-linux}/other/packages/tc/truecrypt-7.1a.tar.gz
ftp://{114.112.41.90,mirrors1.kernel.org,mirrors2.kernel.org,mirrors3.kernel.org,ftp.br.debian.org,u-text.net/archlinux/other/packages/tc/truecrypt-7.1a.tar.gz,mirrors.nix.org.ua/linux/archlinux/other/packages/tc/truecrypt-7.1a.tar.gz,{mozilla,fedora,mandriva,eclipse,videolan,sagres,mint,ubuntu}.c3sl.ufpr.br}/archlinux/other/packages/tc/truecrypt-7.1a.tar.gz

sha256sum: e6214e911d0bbededba274a2f8f8d7b3f6f6951e20f1c3a598fc7a23af81c8dc; sha512sum b5e766023168015cb91bfd85c9e2621055dd98408215e02704775861b5070c5a0234a00c64c1bf7faa34e6d0b51ac71cd36169dd7a6f84d7a34ad0cfa304796a
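
(If you’d rather script the check than eyeball it, a small Python sketch against the sha256sum above:)

    # Verify a downloaded truecrypt-7.1a.tar.gz against the published digest.
    import hashlib

    EXPECTED = "e6214e911d0bbededba274a2f8f8d7b3f6f6951e20f1c3a598fc7a23af81c8dc"

    h = hashlib.sha256()
    with open("truecrypt-7.1a.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    assert h.hexdigest() == EXPECTED, "checksum MISMATCH - do not use this file"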

Mike the goat (horn equipped) May 29, 2014 5:08 AM

Jacob: exactly. This really smells fishy. More interesting was the rumor several days ago that the TC audit had discovered something “big”. I think this is a bit silly: knowing Green, he would disclose immediately, but responsibly, if anything of the sort were found. My suspicion is that either a dev with access to SF and the keys has gone rogue, /or/ the TC project has had its hands tied by an NSL or similar, and this was a way of basically burning down any trust in the project.

Winter May 29, 2014 5:52 AM

If I could not break TC, I would try to discredit it too.

This is much easier.

Mike the goat May 29, 2014 6:38 AM

Winter: exactly. The problem here is that we are all speculating. I was perhaps just as guilty when I wrote my post on the issue earlier today, concluding that they either received an NSL and decided to burn the house down, or alternately that they were spooked by the recent attention and Green’s ongoing audit.

I think the main thing people need to understand is that there are no guarantees in life on anything, including life itself, and that we are all just playing the numbers. That said, I find it interesting that people truly believed their files were secure when running any FDE product that relies on a closed-source and potentially untrustworthy OS to function.

Nick: yeah, it sure has me scratching my head. Assuming the TC devs did indeed update the site and this isn’t some mass compromise of both code-signing keys and credentials for their SF account (unlikely, but possible), I would think that such a curious statement merits further analysis. It may not be stego, but I certainly think it could have a hidden meaning. As you said, is there a connection between MS and TC? Are they implying that WXP was the last “trustworthy” OS (and then contradicting that by advising users to switch to BitLocker)? It doesn’t make a whole lot of sense. I smell a rat, and I think we certainly haven’t heard the last of this.

Jacob May 29, 2014 7:19 AM

I wonder if anyone would care to explain the following (directed only to Windows people):

Current Situation

  1. People use Truecrypt with a fair level of trust (not absolute trust because of the anonymity of the devs etc. but a fair level)

  2. People don’t trust BitLocker because it is closed source and MS is in cahoots with the Feds. They laugh and see a hidden message in the TC site recommending that they use BitLocker instead.

  3. TrueCrypt runs on Windows. It has no security at all if Windows is subverted.

My Question

  1. Why do people trust TrueCrypt more than BitLocker? BitLocker is made by the same people who made Windows, yet they trust TC on Windows. To stress my point: if BitLocker is unsafe, so is Windows, and so is TrueCrypt (and, by extension, all the security products people create themselves using the .NET Framework crypto assemblies).

  2. Is psychology the only explanation for this incongruent behavior, namely a different level of sensitivity when talking about the security product, while being semi-blind to the security of the supporting OS?

Mike the goat (horn equipped) May 29, 2014 7:32 AM

Jacob: it would be interesting to hear Bruce’s take on this. I know he openly admits his use of Windows is due to familiarity, and he cites the usability-vs-security conundrum, but I don’t think we can ignore the issue any longer. The trust problem we have with our modern PCs isn’t easily remedied by just axing Windows for a FOSS operating system, even if we pretend the OS we install is flawless. We can’t trust the BIOS, the microcode, or the microcontrollers with DMA access, and I am sure this is an incomplete list.

The microcode is a curious one, as it is actually encrypted rather than merely signed, so we can’t even determine what a given microcode blob from Intel is meant to do; we just have to trust their errata and changelogs.

. . . and this is why we are damn well doomed. Unless we – as a community – can solve these issues and address them quickly, the world will begin to awaken and shatter the entirely irrational notion that we can have a “trustworthy” piece of modern hardware. Up until the NSA opened this Pandora’s box we didn’t have the level of public concern we have now. This is worse than the Thompson attack – as we can’t even trust the platform, let alone the compiler.

keiner May 29, 2014 7:44 AM

Is this nightmare ever going to end? Cryptocalypse forever?

Watergate took some years to kill a corrupt government, but now we are running out of time…

yesme May 29, 2014 8:45 AM

@ Mike the Goat

From a security POV, all of today’s systems suck.

The problem is bloat. Bloat on every level. It starts with the CPU code, then the OS itself, programming languages, standards that need libraries (8,000 pages of OOXML), compatibility with everything or it won’t run, optimization, even licenses, etc.

If you want to know what good code looks like, start with Plan 9. That is how C looks when it’s written by pros with good architects. Also look at Oberon.

Here are some numbers:

Plan 9 stdio.h – 4 KB (good code)
OpenBSD stdio.h – 15 KB (average code)
GNU stdio.h – 34 KB (utter crap)

Some other numbers:

The entire Oberon compiler for the Oberon OS: ~3,000 LOC
The GNU compiler, architecture-dependent code per architecture: ~400,000 LOC
Ken Thompson’s Plan 9 C compiler, architecture-dependent code: ~10,000 LOC

Solving the problems at the core really helps. With a good and simple OS, a good programming language, non-determinism, and a significant reduction in the number of standards and hardware platforms, it could all be a lot more secure. But we don’t live in that world. In fact, it’s only getting worse.

Mike the goat (horn equipped) May 29, 2014 9:10 AM

Yesme: precisely! We have major structural deficiencies, and if the NSA debacle has done anything, it has shown us just how broken and untenable the current situation is. My 80386 or vintage SPARC shouldn’t be more trustworthy than a “state of the art” laptop with TPM and UEFI, but it is, and by a mile. A 1970s LISP machine even more so!

We have forgotten just how important a structured, layered approach is to platform security, and have instead adopted a band-aid approach of patching problems as they occur and trusting the code below us.

This is the warning shot we all should have heard years ago. Isn’t it crazy that we got data execution prevention in x86 only about a decade ago, yet there were mainframes with this feature in the 70s? I don’t know what the answer is, or how much of what we already have can be salvaged, but what I do know is that we need a decent base to build on. We don’t have that at the moment, not in our hardware nor in our software. And that is pathetic in 2014.

Nick P May 29, 2014 9:18 AM

@ koita

Look up the NaCl crypto library. They aim for high security, few options, and ease of use. If you want more options, check out the Botan library; I’ve at least heard it has decent docs.
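
To give a feel for how small the NaCl surface is, here’s a sketch using the PyNaCl binding (two calls, no algorithm choices exposed to the caller):

    # NaCl-style "secretbox": one call to encrypt, one to decrypt.
    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    ct = box.encrypt(b"attack at dawn")   # nonce generated and prepended for you
    assert box.decrypt(ct) == b"attack at dawn"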

@ J

“2) TrueCrypt was a secret Microsoft project because they wanted to make a prototype for their future encryption software but didn’t want their technical support deluged with calls about users playing with it and losing their passwords.”

I was just about to post that scenario haha. It was the only thing that made the site’s claims sensible. My version had them being the BitLocker developers as well.

@ yesme

The Oberon compiler is an unfair comparison. Wirth designs his languages specifically to make the compiler easy to write. As in: every program you write might take more lines of code just so his language and compiler stay simple. Although simplifying things is good, many developers believe he applies it at the wrong end; the applications, not the compiler, should be simple to build. Python is an example of a language that optimizes at the other end. The language, runtime, and compilers are much more complex than Wirth’s stuff. Yet they enable people to throw together huge programs with little code, plenty of readability, good performance, and good reliability.

Otherwise, I fully agree with you.

@ Mike the Goat

I smell a rat, too.

  • Nick P (Glock equipped)

Mr. Pragma May 29, 2014 9:30 AM

yesme (May 29, 2014 8:45 AM)

YES! You nailed it.

And let’s face it: Open source is a major offender, too.

That’s why I strongly distrust gnu, raymond (“bazaar”), and generally many, many FOSS projects.

The rule of thumb is: the quality of a project depends vitally on a) the professional capabilities of its core developer(s), b) the existence of a hierarchy and tight control, and c) the size of the core. The more people in the core, the worse the product (committees obviously being a crossbreed between evil and idiocy).

I’m also wondering how come shitbingo2000 (Apache, LibreOffice, …) has a gazillion “developers” while the only OpenSolaris supporting SPARC (OpenSXCE) has one developer, and that developer hardly has enough to eat while the shitbingo guys are throwing parties.

One. Single. Guy. working on Opensolaris Sparc. One. Single. Guy!!!

Go there. Everybody. Right now. Send that guy money. Leave some kind words and a big “thank you”!!! (http://opensxce.org/)

yesme May 29, 2014 9:43 AM

@Nick P

Of course mentioning the Oberon compiler is unfair. But that is not what I meant. When you solve things at the core instead of working around the hot potato (think GNU Autotools), you can do a lot more with less.

Still, we need to drastically reduce the number of standards/protocols and hardware platforms. This is IMO a serious killer. But how on earth is that possible? It is the foundation of the industry, yet at the same time a rope around the neck (camouflaged with a silk scarf). So I ask the question here: am I getting this wrong? And if not, is this problem solvable at all?

Mr. Pragma May 29, 2014 9:51 AM

Nick P (May 29, 2014 9:18 AM)

Sorry, but I feel that you are grossly misrepresenting Prof. Wirth and his work, as well as his credo.

Wirth’s priority is not to keep the compiler simple so as to have an easy life as a compiler builder.

His priority, “Keep it simple!”, is a general one, targeting the language/system users at least as much as the compiler/system builders.

One can discuss the wisdom (or lack thereof) of some of his work, in particular Oberon. For instance, Oberon seems to have learned nothing from Modula-3 (tolerated and authorized by Wirth and built on his work, but not designed/developed by him). That’s a shame. In fact, one might consider Oberon a step back, and/or going too far with “keep it simple”.
One gross example is that Oberon still has no exceptions or any sensible replacement.

Python (which I strongly like myself) is at the other end of the spectrum, yes. But to say that Python is more complex (inside) than Oberon is something that would be hard to show in any unbiased way.

I’d rather assume that van Rossum followed guidelines very, very similar to Wirth (well, to the degree that a very different project allowed for that). In fact, van Rossum also highly values “keep it simple”, both in Pythons surface and inside of it.

Does Wirth deliver? Absolutely. Programming in Oberon is very efficient, and one feels a consistent line enabling beauty (a very important measure of quality).
Again, the problems with Oberon are elsewhere, for instance in Wirth having designed it rather “ignorantly”, creating a rather closed system, and in leaving out important features such as modern error handling, to name two issues.

Again, one can criticize Wirth. But not for what you’ve said. Rather for, somewhat simplifying, Oberon not being a more modern and clean Modula-3.

A side note on Python: I very much regret that Python has become such a large beast (and I strongly suspect this is one major reason for the widespread tendency to include Lua as the scripting engine in many projects).

Buck May 29, 2014 11:13 AM

Oh, so many sockpuppets! This is all starting to smell like a big distraction to me…
Keep your head clear, emotions in check, and a keen eye out for any other suspicious activities.

Nick P May 29, 2014 11:31 AM

@ Mr Pragma

“Wirths priority is not to keep the compiler simple so as to have an easy life as compiler builder.”

No, it actually was his priority. I’m not guessing at it: Wirth stated more than once that his main “heuristic” (his word) for language design was how long it took the compiler to compile itself. If the system compiled too slowly, he believed the language was unnecessarily complex, so he removed features and reorganized. This is why, as you noted, he took useful features out of the Modulas when producing Oberon: language and compiler simplicity was the goal.

That said, he certainly had an overall philosophy of simplicity that made his work higher quality than most. We can certainly imitate that. It’s especially necessary to manage complexity in security work as unnecessary complexity leads to security flaws that would’ve been avoided.

“But to say that Python is more complex (inside) than Oberon is something that would be hard to show in any unbiased way.”

The Oberon report, which includes the full grammar, takes less space than just describing the bytecodes of the Python interpreter. Then there’s Python the language, metaprogramming, the REPL, optional compilers, etc. I can, without bias, say that Python has far more functionality and complexity than almost any language Wirth has designed; it might have more complexity than many of his combined. Yet empirical evidence shows that people learn it quickly, are very productive, and have a low defect rate (source: Coverity). They’re not superprogrammers, so the only explanation is that Python is a well-designed language. Seeing how complex the toolchain is, that says something about simplicity-vs-complexity tradeoffs.

And there’s no bias here as I’m from a BASIC background with some LISP thrown in. If anything, my experience was closer to Wirth languages. Seeing the power of LISP, though, I know how a more complex tool can be better than a simple tool.

” In fact, van Rossum also highly values “keep it simple”, both in Pythons surface and inside of it.”

I agree, and I need you to understand I’m not arguing against simplifying designs. I’m arguing against solely valuing simplicity, as Wirth or Moore (Forth) do. Example: a tagged processor that uses Oberon’s types and enforces code and data separation. Wirth’s processors are typical stack machines without safety features. In my design, I keep each component as simple as I can: a proven RISC core, a small tagging unit, few typing rules, and the Oberon language. Wirth’s design is much simpler.

Yet I choose against simplicity in this case because adding the tagging unit prevents entire classes of problems in all code, without the developer thinking about anything. If I don’t add it (simplicity), then developers have to correctly manage types in every piece of code they write and spot hostile constructions in 3rd-party code. That’s not simple at all. Had I simplified my problem or layer, I’d just have made their layer much more complicated.

See what I mean? Certain simplifying decisions make things better for everyone, and Wirth is excellent at spotting them; we can imitate this. However, many times simplifying one part of a system (or problem) requires complicating another. My philosophy, unlike Wirth’s, is to accept the complexity any time the benefits are worth it. So I’d add tagging to hardware, maybe macros to the language, syntactic sugar, and so on. Each addition makes my job, the hardware and toolchain, harder. Each one carries a performance hit. Yet each change makes developers’ and users’ lives so much better that it’s worth the trouble.
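
(A toy model of the idea in a few lines of Python, mine and not any real ISA: every memory word carries a tag, the “hardware” checks it on every access, and the developer never has to think about it:)

    # Toy tagged memory: words are (tag, value); the fetch/store paths enforce
    # code/data separation so injected data can never be executed.
    CODE, DATA = "code", "data"

    class TaggedMemory:
        def __init__(self):
            self.words = {}                      # address -> (tag, value)

        def store(self, addr, value, tag=DATA):
            old = self.words.get(addr)
            if old is not None and old[0] == CODE:
                raise RuntimeError(f"fault: write to code at {addr:#x}")
            self.words[addr] = (tag, value)

        def fetch_instruction(self, addr):
            tag, value = self.words[addr]
            if tag != CODE:
                raise RuntimeError(f"fault: executing data at {addr:#x}")
            return value

    mem = TaggedMemory()
    mem.store(0x100, "ADD r1, r2", tag=CODE)
    mem.store(0x200, 42)                         # ordinary data store: fine
    try:
        mem.fetch_instruction(0x200)             # classic injected-code case
    except RuntimeError as fault:
        print(fault)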

“A side note on Python: I very much regret that Python has become such a large beast (and I strongly assume this being one major reason for a wide spread tendency to include lua as a scripting engine in many projects).”

Yeah, it is getting huge. I’m not sure how good or bad this is given their community’s needs. I hope they aren’t falling into the trap of bloat. Good news is Stackless Python wrote Python in Python. Should help manage complexity of it very well. 😉 Honestly, though, my first idea of getting C++ out of Python was to rewrite Python in either Ada or a Wirth language. Then, the extensions would be written as modules in same safer language so they’d get type-checked and memory safe where possible.

Benni May 29, 2014 12:17 PM

Now it is official: the German secret service BND taps the communications of 160 countries, including the United States and Great Britain:

A lawyer has sued the BND over its internet surveillance before the federal administrative court. His lawsuit was dismissed because he could not prove that the BND had read his mails. The lawyer will now go to Germany’s highest court, which has previously stated that, because BND surveillance is secret and can affect anyone, you do not need to prove that the BND has opened your mail when you sue the secret service.

However, during the trial at the administrative court, the following funny information came out:

The BND wiretaps the communications of 160 countries, including the United States of America and the United Kingdom.

https://netzpolitik.org/2014/trotz-vorlaeufigem-scheitern-der-klage-in-leipzig-neue-erkenntnisse-bnd-ueberwachte-2010-196-laender-auch-die-usa/

For communications where one party is German they seem to have some statistics, which came to 2.9 million emails for the year 2010.

But how do they determine whether a communication is German?
Well, they say they filter out email addresses containing “.de”, but they still seem to search them for words like “atom” or “bomb”. And then, how do they find out whether the rest of the communication is German and therefore must not be considered?

Well, according to Süddeutsche.de, they must correct their automated systems by “manual intervention”. Which means: BND agents read the mails personally and decide whether they find them interesting.

http://www.sueddeutsche.de/digital/klage-vor-bundesverwaltungsgericht-karlsruhe-soll-bnd-ueberwachung-pruefen-1.1978077

But what about completely foreign communications?

German parliamentarians have asked how many communications the BND receives every day. Answer:

http://www.sueddeutsche.de/digital/prozess-um-bnd-ausspaehpraxis-wer-kontrolliert-die-harpunierer-von-pullach-1.1976366-2

The BND receives so many communications that it cannot compile statistics on them…

Yeah, that sounds typically German. Deutsche Gründlichkeit, German thoroughness, you know.

Of course an intelligence service which taps communications has to investigate EVERY communication for its intelligence value. Otherwise, some BND official might miss the one single bit of information that would have been crucial. That would not be good.

And since there are 193 nations in the world, the BND must collect most communications from at least 163 countries.

I think it is a bit sloppy for a German authority not to collect from 20 countries. So it seems the German thoroughness has been a bit weak here…

Benni May 29, 2014 12:22 PM

No, in the above posting I made a mistake. According to

https://netzpolitik.org/2014/trotz-vorlaeufigem-scheitern-der-klage-in-leipzig-neue-erkenntnisse-bnd-ueberwachte-2010-196-laender-auch-die-usa/

the BND taps the communications of 196 countries.

And according to http://de.wikipedia.org/wiki/Liste_der_Staaten_der_Erde
there are 206 states in the world.

Well, the BND still seems a bit sloppy, since 13 countries are not monitored by the German secret service. How lazy have they become, for a German authority? There should be a parliamentary commission on this, endorsing a program for how to tap the remaining 13 countries. Just for the sake of completeness.

Mr. Pragma May 29, 2014 12:56 PM

Nick P

You are somewhat exaggerating. Yes, Wirth indeed said that compile time for the compiler is a major criterion for him. But he certainly didn’t work on that as a primary goal or else both Modula-2 and Oberon would be quite different.

And I agree with him, because compile time is indeed largely influenced by bloat, bad decisions, and bad algorithms. And yes, avoiding unnecessary features is a good thing to do; just think about “language vs. std. library”.

Let me offer an example (ignorant of Oberon, and unbiased). In C one can have number literals, typically as initialisers, in base 2, 8, 10, and 16. In some languages one can have them in a whole range of bases, typically from 2 to 16.
As a language designer one has to make a (hopefully well-reasoned) decision here, because all of that has to be lexed, parsed, and evaluated. To avoid being unfair to C (which has traditional and once well-founded reasons for octal literals), let’s ask how important and how frequently used that feature is.
But there is more to it. While I think C’s choice is sensible, I can also see that one might ask “what for?” (as in: for a systems language, yes; for an application language, probably not) and how to implement it. This again touches other areas, such as the fact that something in the language proper is rather static while something in a library is far more flexible. Or one might push the whole issue to macros and internally work only on base-16 literals, or …
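
(Python, as it happens, shows the “push it into the library” option: the grammar itself knows only the 0b/0o/0x prefixes, and every other base goes through one library call. A two-line illustration, just mine:)

    print(0b1010, 0o17, 0x1f)              # literal prefixes the parser must know
    print(int("zz", 36), int("1010", 3))   # arbitrary bases 2..36 via the library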

That’s why I (critically) mentioned the lack of exceptions. Exception mechanisms can’t simply be put in a library; they need system (read: language) support. On the other hand, one should note that, for instance, Templ has developed exception support for Oberon using existing Oberon mechanisms, namely metaprogramming. While I agree that exceptions should be rooted profoundly in the language proper, one must also recognize that Wirth must have done things better than some would suggest, if a tricky beast like exceptions could be glued on using the very features of the language (incl. the runtime; that’s where Templ’s very clever trick is).

I personally think this whole Modula/Oberon issue is hard to judge because the contexts are quite different. Wirth’s context was a) teaching and b) having reasonably priced systems available (albeit within a rather “closed” island scenario, but at a time when universities struggled to offer more than dumb terminals to their students), whereas our context here is usage on diverse systems and for diverse scenarios.

As for Oberon vs. Python complexity let’s simply agree to disagree. “complexity” is sth. complex (haha) after all and, more importantly, sth. with many definitions in diverse contexts.

Ad “tagged processor”:

Well, again, the world is (for good or bad) a little more diverse than that. Wirth’s machines, for a start, are the machines that were out there. IIRC his first machine used a 29000 (RISC) processor and his later machine a 32532 (CISC) processor (both sensible choices), and those processors dictated his system designs to a large degree. Stack based? Hell, the vast majority of machines out there are stack based.

Back on planet earth, tagging is one approach to solving the problem you address (a good one, albeit not cheap). But it is also one that can be implemented in the MMU rather than in the processor (core).
Most importantly, though, this has little to do with Oberon. In fact, Oberon being one of the strongly statically typed languages (like Ada and others) makes tagging easier, if so desired.
Pardon me, I value you and much of what you write here, and I certainly expect good design from you, but frankly, some of what you say makes me feel that you are somewhat mixing things up. Automagically enhancing 3rd-party software security with a tagging processor, MMU, or whatever is not something I see happening that easily.
Similarly, I’m not sure that your point of view is really better than Wirth’s, simply because “complexity” must be defined for a context and, also important, can almost always be broken down into less complex pieces.

I’m also somewhat afraid that trying to solve too many problems at once risks tinkering and reasoning eternally. Moreover, many problems aren’t of a technical nature but rather of an emotional, social, or whatever nature, and can hardly be solved by technical means, no matter how well intended one may be.

At the risk of boring you: security begins with a solid understanding of what one wants to protect (and why, and at what cost, and …) and of against what and whom one wants to defend.

Defending against programmer errors, laziness, bad language decisions, doubtful 3rd-party software quality, etc. may have some limited commonality with defending against outright evil-spirited and potentially well-funded attackers, but in the end these are very different scenarios.

Looking at both scenarios, I can’t help but see that complexity is a major factor working against security. Many programmers hardly have a sufficiently profound understanding of what they are doing, most users hardly have a clue (and happily click on “pamela anderson nudes – click here”), and even well-respected security experts like Matt Green occasionally sound off bad advice (“use openssl!”). All of this very strongly suggests that reducing complexity is one of the more promising approaches to more secure systems.

Granted, Wirth has, seen from our perspective, gone too far in terms of simplicity but I also feel that your approach needs quite some more refinement, too (yes, I want to be positive and friendly about and to you 😉

Looking at the amount of work Wirth has done, most of it very, very good or even excellent, I feel humble, and a need to be very careful and to think profoundly before criticising the man. I don’t know about you, but I’ll frankly admit that I’m light years behind what that man has understood, contributed, and given to us. I think we would be well advised not to dismiss it lightly or to think that we know better.

I bet that any upcoming significant improvement in the field of languages and system design will, knowingly and directly or not, be based to a large degree on what Wirth worked out decades ago.
Just the other day I smiled when thinking how elegant (which adds to safety) Ada’s and Active Oberon’s multitasking mechanisms and semantics are.

Buck May 29, 2014 1:38 PM

@Benni

There are probably about 13 countries with no cables to tap… The responsibility would then fall into the lap of whatever host country provides a satellite link (if one even exists). Say… Sark, for example?

DB May 29, 2014 3:58 PM

@yesme

“Still we need to drastically reduce the number of standards / protocols and hardware platforms. This is IMO a serious killer. But how on earth is this possible? It is the foundation of the industry, yet at the same time a rope around the neck (that is camouflaged with a silk scarf). So I ask the question here. Am I getting this wrong? And if not, is this problem solvable at all?”

I strongly disagree with the claim that drastically reducing standards/protocols/platforms is the answer. While thinking about the problems across so many of them seems boggling and hard to wrap one’s head around, and seems to make cleanup all the more difficult, I don’t think you’re properly considering the alternative.

Imagine a world with only one hardware/software platform. First of all, it would have to be government-mandated, because that’s the only way you can restrict things to just one. Then it would, of course, be guaranteed government-backdoored, and much more easily, too, since there is now a single point of failure: one maker to coerce, one place to insert it, etc. Nobody would trust it for those reasons, and you’d have to go to prison or be killed for “treason” for developing an alternative… Typewriters would also have to be outlawed, because they’d be seen as subversive too… Anyway, you can see where this is going.

Instead, what we have now is what naturally happens in a (more) free world (than the one I described above). Everyone sees a different problem to solve, or sees the problem from a different angle, and tries to solve it in a different way. You end up with a plethora of options and ways of doing the same thing. It may seem like everything’s backdoored now, but in fact that’s not the case. Anything COULD be backdoored, and that makes it very scary and hard to trust, but much, much less than 100% of it actually is. What we actually have is much more resilient to subversion or coercion or backdooring precisely because it is so diverse. The diversity itself is actually good for the field overall.

The way to fix what we have is in fact to create MORE diversity, not less. With nothing really truly trustable, we need AT LEAST ONE (or quite possibly more) additional platform(s), built from the ground up using secure and verifiable methods, don't we? May the best one(s) win.

Clive Robinson May 29, 2014 5:00 PM

@DB, @yesme,

If you have three different architectures, you have the basis of providing a secure solution by mitigation.

The simple case is three uncompromised systems all doing the same job: you check the output of all three machines in a voting protocol, and if they differ at some point then you have a problem to investigate. The investigation may show a simple failure, as happens with all devices over time, or it may not. Either way you are aware there is a problem and can take appropriate steps to mitigate issues. This is the "NASA Solution", although it was started by the NYC telephone company based on ideas that go back at least as far as ancient Greece.
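
To make that concrete, here is a minimal sketch of a 2-of-3 voting check in Python; the names and the string outputs are purely illustrative, not any real voter implementation:

    from collections import Counter

    def vote(outputs):
        """outputs: one result from each of the three systems."""
        value, n = Counter(outputs).most_common(1)[0]
        if n == 3:
            return value, "ok"
        if n == 2:
            return value, "minority fault: investigate the odd system out"
        return None, "total disagreement: halt and investigate"

    print(vote(["0xDEAD", "0xDEAD", "0xDEAD"]))  # ('0xDEAD', 'ok')
    print(vote(["0xDEAD", "0xDEAD", "0xBEEF"]))  # flags one system

In practice the check runs continuously over time, so a single transient fault and a persistent compromise look different to the supervisor.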

There are other variations such as two of each type of machine where one machine does a process and one does an inverse of the process. The six system outputs are then compared in various voting circuits.

Thus even if the systems were compromised, they would all have to be compromised in exactly the same way at exactly the same time to avoid revealing the compromise. Further, you can have more complex systems that vote across time in various ways using known tests.

You can thus build an overall system that would be close to impossible to compromise without it being apparent to a watchful supervisor.

Can it be done? Well yes, I've built prototypes. Is it practical? Yes, but in a constrained way. Is it cost effective? That depends on your requirements. Is it going to be a mainstream solution? Not a chance, but then it does not have to be, if used judiciously.

DB May 29, 2014 5:16 PM

@Clive Robinson

I don’t see multiple platforms being compromised the same way as out of the question, especially when they’re all being done by the same very well funded entity. Though it does increase complexity for the attacker exponentially, so maybe we should just be thinking about a lot more than 2, 3, or 4… And it’s certainly leaps and bounds ahead of just staring at source code hoping we “see” something 🙂

Clive Robinson May 29, 2014 6:08 PM

@DB,

I said "three different architectures", not "multiple platforms". Different architectures cannot be "compromised the same way", only in at best similar ways, and similar is a country mile from the same if you are looking the right way.

Let's say the three different architectures are SunOS on SPARC, Windows on x86 and Linux on PowerPC or ARM, with the application written by three different teams. If done correctly you will be able to characterise their behaviour as well as their output to the voting system. An attacker has to come up with three vulnerabilities for three different applications running on three different OSs on three different hardware architectures. Not impossible but quite difficult. They then have to exploit these vulnerabilities simultaneously to remain undetected and beat the running voting system, and that is difficult beyond most people's imaginations.

The only viable way to do it is when the system is not running such as when being maintained, and it would have to be done by someone who was in effect an insider.

But with a little bit of sophistication in the voting unit (which is needed in practice anyway), the vulnerabilities would have to change the characteristics of the three applications while maintaining the timing and other verifiable characteristics exactly as they originally were, and that is close to impossible to do.

Nick P May 29, 2014 7:22 PM

@ Mr Pragma

Compile time is partly influenced by bad language decisions. The other thing it's influenced by is what the language does for you and how the compiler itself is written (eg 1 pass vs multipass). An OCaml programmer wouldn't want to drop to Oberon, as many things that are safe and easy in OCaml would be more difficult and take more code in Oberon. That's because the Gallium team (led by the brilliant Xavier Leroy) sacrificed a certain amount of simplicity to make the language more powerful, leading to more concise and still fairly simple programs. And nice extras like functional programming, DSL's, fast code, etc. Their compiler has decent performance, too.

And my LISP system allowed me to develop at whatever level I wanted to, extend the language with macros, etc, while still compiling extremely fast. Matter of fact, I could compile one function at a time into a live image and even debug a running program. Building a tool like that with native performance isn't simple at all. Yet, the programmer is better off for it, and going back to a mere structured programming language is a pain. One LISP even has strong types. 🙂

So, compile speed isn't a good indicator of whether a language is good. It might have been for procedural, structured programming languages in Wirth's day on old hardware. My theory says there's an optimal tradeoff between simplicity and complexity at each point. The languages that let me get stuff done fastest are more complex than Wirth's most complex language. I put simplifying and enhancing the whole development process above simplifying the language grammar or compiler. So, I think we can make better tradeoffs than Wirth's by putting a bit more power in the language. Just need to cut it off at a point where it starts getting too big.

And ensure it has clean, consistent design. That’s another positive point from Wirth (and others) that greatly aids compilation speed and programmer comprehension.

re processors

Wirth's first machine, Lilith, used a custom CPU: he built a stack machine similar to P-code. Later, he just used COTS processors and ported the System part to them. Most are register-based, including moving function arguments through registers (i.e. register windows). Their ISA's are inherently unsafe, without POLA or the ability to prevent data being treated as code. Wirth, like any other user of these, must somehow make sure his software never does that under any circumstance, even when arbitrary code is loaded via 3rd party software. Let's just say that's hard enough that the first vid I found of the latest Oberon OS (Active Oberon 2 Bluebottle) was someone doing a vanilla exploit on it. 😉

“Most importantly though, this has little to do with Oberon.”

I was attacking Wirth's simplicity-over-everything mantra, not Oberon. I even said my hypothetical CPU was customized for Oberon ("Oberon types"). Wirth's processors are customized for Oberon and typeless, so this was a fair comparison. Wirth's thinking that simplicity trumps all leads him to choose an unsafe architecture as mentioned above, where the programmer at various levels must watch out for every little thing. The language helps, but doesn't eliminate the coding concerns. A processor enforcing type safety for basic types per instruction catches most illegal operations instantly, whether the programmer knew of them or not. Almost every common form of code injection becomes impossible without further effort, with the others constrained a bit. POLA is also easier to achieve, as objects are just user-defined types in Oberon and most tagged architectures support software-defined tags. That's compartmentalization at object level, enforced by hardware, and with no kernel mode there for bypass.
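
As a toy illustration (Python; the tag set and the rules are invented here for the sake of the example, not any real tagged ISA), per-word tags make the checks automatic: an ADD wants two integer words, and a jump refuses any target not tagged as code, which is exactly what shuts down vanilla code injection:

    class TagFault(Exception):
        pass

    def add(a, b):  # words are (tag, value) pairs
        if a[0] != "int" or b[0] != "int":
            raise TagFault("ADD on non-integer word")
        return ("int", a[1] + b[1])

    def jump(target):
        if target[0] != "code":
            raise TagFault("jump into data: injection blocked")
        return target[1]  # address to transfer control to

    print(add(("int", 2), ("int", 3)))  # ('int', 5)
    try:
        jump(("int", 0x41414141))       # attacker-supplied data, wrong tag
    except TagFault as e:
        print(e)

The hardware version does the same test in parallel with the ALU, so the programmer pays nothing for it.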

So, my solution was superior for developer despite adding complexity in form of tag checking. Now, how much harder is such a chip to build? Several academic groups have thrown together working SOC’s with tagging support on a limited budget so it’s probably easy enough. The SAFE team got their CPU, along with new languages & tools, running on FPGA’s in about 2 years. Theirs is the most complex to date out of academia with things like garbage collection support, software-defined types, and HLL system code. That puts an upper bound of 2 years on these things for a good feature set. (Wirth’s initial Lilith board took similar time.) A simple CPU + tagging unit with predefined types would probably be much easier than SAFE, taking months to a year. So, my solution wasn’t simpler: just better for a number of practical reasons.

And Wirth wouldn’t have done something like that because it contradicts his philosophy. And his users would’ve paid the price when the bugs and hackers came a knocking. That’s the problem with relying exclusively on his philosophy, methods, and tools in security engineering. They’re a start to a real solution at best.

"I'm also somewhat afraid that trying to solve too many problems at once risks having one tinkering and reasoning eternally."

A person can take it too far. Yet, there are often many variables to optimize for in a project. What pre-existing I.P. can we integrate to save time with what constraints? What languages are allowed? What external interfaces? What performance? And so on. Once the problem is properly defined, the path to the solution is much easier.

“Granted, Wirth has, seen from our perspective, gone too far in terms of simplicity but I also feel that your approach needs quite some more refinement, too ”

You’re absolutely right. That’s why you see me exploring many things. I’m continually trying to improve and/or refine my approach to solving these problems. I’m running several in parallel on hardware end because the situation justifies it. The attacker’s end has been thoroughly explored, hardware-software security has had few resources, and I don’t want to miss a possibility. Especially while I’m free to make mistakes & changes without affecting a real product in development.

“I feel humble and a need to be very careful and to profoundly think before criticising that man.”

He’s a bright guy. It has no effect on whether or how I’ll criticize a decision of his, though. Scientific method dictates we judge the work, not the guy. I know how Wirth thinks, what his priorities are, what he’s done, and so on. He’s accomplished plenty in that I’ve given him plenty of respect here and promoted some of his work. Yet, given my expertise in security and systems engineering, I can see plenty of potential improvements esp at hardware and language levels. It’s because my goals and requirements are different. In some ways, his work doesn’t meet them. So, I point this out to signal the next group with similar goals as myself to know what to look out for or possible improvements.

Most developers want productivity, maintainability, safety/security, an ecosystem, and performance. An optimal decision that balances these must be made. It’s often not the simplest one because our needs often aren’t simple. 🙁 One can accomplish a baseline of work (maybe) with the simplest option. I’m not aiming for the minimum, though: I want my tools to give me the best results as quickly and reliably as possible. So, people in my boat must make different tradeoffs than a guy whose work is designed to explore simplicity of hardware/software and teach students. 😉

Nick P May 29, 2014 7:38 PM

@ Clive Robinson

I can’t remember if I posted this before. I’ve changed my thinking on our triple redundant voting schemes. I think it’s still good for correctness/reliability. However, for security against sophisticated threats I doubt it. The bug hunters looking at Chrome, for example, are now stringing five to six bugs together to produce a vulnerability and claim it’s easy. Finding the bugs won’t be hard at all even if developers are different. So, this leads me to two lines of thinking:

  1. More focus on obfuscation of how and where code/data runs. ISA, scheduling, registers, stacks, memory locations, control flow targets, etc. Most of this is covered by the software diversity and randomization security research. I won’t re-hash it here.
  2. Ensure the architectures are really different, including security methods and how the software works.

The second one is important because there are many common methods of solving a problem. It's not uncommon for several teams to do something similar, especially if they use the same language. My old scheme called for separate countries, ISA's, OS's, runtimes, languages, etc. This isn't enough. What's needed is three separate schemes that are each good enough to give a serious opponent many headaches.

The simple solution is to start with the high security projects I’ve posted here in the past. One might be a SAFE processor with its tagging. One might be a Java processor. One might be a CHERI processor with control flow protection hardware. One might be a typical processor with ISA obfuscation. And so on. The best solutions to code or fault injection we have. The software running on them, as I pushed previously, would be a simplified and predictable runtime (eg separation kernel, tiny RTOS). This facilitates the timing. One might even improve the voter chip by improving its security where its functionality is immutable (ROM’d) except for the timing data, supplied via trusted port upon start-up.

Such a solution should provide real protection that properly leverages extra nodes. The use of COTS architectures and different teams might just provide an illusion of security against adversaries of increasing talent. I’m not even talking NSA: just people with spare time are getting better and better at bug finding.

re post length

You’re too funny. Yeah, my posts are getting that way, eh? Still, don’t hand the crown off to me yet. You’ve worn it so long removing it from your head might cause a medical complication. 😛

Mr. Pragma May 29, 2014 7:40 PM

Nick P

Well, maybe I should simply have a good look at a language you designed and implemented before I make assumptions.

Would you have a link for me? If at all possible I'd also like to have a look at a couple of projects done in your language.

Thanks in advance.

Nick P May 29, 2014 8:22 PM

@ Mr. Pragma

I'd love to, but can't. I did language design work years ago. I lost it and years' worth of other stuff when three encrypted drives failed in succession. An alarming coincidence that's made me afraid of HD's. The rest was work done under NDA so I can't share it (and frankly don't keep it). I mainly did security engineering so I didn't care to expand on that. A regrettable mistake now.

Anyway, I can trace my experience. I started with BASIC. I later learned Visual BASIC 6 to get a first feel for RAD. A GUI app in minutes? That's great. Extending it required me to learn C & C++ for some legacy applications. That didn't last long. I decided they were so complex, unsafe and time consuming that I'd be better off coding a BASIC to C++ compiler that added checks & did optimizations automatically. That took a day to write in BASIC, although dealing with the C++ compiler's quirks took much more. I could then write readable, safe code that compiled fast and ran fast. And it had perfect compatibility with legacy code written in a garbage language. 🙂 Closest I've been to a Wirth-like language.

Next thing I noticed was plenty of stuff was easy to describe in English, but not in BASIC. It simply took too much code or got incomprehensible quickly (eg nested if's or long case statements). Pascal etc had the same problem. I realized I'd be better off bringing the language up to do more work for me. As my field was automatic programming, I stumbled onto LISP to see a language so powerful & ahead of its time that I could barely wrap my mind around it. So I didn't lol. Its macros did the language extension I wanted, its millisecond (on a Pentium 2) incremental compiles didn't interrupt my mental flow, and it allowed live updates/debugging of the program. That was on a whole different level from Pascal, BASIC, or C/C++. The small amount of it I learned greatly expanded my perspective on what programming could do.

So, my work diverged into two directions. The first step was creating an equivalent version of my BASIC in LISP. So, now I could code it interactively, debug it as such, and type one command to generate efficient machine code. I could also apply macros to extend BASIC's language without modifying my BASIC system. I think I re-implemented that part in LISP anyway to get rid of the separate step. My memory is poor these days so some details are missing. I think I did selective compilation of modules so certain ones could run interactively & some were compiled for high efficiency, then linked back into the running image. I've been doing stuff like that a long time so it's hard to tell where I started with it.
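
A rough Python analogue of that live-image workflow, just to show the shape of it (the module name is made up for the demo; run it from its own directory):

    import importlib
    import pathlib

    # create a module, load it, run it
    pathlib.Path("hot_module.py").write_text("def step():\n    print('v1')\n")
    import hot_module
    hot_module.step()                          # prints: v1

    # "edit" the source, then recompile it into the running process
    pathlib.Path("hot_module.py").write_text("def step():\n    print('v2')\n")
    importlib.reload(hot_module)
    hot_module.step()                          # prints: v2, no restart needed

Real LISP systems go much further (compiling single functions to native code inside the live image), but the no-restart loop is the point.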

The other direction was 4GL’s. Automatic programming research produced this, too. An early one was a CASE tool that took data descriptions, control flow diagrams, etc from a system designer and automatically produced entire applications in COBOL. My mind was brimming. So, I got a hold of 4GL’s and started prototyping them, too. I noticed the best of them identified the functionality a developer used 80+% of the time, integrated it into the language itself, and made its syntax easier. A single line of 4GL language might equal dozens of lines of structured programming with no real drawback in application programming. Only survivor of general-purpose 4GL’s that I’m aware of is WINDEV. Their marketing is over-the-top I will say. 😉

This doesn’t even count languages that I’ve toyed with such as Mercury, Haskell, APL, etc. They had different paradigms that I didn’t spend enough time on to really talk about. Focusing on my strong-suit, hybrid functional and imperative programming style, I’ve experienced a variety of languages and approaches. A BASIC-like 4GL or a professional LISP runs circles around structured programming languages in many areas. Just makes you waste less effort on things machines can do better.

The ideal language would have the best features of LISP environments, 4GL's, and safe/efficient languages such as Wirth's. It doesn't exist yet. My own prototypes created solid production code, while giving me a competitive advantage against C++ and Pascal coders whose tools couldn't keep up with their minds. The recipients never knew how the binaries or C/C++ source was generated either. They were only told I used a "highly structured development process…" run by machines acting on mostly ultra-high-level code. 🙂

DB May 29, 2014 8:58 PM

@Clive Robinson

When I said “multiple [hardware/software] platforms” I meant it as a broader thing that included “multiple [hardware] architectures”…. And when I said “compromised the same way” I meant the same outcome, from the perspective of your testing, not the same literal code running on it per se. Obviously you can just say “you’re not doing it right”… but ok, then you have to be more specific.

An attacker has to come up with three vulnerabilities for three different applications running on three different OSs on three different hardware architectures. Not impossible but quite difficult. They then have to exploit these vulnerabilities simultaneously to remain undetected and beat the running voting system, and that is difficult beyond most people's imaginations.

Agreed with “not impossible but quite difficult”… that is the exact point I was making. But “beyond most people’s imaginations”… wait a minute here. A year ago intercepting EVERY PHONE CALL and tracking the location of EVERY HUMAN that carries a cell phone was beyond most people’s imaginations, and that turned out to be true… It turns out it’s quite amazing what a billion USD a year will do, with lots left over for even worse. We should not underestimate this, when we’re coming up with secure systems.

Does this mean we shouldn’t do what you propose, and cross check things on multiple architectures? No, it just means we should be realistic about what we hope to achieve with it, and don’t deny where our vulnerabilities still are.

Benni May 30, 2014 6:07 AM

@Buck:
That the remaining 10 nations do not have an internet fiber connection is not an excuse for BND not tapping their communications.

As revealed in the book "The NSA Complex" by Spiegel journalists, the BND helps the NSA get access in crisis regions by tapping the communications.

We are an exporting country. We export cars, and we have exported the analysis tools Mira4 and VeraS to NSA because they think that these tools are superior compared to their own XKeyscore.

We have large development funds. It is absolutely no question that the remaining 10 countries need immediate help from our development fund and our network provider T-Systems, to get connected to the internet as fast as possible.

They also need a reliable mobile and GSM network. It should be of top priority that germany's development fund and germany's mobile provider T-Mobile create a stable environment for distributing locating bugs eeeh no I mean mobiles over the remaining nations.

German thoroughness makes it imperative for germany's authorities to get the communications from even the last congo cannibal (no insult against congo cannibals intended). We work precisely here in germany. It was that very german notion of precision which made the STASI in the former GDR possible.

If we wiretap, then we wiretap completely. We have some reputation to lose.

the white van May 30, 2014 6:58 AM

“Is Your Antivirus Tracking You? You’d Be Surprised At What It Sends”
by Chris Hoffman, 28th May, 2014, MakeUseOf.com

######

"Your antivirus software is watching you. A recent study shows that popular antivirus applications like Avast assign your computer a unique identifier and send a list of all web addresses you visit to the manufacturer. If the antivirus finds a suspicious document, it will send the document to the antivirus company. Yes, your antivirus company might have a list of web pages you’ve visited along with your sensitive personal documents!

AV-Comparatives’ Data Transmission Report

We're getting this information from AV-Comparatives' "Data transmission in Internet security products" report, released on May 8, 2014. AV-Comparatives is an antivirus testing and comparison organization.

The study was performed by analyzing antivirus products running in a virtual machine to see what they sent to the antivirus company, reading each antivirus product's end user license agreement (EULA), and sending a detailed questionnaire to each antivirus company so they could explain what their products do…"

######

Rest of article and comments here:
http://www.makeuseof.com/tag/antivirus-tracking-youd-surprised-sends/

.PDF – The Study, dated May 20, 2014:
http://www.av-comparatives.org/wp-content/uploads/2014/04/avc_datasending_2014_en.pdf

.PDF-To-Images Free On-Line Viewer:
http://view.samurajdata.se/

Nick P May 30, 2014 11:54 AM

@ Mike the Goat

re Custom disk encryption boards

Funny you ask about IME’s. There was this proposal from 2009 about inline disk encryption for confidentiality & quick erasure. 😉 Clive and I have fleshed out a number of variations on it here over the years. My prototypes, like most at the time, used VIA Artigo boards due to how cheap they were and their Padlock Engine. Padlock accelerated common algorithms along with an onboard TRNG. That it was x86 meant using it as a coprocessor didn’t require things like changing endianness. Anyway, that work is long gone but certain traits in my posts can be copied.

Let’s start with a variation of my old design that’s not an IME. It’s operating system specific. The idea is to put a secure coprocessor (eg Padlock) on the system with access to main memory and internal memory. Using Truecrypt as a reference, even for actual code, one puts a full disk encryption solution onto the coprocessor. The coprocessor is activated at boot, with option to keep the key or demand password on startup. Coprocessor has a trusted path in form of pinpad and LCD. So, this works just like TrueCrypt except the crypto is isolated and accelerated. The zeroize button can be helpful as well.
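
For flavor, a hedged sketch of the coprocessor's crypto path in Python using the "cryptography" package; the parameters (iteration count, salt handling, sector size, the password literal) are illustrative only, not a vetted design:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    salt = os.urandom(16)                   # stored on the coprocessor
    kdf = PBKDF2HMAC(hashes.SHA256(), length=64, salt=salt, iterations=600_000)
    key = kdf.derive(b"pinpad password")    # 512-bit key for AES-256-XTS

    def crypt_sector(sector_no, data, encrypt=True):
        tweak = sector_no.to_bytes(16, "little")  # sector number as XTS tweak
        c = Cipher(algorithms.AES(key), modes.XTS(tweak))
        op = c.encryptor() if encrypt else c.decryptor()
        return op.update(data) + op.finalize()

    ct = crypt_sector(7, b"\x00" * 512)                      # encrypt sector 7
    assert crypt_sector(7, ct, encrypt=False) == b"\x00" * 512

The point of the design is that none of this runs on the host CPU: the OS only ever sees plaintext sectors and a device that occasionally demands a password.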

Next version is an IME. NSA's was my reference. Another with a picture of the interface. The IME's main benefit is that it's OS-neutral. The lesser known (and really best) benefit is that it takes the OS out of the TCB of the encryption scheme. The drawback is that the entire thing is going to be custom if it's to be secure and meet the performance of today's drives: read speed, write speed, latency, and transparent encryption. The encryption method, if integrity protection is included, will likely have overhead that the IME hides from the OS by simply showing less space available on disk.

Note: They also cost more than software disk protection. Those meeting NSA’s standards, incl. MILSPEC and TEMPEST, are usually around $8,500. Comes down to approx $7,000 if you buy at least 1,000 of them. Hopefully, we can do better if we leave off MILSPEC, TEMPEST, and a defense contractor sized profit margin. 😉

The design for this starts as a board with two connectors, a main CPU for disk driver + mgmt, ROM, RAM, a serial port for trusted path, a simple device that plugs into it, and an FPGA to implement both high-speed I/O mgmt & encryption. The main CPU directs the FPGA. Trusted path is used for authentication and status. Once key is generated (eg from password), it goes into the FPGA. Any memory involved in setting up the crypto is wiped after initialization. This allows total erasure of all data on drive simply by issuing a command to FPGA to kill the key. CPU can range from a low-power RISC processor to a high security processor depending on needs. A 3-Series Xilinx FPGA should be used for prototyping to enforce low costs, while an anti-fuse FPGA should be used for production version.
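
The kill-the-key property can be modeled in a few lines (a behavioral sketch only; real hardware zeroization has electrical requirements that Python cannot honestly emulate):

    class KeyRegister:
        """Models the FPGA key register and its zeroize command."""
        def __init__(self):
            self._key = None

        def load(self, key: bytes):
            self._key = bytearray(key)

        def kill(self):                      # the "total erasure" command
            if self._key is not None:
                for i in range(len(self._key)):
                    self._key[i] = 0         # overwrite in place
                self._key = None

        def use(self) -> bytes:
            if self._key is None:
                raise RuntimeError("key zeroized: disk is cryptographically erased")
            return bytes(self._key)

    reg = KeyRegister()
    reg.load(b"\x11" * 64)
    reg.kill()
    # reg.use() now raises: without the key, the ciphertext on disk is noise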

A few quick additions. The FPGA should be set up as a pipeline with separate I.P. cores doing data encryption, data moving, etc. This way it happens in parallel and nothing is ever waiting too long. Ensures throughput. Also, there should be a command that causes the IME to release an encrypted version of the master key for the file system. Backups can also be done to other media, combined with secure backup technology, just in case. I mean, to the OS it's all plain text anyway so that shouldn't be a problem. I just can't overemphasize the importance of the phrases "make backups" and "media that's not hard disks." I've lost too much work to want someone to experience the same. And the first batch of these things will not be reliable. 😉
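
That "release an encrypted master key" command maps naturally onto standard AES key wrap (RFC 3394); a sketch, again with the "cryptography" package and made-up key material:

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    master_key = os.urandom(64)   # the drive's data-encryption key
    backup_kek = os.urandom(32)   # key-encryption key kept off-device

    blob = aes_key_wrap(backup_kek, master_key)            # safe to store with backups
    assert aes_key_unwrap(backup_kek, blob) == master_key  # the recovery path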

A few boards here with small size, plenty I/O and a huge FPGA.

The third evolution of it was a drive enclosure. I mean, the IME probably would've been a drive enclosure anyway. I'm talking more along the lines of being able to swap disks out of it easily. This led to the idea of a secure, self-encrypting, external hard disk. The enclosure would contain the IME. The use of an external enclosure also gave the opportunity to put the trusted path on the enclosure itself. One example. I think it can be improved plenty.

A fourth variation I just came up with is for the backups. Let’s say a series of these things is in use: main system drive, external drives for extra storage, and backups done onto DVD-R’s. One can potentially make an embedded system that receives the data, encrypts it, and burns it onto a number of DVD-R’s. It can reconstitute them later. I’m not sure how necessary this one is. I just recall doing it manually for hundreds of DVD’s and it was a P.I.T.A. Like the other systems, the master keys should be blackened and exported so another device can recover the data upon failure.

Skeptical May 30, 2014 12:54 PM

@Nick P: So, I’ll compromise and so will many Americans. “Secure us against everyone else and safeguard us a bit against US govt,” we might say. Purists will argue but lawful intercept is the law. Noncompliant services will be shutdown. Additionally, TLA’s are already accessing many machines and accounts out there of people wanting privacy. I think it would be better to get something secure with a highly-assured, auditable backdoor than something full of backdoors & weaknesses one doesn’t know about with only assurance being they’ll be exploited.

I missed this earlier. Something along these lines would strike the right balance (I think at the moment anyway).

As concerns about PRC commercial espionage have (finally) begun to motivate substantial US Government action on the issue, proposals like the above stand to gain much more traction.

With respect to the NSA, they’re unlikely to have objections to the implementation of such a system (meant broadly) within the US (if it were possible to limit it in that way).

Internationally, I think it’s a different matter, and there the issues are (in my view) exceptionally difficult.

@Clive: re PRC restrictions/dropping of US company services:

Perhaps they’re considering replacing them with someone less likely to engage in espionage.

Anyway, more seriously, this kind of response from the PRC is expected of course. It is the fact that such a response is expected that makes the US policy shift particularly rich from an information vantage. I strongly suspect that there is a consensus within key parts of the US Government and key US industry groups that the expected damage of continued PRC commercial espionage is greater than that of what may be a series of escalations on the economic and legal front.

The best end-state here would be for the PRC to cease conducting commercial espionage. As we move to higher levels on the economic escalation ladder, the damage the PRC inflicts on itself becomes much greater than that likely to be suffered by the US.

If the PRC wishes to maintain its levels of commercial espionage operations, then its best hope is to attempt to influence US lobbying groups by reducing US business opportunities in China. The plan would be for those groups to then lobby the US Government against escalation and to maintain the status quo. Predictably, that’s what the PRC is doing.

But it’s a short-term strategy. Expectations in the US business community will adjust, and US business views on the PRC have been dimming by the quarter as absurdly flagrant theft of IP has continued and as PRC restrictions on foreign investment have continued. The US business community fully understands the importance of reducing commercial espionage, and is very unlikely to change its stance. So I would not expect the short-term strategy to be successful; and in the medium to long term, the US has clear escalation dominance.

I would expect the US to give the PRC time to think things through, and to realize that the US is playing the long game. If the PRC fails to adjust to fair rules of competition, then I would fully expect to see US escalation on legal and trade fronts (very likely as part of a multilateral effort).

Benni May 30, 2014 2:29 PM

BND news: The german secret service wants permission and money (200 million euros) from the german government to sniff up all facebook and twitter posts in real time:

http://www.heise.de/newsticker/meldung/BND-will-Echtzeit-Ueberwachung-sozialer-Netzwerke-2212289.html

http://www.sueddeutsche.de/digital/auslandsgeheimdienst-bnd-will-soziale-netzwerke-live-ausforschen-1.1979677

And it wants a permission to hoard security vulnerabilities secretly so that BND can attack if necessary.

Furthermore, it not only wants to be allowed to analyze the content of the communication. It also wants to be able to store metadata indefinitely.

Finally, they ask for equipment that allows the detection of rocket tests, and they want to invest heavily in biometrics in order to identify their targets faster.

Why does BND want this? Well, BND says that foreign intelligence services do this, and they want to remain competitive. So if the german government does not grant the BND the money for that, then the BND fears falling behind the NSA and GCHQ…..

Mr. Pragma May 30, 2014 2:47 PM

There are some news. And I’m frankly bewildered.

The linucks foundation announced that they (? possibly through “cii” (see below)) will provide 2 developers and some advisors (incl. Bruce Schneier) to openssl.

Also, under the umbrella of the linucks foundation a "Core Infrastructure Initiative" is/has been/shall be (?) created by a conglomerate of large us corporations, most of them of doubtful trustworthiness. But, well, that's where the millions of $$ come from that cii/lf will spend on "security" (or so it's said).

Among the projects to be undertaken by cii, yet another entity, the "Open Crypto Audit Project", is going to analyze openssl.

Obviously this is not only about technology and security.

While the cii is basically a conglomerate of mostly us corporations (and certainly not acting in their interests …), ocap on the other hand is obviously driven by political and other agendas, too, as the list of people involved there clearly illustrates.

Sure, most of them are big names like eben moglen (again …) and some of them (like "our" Bruce Schneier or J.P. Aumasson) are competent, but there are also lots of others, some even recognizable as being politically driven, on board. At first glance I'm under the impression that there are more lawyers and propagandists/evangelists involved than crypto or software engineers.

It also strikes me as, uhm, funny (I'm trying to stay nice) that matt green seems to be deeply involved in ocap.
Now, of course, we are all human and we all make errors, granted. But it makes me wonder on what basis m. green just recently advised using openssl, yet only now sets out to seriously look at it. Is this the phenomenon of an, uhm, expert confusing evangelizing the OS credo with proper conduct for a scientist?

Frankly, my first impression of that "concerted action" with lots of lawyers and large corporations involved is that some usa corporations who feel the backlash from snowden in terms of lost revenue, and some, shall we say "(pseudo)technopoliticians" from and around the linucks foundation, felt an urgent need to try to win back trust. Being what they are and ticking the way they tick, they decided, of course, on a strategy based more on evangelizing and lawyers than on relevant expertise.

I’m wondering whether adding names like Bruce Schneier to that “effort” is winning back trust – or – whether it leads to B. Schneier (and Aumasson) losing trust.

Compare this with libressl from openbsd.

Sure, the Openbsd guys can look or even be somewhat grumpy and stubborn. And yes, statements à la “libressl is built for openbsd” may sound somewhat unfriendly but, come on, porting sth. from openbsd to linucks is certainly feasible in modest time and with modest efforts.

Yet, while they have actually worked on solving the problem, their request for financial help was turned down so far.

Instead the linucks foundation camp comes up with … corporations, politics and PR, (pardon me) propaganda, and … uhm … 2 developers. And, of course, millions of us$$.
(I generously left out the planned audit because that would certainly bring up the question why the oh, so mighty good and important and blabla linucks foundation didn’t audit the OS crown jewels earlier).

Honi soit qui mal y pense …

yesme May 30, 2014 3:55 PM

@ Mr. Pragma

You are quite right. The “linucks initiative” is PR from head to toe. Besides that, it’s a joke.

Facebook paying 19 billion dollars (yes, with a "b") for WhatsApp, that's not a joke.

But trying to "secure" our systems with a whopping "couple of million dollars" over a 3-year period, that is beyond pathetic.

As I said before, the problems are so fundamental that IMO equally fundamental changes are the only answer. You can only do that with serious architectural changes, a break with backwards compatibility, getting rid of bloat and, let's face it, a different and safe programming language.

To me, unless "the industry" gets a multi-billion budget (yes, again with a "b") and at least a decade of attention, this is a show.

Mr. Pragma May 30, 2014 5:12 PM

yesme

I’m afraid it’s worse.

If someone confronted me saying that he thinks linucks is effectively owned by us-american corporate and agency interests I’d have a hard time defending linucks, fsf, lf, cii, etc etc.

Even worse though, seeing people like Bruce Schneier — the man who understood and clearly said that it’s to a large degree a question of TRUST — support those windy and not at all trustworthy “initiatives”, makes one seriously (re-)consider questions about integrity and trust.

People like Schneier, or Aumasson should understand that they should stay outside of the evangelical, initiative, and PR circus. Dancing with fsf, lf, cii — and such with political and corporate — interests casts very ugly shadows on their trustworthiness.

People like moglen may dance with pretty much everyone. They are talking heads and lobbyists and pretty much everyone with a brain understands that. One may even consider that line of propaganda, political, and legal work as important for the foss cause and worthwhile.

People like Schneier, however, must be trustworthy authorities; they must be above and away from the lobby and corporate circus.

Prof. Bernstein seems to have understood that. He may speak for or against something but it’s obvious that he always speaks HIS mind — and that creates trust.
One can argue with him or like his views but, no matter what, one knows that he speaks HIS mind and not the interests of others.

Being cozy with people like moglen and all those initiatives and interests they stand and work for, badly taints trust. Simple as that.

Frankly, I’m not sure that in the end Schneier will have added trust to those shady groups – or if he will see people having lost trust in him.

The fact that he supports an "initiative" (a term typically used in political contexts) auditing and quite probably then repairing openssl, i.e. a mindless, inefficient, and obviously propaganda-driven campaign, while his very "partners" bluntly ignore a request for help from people who really and actually work on a solution, casts a grave and ugly shadow, also in terms of "trust".

yesme May 30, 2014 6:18 PM

@ Mr. Pragma

I am not a conspiracy theory guy. In my post about TrueCrypt (and the corrections) I clearly made sure that that’s not my thing.

I am however a technical guy. OK, I am not qualified to talk about the stuff that Clive Robinson and Nick P talk about, but I am convinced that today's stuff is plain wrong.

Talking about the legal stuff, the license crap: I think the beerware license is way better than (L/A)GPL. But I think the ISC (simplified BSD) license is the way to go. Anyone who can read English can understand the ISC license; it is simple, has no political meanings, and frankly it says: "Do whatever you want as long as you don't sue us".

Trust however, that’s a different topic.

C.A.R. Hoare once said:

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature."

With that in mind (and repeating myself), look at the sizes of stdio.h:

Plan 9 stdio.h – 4 KB (good code)
OpenBSD stdio.h – 15 KB (average code)
GNU stdio.h – 34 KB (utter crap)

Keep in mind that these are the fundamental building blocks of our systems.

Now, if "the industry" thinks that they can restore trust by solving the utter crap that has been built up over 30 years with a couple of million dollars in 3 years, it's even beyond laughable.

That is what's wrong with the US IT industry. They think "we" are dumb persons they can screw at any time like they did in the '90s. But it is 2014. We are educated today. (thank Snowden a million times)

This attempt of "the industry" (do we have a word for that?) shows me that they still prefer grabbing money from us over being willing to solve the fundamental issues. If they want to regain trust, I think they have to seriously think again. Fuck their evangelists, lobbyists, lawyers etc. Better to come up with a serious plan.

Mr. Pragma May 30, 2014 7:29 PM

yesme

I'm also not into conspiracies, and so I simply laughed off the large number of "Gosh, what's behind that truecrypt site thing?" posts. If anything I'd assume some interest group behind that, blowing it up and asking senseless questions.

Your assumed lack of qualification (re Clive and Nick) is something I personally wouldn’t agree with but well, that’s not the issue here anyway.

Regarding the license issues some further thinking and discussing might be appropriate. While I detest the gpl (and the fsf and all the fervent talibanistic attitude) I do see that the BSD type of licenses is not the shoe to fit every foot. There are cases where an author wants and justifiably should have some control over his code, albeit not in the completely overblown and strongly politicized way of the gpl.

Hoare, whom you quoted, is always worth listening to, and his thoughts and advice should be given plenty of attention, agreed. While I personally wouldn't have chosen stdio.h as the measure, I do get your line and fully agree with it.

As for the industry, frankly, I don’t care batshit. If they gave reason to mistrust them or if they, even worse, colluded with the us agencies, they deserve every cactus ending up in their a** now. Of course they try to limit the damage and, most of them being us-american and ultra-capitalistic, they of course seek their luck in propaganda and PR.

I even don’t care about fsf, linucks foundation, and the like. Of course the hyenas and dogs just had to discover foss as a feeding place and of course the political interests, the industry, and all the foundations and initiatives had to bend, abuse, and rape foss. And of course people like moglen are active in that and right in the center.

But people like Schneier or Aumasson should keep a healthy distance from those creatures and vultures. This is of particularly high importance now, as so much trust has been lost, broken, and spit on. Seeing Bruce Schneier joining the dance, seeing people like him losing that distance and allowing themselves to be perceived as colluding with them, hurts and creates damage. A week ago I would have used and recommended to others any of Schneier's work without reservation. Not any more. If you sleep with dogs you'll wake up with fleas.

As for “we are educated today” I disagree. You and some here are indeed. But not all the Joe and Jane Smiths out there. The industry and the politicians don’t try the propaganda/PR approach without reason; they use it because it works.
And that's OK with me, tough as it sounds, because in the end that approach works for a simple reason: it works because it's the food Joe and Jane Smith are used to and want.

The problem is that we also need real security. And in that regard everyone, incl. Schneier, who dances with the propaganda people creates real and serious damage. Because they are working at re-establishing the security theater by creating the impression "Now the heavy weights take care of the problem" and "Soon, openssl will be secure again!". That's bad, that's dangerous, not simply because it's a theater serving us-american political and business interests rather than security, but because it also supports the line that C isn't a problem, nor windows, nor linucks (messiah and opium for the tech-masses). It's basically the message that hundreds of millions of lines of source code with thousands upon thousands of very grave problems and vulnerabilities are not the problem; just use the funky repaired, good-as-new openssl (now with 20% more security in it! approved by Schneier, the security guru) and everything's gonna be fine.

Clive Robinson May 30, 2014 9:06 PM

@Nick P, @Mike the Goat,

My vote would be on an external device, such that the keymat never enters the PC and its very vulnerable OS and apps.

Further, due to physical issues and Evil Maids, I don't think the keymat should be in any way stored in the IME when the power etc is off, because storing it there still allows for Evil Maid attacks on the IME.

Thus I think the IME should use the equivalent of crypto ignition keys based on say smart cards, that also require the user to type in a reasonable size pin on the IME keypad.

Also I would give consideration to storing FPGA crypto setup on the smart card such that crypto other than AES can be used as and when required without having to make new IME units.

Clive Robinson May 30, 2014 10:38 PM

@Skeptical,

Anyway, more seriously, this kind of response from the PRC is expected of course. It is the fact that such a response is expected that makes the US policy shift particularly rich from an information vantage.

Firstly it's not a "response from the PRC", that's just the press spin on it. The PRC policy predates the US legal silliness.

As for playing the long game, US politicos and business execs don't generally think more than a few months into the future and have not done so for the better part of a century, whereas the Chinese historically think in terms of lifetimes.

If I was a betting person –and I'm not unless I know the result in advance– I would not be backing the US on this minor side game, because the US had lost it long prior to the Ed Snowden revelations anyway. It's been known for over five years that China and other major foreign powers were all over other nations' IP, and at least one (France) openly admitted it over a quarter of a century ago.

US businesses however, knowing full well their IP was in significant danger, quite happily shipped their IP into China to get access to either cheap labour or cheap manufacturing resources, and in some cases (rare earth metals) because China held a trump hand.

The C-level execs in US corporations have a life expectancy based on the next two quarters' figures; the PRC know this and, more importantly, they understand it and know how to exploit it. The root cause of this is "share price": shareholders generally have little or no regard for the security of a company's IP, just any effect it has in the very short term on the share price. Thus spending money on real security is generally regarded as a bad thing, as it's a significant cost that reduces short-term profits, whilst shipping the IP to China to reduce costs and increase short-term profits is a good thing…

That's the law of the jungle for US and other Western markets; until that changes China will have the better hand at the table.

Further, the US does itself no favours with trade agreements; the TPP showed the real shenanigans that go on in the secrecy that the USTR calls "transparency". Remember this is where the US actually admits to "economic espionage", because it has been caught out before by, amongst others, Japan.

Paul Krugman –the Nobel Prize-winning economist– has said of the TPP "…there isn't a compelling case for this deal from either a national or global point of view" and "Nor does there seem to be anything like a political consensus in favour, abroad or at home", and his conclusion: "I'll be undismayed and even a bit relieved if the TPP just fades away."

Worse, US congressmen who have to give the final approval on the TPP are complaining that the USTR is withholding information from them whilst engaging in talks etc with US industry and trade associations who have poor reputations when it comes to the likes of the environment and IP.

Originally the US was not to be part of the TPP, but various members of the Bush administration applied muscle, which the Obama administration has continued. Tellingly, both China and India, representing a significant percentage of the world population, have decided currently to stay out (probably Phoenix Effect), and either of them could break the TPP apart if they chose to.

Further telling is what is currently going on in the South China Sea: Obama goes over to talk up the US, and China in response makes a very very provocative move that is very likely to cause bloodshed if not military conflict in the region. Presumably because they regard Obama and the US as weak and unlikely to do anything other than vacillate.

Nick P May 30, 2014 11:18 PM

@ Mr Pragma

“Prof. Bernstein seems to have understood that. He may speak for or against something but it’s obvious that he always speaks HIS mind — and that creates trust.
One can argue with him or like his views but, no matter what, one knows that he speaks HIS mind and not the interests of others.”

That's a very good point you bring up. I agree on Bernstein. This is also my style, although I'm more for compromise on some issues just to get something done. (Sensible compromises, not BS defeating the whole purpose.) I've always believed that I should be able to build a certain amount of trustworthiness if I continue to be myself, stay on my principles, and promote what I believe regardless of its popularity. I've pissed plenty of people off and missed plenty of financial opportunities because I stick with my principles. If I didn't think it was secure, I wouldn't put my name on it or even participate in it when security was a stated goal.

My way of judging people, for performance or trustworthiness, is always by character. I look at statements, actions, and consistency. I expect a certain amount of failure or BS. "To err is human." Overall, though, the most trustworthy person is he or she who consistently acts over time in accordance with their principles. The principles might be good or evil, but so long as action matches words, one can trust future behavior to be what's expected. Such people, with good principles I'll add, are the best people to have on security projects like those proposed here. It reduces the odds of the worst risk of all: trusted insiders being anything but trustworthy.

@ Clive Robinson

I was considering making the device trusted for key material for practical reasons. Usability, mainly. I agree that it isn’t ideal. Far as smart cards, that’s a decent idea. You already know I like leveraging highly assured tech, even smart cards, where possible. Remember my “high assurance authentication server” that was just a pile of smart cards and a load balancer in a box? 😉

Nick P May 30, 2014 11:47 PM

@ Skeptical

Glad you liked it. Approval from someone with your views provides hope that the agencies might like it as well. Quite a few of us have tried to compromise. I assure you they won't stick with it unless it's secret and facilitates their espionage. The problem it would face internationally is a good catch, as that's quite a problem. The cost of high assurance is large enough that one needs international sales to recover the costs. Most developers wanted to sell to allies (eg NATO and more harmless neutrals). That was blocked by the US govt (via NSA influence) for most of INFOSEC history, so few will gamble on it.

The mostly local market that consistently buys this stuff is so niche that even an established player such as BAE's XTS-400 is only operational at around 200 sites. International sales are necessary to get a market large enough. I always thought it was funny that they didn't want me selling high assurance terminals and firewalls to countries they sold fighter jets, bombs, sniper rifles, and hacking kits to. The funnier part is that Europe knows how to build high assurance stuff as well as we do. Nothing needed is outside the public record. Same lack of willpower or market support as here. The US agencies just want so much to spy on allies too that they will not compromise on this issue.

And so high assurance is still a charity in the general market if the company is located in the US.

name.withheld.for.obvious.reasons May 31, 2014 9:10 AM

@ Nick P

a simple device that plugs into it, and an FPGA to implement both high-speed I/O mgmt & encryption. The main CPU directs the FPGA. Trusted path is used for authentication and status. Once key is generated (eg from password), it goes into the FPGA. Any memory involved in setting up the crypto is wiped after initialization. This allows total erasure of all data on drive simply by issuing a command to FPGA to kill the key. CPU can range from a low-power RISC processor to a high security processor depending on needs. A 3-Series Xilinx FPGA

I’d prefer a Spartan V–the Virtex class devices are great for bus interfaces. BUT…

The toolchain in this development environment represents a risk that I would be unwilling to entertain, as when implementers retain the write pins and core bootstrap encryption using less entropic means than may be needed. A device the Navy at one time used for SSO applications on shipboard systems had me running for the door… My preference is to dedicate the platform: a STRUCTURED ASIC represents the minimum when specifying an on-board cryptographic engine, for whatever reason. One benefit is the preservation of evidence in subversion, and I have always tried to provide asymmetric multi-die components (different fabs are used to distribute an asymptotic "behavioral" profile). This allows one to do verification tests that can be trusted but cannot be qualified as "complete", instead of cryptographic keymat/hashes and encrypt/decrypt cycle tests (I'll skip entropy and TRNG for now).

Functional verification that can map physical characteristics can be very useful. Masks and slices can be used to provide very tight designs; it also requires that the die geometry is well matched in three dimensions (think of it as a version of elliptical curves expressed in the geometries of the hardware fab itself). Back in 2004 DARPA had a fab-subversion contest that was well covered by IEEE. The results were astonishing; some real artisans showed a level of mastery that makes one appreciate the task. Everything else: once electron isospin and quantum chains begin to grow, runtime analysis becomes interesting.

Nick P May 31, 2014 10:56 AM

@ name.withheld

I’m flexible about which FPGA. I just mentioned 3-series because development kits start at $150. Far as toolchain, there is at least one open source toolchain that could be targeted to FPGA of choice. There’s also quite a few academic ones focused on verification, performance, etc which might be licensed and targeted to it.

I totally forgot about Structured ASIC's. Thanks for reminding me about them! They were a clever creation. The reason I'm mentioning FPGA's is that ASIC's are just too expensive. The initial solution will probably be done by volunteers or pros with small sponsorship. I'm not sure what a Structured ASIC costs but I figure it isn't cheap either.

The verification and testing points you mentioned were interesting, too.

Nick P May 31, 2014 11:32 AM

For anyone following the hardware discussions, here's a great article on Structured ASIC tech that compares those vendors among one another and S-ASIC's against other technologies.

http://www.soccentral.com/results.asp?CategoryID=488&EntryID=19387

I found it enlightening. I guess Structured ASIC might be my new end goal seeing that most of my abstract designs fit into FPGA’s. FPGA’s for prototyping or limited release production, Structured ASIC’s for large production at lower cost/complexity.

Clive Robinson May 31, 2014 12:01 PM

@Nick P,

Structured ASICs were a bad idea originally: companies sank large amounts of cash into them, but customers were few, and then as FPGAs got better customers went down that route instead.

Companies are now closing down their structured ASIC offerings or killing them completely.

If you want to know the reasons have a read of,

http://chipdesignmag.com/display.php?articleId=386

Moderator May 31, 2014 12:22 PM

Figureitout, this is not the place to talk about your sex life. If you post anything like the comment I just deleted again, I will ban you immediately.

Furthermore, you’ve used a lot of space in Bruce’s comments to talk about your problems with the unknown agents that you believe persecute you. I am sorry but this is just not the place for that kind of extended personal account. I really think the best way for you to work through it would be with a therapist, since regardless of exactly what is going on — and I’m not in a position to know that — it’s obvious you are in a lot of pain. I realize you have reasons to disagree with this advice, and are unlikely to take it, and I’m sorry for that. But even so, this blog can’t be your diary. I’m going to have to insist you not discuss that subject here again.

Mr. Pragma May 31, 2014 1:21 PM

I agree with Clive (re structured ASICs).

To better understand complex issues it’s often useful to look at it as a scenario of the basic red line and the steps taken along that line.

Actually I see two red lines; one can be described as MindToMetal or idea-to-physical-reality, the other might be subsumed as "digital revolution" (as opposed to the formerly largely analog world), the first being the more important one.

The MindToMetal line is obvious. It’s about ever more direct and simple (which also means omnipresent, widely available) path from idea to physical reality.

Usually the driving factors (for the implementation steps) were/are size (smaller is better), performance, power (as in “energy consumption”) and price.

Transistors were faster, smaller, and used less power than tubes. And they were cheaper to produce, albeit with a hook: the NRE to produce them at all was way higher than for tube production; once humming away, however, the factories could spit out very high quantities at rather low cost.
In a way this is repeated with boards vs. FPGAs/ASICs. It’s relatively cheap to get board production up and running, as opposed to the immense barriers before a semiconductor fab is built, equipped, and in production mode. Once it is, however, far more performant, more power-efficient, and usually cheaper circuits become available.

In the end, the real gazillion-dollar race is about offering the shortest-distance, cheapest, simplest IdeaToMetal. The decisive hurdle is at the fabs; the production processes, although ever better understood and managed, are still comparatively primitive (compared with the rest of the digital-age players).

The final goal is evident: a highly versatile and flexible production process allowing customers to cheaply (both for the factory and for the customer) produce even relatively small batches (say, in the hundreds) of semiconductors using a standardized customer interface.
So a customer could design a chip using a library of building blocks and feed a “chip PDF” (e.g. VHDL) to the fab/semiconductor company, quite similar to sending a PDF/PS to a print shop today and picking up 1,000 brochures or whatever two days later.
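
A toy sketch of that idea (purely illustrative: the cell names and the netlist format here are invented, not any real fab interface): a customer-side library of building blocks from which a design is composed and flattened into the “chip PDF” that would be shipped off.

    # Hypothetical illustration of the building-block idea: compose a design
    # from a tiny standard-cell library and emit a flat netlist for the fab.
    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        kind: str            # library cell, e.g. "XOR2", "AND2", "DFF"
        inputs: list
        output: str

    @dataclass
    class Design:
        name: str
        cells: list = field(default_factory=list)

        def add(self, kind, inputs, output):
            self.cells.append(Cell(kind, inputs, output))

        def netlist(self) -> str:
            lines = [f"DESIGN {self.name}"]
            lines += [f"  {c.kind}({', '.join(c.inputs)}) -> {c.output}"
                      for c in self.cells]
            return "\n".join(lines)

    # A half adder built from two library cells.
    d = Design("half_adder")
    d.add("XOR2", ["a", "b"], "sum")
    d.add("AND2", ["a", "b"], "carry")
    print(d.netlist())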

Most of what is needed is actually available. The barrier is in the fabs. It’s still far too much, too complex, and too expensive work to produce individualized semiconductors, so minimum lot sizes must be in the tens of thousands of wafers.
That’s basically the race: to reach the point where one-wafer jobs are reasonably (and reasonably priced) feasible, or even common.

There are side races, too, like the one for omnipresence (as in “fab facilities are everywhere”), which are closely related to the main race. Once the main barrier is broken, the side races will mostly end, too.

Actually, FPGAs are a quite congruent abstraction of that problem area. They are, in a way, the currently available compromise. ASICs, on the other hand, are more of a quantitative compromise and strategically less important, although of course they are important steps in terms of feasibility and obtainability. That being so, it seems only consistent that ASICs should stick to the gate-array model and aim at low cost, high performance, and low power needs. In particular, they should concentrate on the “red line” question by making their internal details rather unimportant (which to the customer they are anyway) and by improving the customer-design-to-semiconductor path in terms of time, flexibility and, very importantly, the production processes themselves. Again, the holy grail isn’t in gate arrays vs. cells but in making single-wafer designs and sales (~100s of chips) economically feasible and attractive.

Whoever first breaks the barrier of, say, 10,000 custom chips at $100 per piece will have a strategic position in the global market and billions upon billions in revenue.

Mr. Pragma May 31, 2014 1:31 PM

@Moderator

With all due respect (and I didn’t read the deleted comment):

You are right that this blog isn’t the place for certain comments. But from what I see, it’s also not the place for very personal remarks, well intended or not, like your advice to Figureitout.

And if I may make a wish: wouldn’t it be about time to rein in the seemingly gazillion speculation/conspiracy-theory comments on TrueCrypt? 99% of those add very little to this blog and our discussions and basically spam the blog.

Thanks.

Mike the goat (horn equipped) May 31, 2014 1:38 PM

Nick: an enclosure isn’t such a bad idea. It would be easy enough to implement, but it doesn’t fit the bill for something the average Joe can use in a home/SME environment for little outlay.

Looking at some of the Adaptec and LSI PCIe RAID controllers, they appear (almost) perfect for a cheap and easy transparent crypto-controller project. If the processors are fast enough to do on-the-fly parity for multiple-device arrays, then surely they would be sufficient to do the crypto, provided we choose a fast algorithm. Given that the option ROM is executed at boot, we could easily have the password prompt to unlock the disk(s) displayed there. Other functions would be accessed by pressing ESC (or whatever hotkey you like) during this period, where a simple textual menu gives options to secure erase, quick erase (delete the key), encrypt/decrypt disks, etc.
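
As a minimal sketch of the key handling such a controller could use (assuming Python’s hashlib plus the PyCA “cryptography” package; the function names are invented for illustration, not from any real controller firmware): the disk is encrypted under a random data key, and only a small password-wrapped copy of that key is stored, so “quick erase” reduces to destroying the wrapped blob.

    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def kek_from_password(password: bytes, salt: bytes) -> bytes:
        # Password -> key-encryption key; the iteration count is illustrative.
        return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

    def wrap_data_key(kek: bytes, data_key: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(kek).encrypt(nonce, data_key, b"fde-v1")

    def unwrap_data_key(kek: bytes, blob: bytes) -> bytes:
        return AESGCM(kek).decrypt(blob[:12], blob[12:], b"fde-v1")

    salt = os.urandom(16)
    data_key = AESGCM.generate_key(bit_length=256)  # what the disk sectors use
    blob = wrap_data_key(kek_from_password(b"hunter2", salt), data_key)

    # Boot-time unlock at the option-ROM prompt:
    assert unwrap_data_key(kek_from_password(b"hunter2", salt), blob) == data_key
    # "Quick erase" = overwrite `blob`; the data key is then unrecoverable.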

The only question that remains is just how hackable these devices are. Having not attempted it, I don’t know, but they fit the bill almost perfectly for the ideals of this project: to provide home/SME-grade transparent FDE that is OS-agnostic.

Figureitout May 31, 2014 1:40 PM

Moderator
–Never again (I’m serious now). As long as the message got to its intended recipient, that is all that matters. And aww, c’mon… now you’re making it sound like I just made an extended sexually erotic posting! :p

On the topic of therapy, which I’m glad you mentioned, venting and getting the support of some of the great commenters on here has done my mental well-being way more good than any therapy session. I can’t thank them all enough for letting me keep my grip. Also, security people will actually believe my story and won’t be so green and incredulous about the hell-scenario I’ve laid out. A therapist will roll their eyes and prescribe me more meds I don’t need (and it’s not free; it’s a paid venting session, and they don’t do anything to really help the situation). And of course (“But wait, there’s more!”), no, my diary would be on the order of Anne Frank’s or some such; I can’t escape it, and that’s what will mentally tear apart future victims.

Moderator May 31, 2014 1:52 PM

Mr. Pragma,

Much as I might roll my eyes at some comments, “what the hell is the story with Truecrypt” remains a valid topic.

As for the other issue, I’ve said what I intend to say. I would appreciate if others did not bring the subject up again, since that would likely make it very difficult for Figureitout not to respond.

Gerard van Vooren May 31, 2014 2:12 PM

@ Figureitout

Please forgive me if I say something that hurts, but some of what you have said along the way is really scary. Sometimes I just don’t know what to think. If this is how you express yourself, please make it clear to us whether it is a thought, a reflection, or something else. I agree with what the moderator said. This blog is about security, primarily related to IT topics. OK, sometimes the discussions are heated, but they are only about technical issues. Please don’t make it, as you said, an Anne Frank diary. This is the wrong place for that.

Gerard van Vooren May 31, 2014 2:29 PM

@ everyone

I changed my nickname. “yesme” was a bad joke. And yes, this is not gonna change anymore.

Moderator May 31, 2014 3:10 PM

This is the wrong place for that.

He understands that and has agreed, Gerard. So please don’t try to draw him out on the subject again. I know you mean well, but that’s the opposite of helpful.

Mr. Pragma May 31, 2014 4:51 PM

Petrobras

That’s an interesting idea and a potentially attractive element of a good solution.
But it’s not a solution by itself, for multiple reasons.
For one, “complex” languages don’t exist for the fun of it but out of necessity.
Also, betting on chips can’t be more than one element, unless you happen to be a dictator with multiple levels of (mutually observing and controlling) highly educated security services, as well as a couple of billion dollars to run your very own, excessively well secured and controlled, fab.

Considering the complexity of both chips and their production, and at the same time the plentiful possibilities to blackmail, lure, coerce, or otherwise induce employees to implement backdoors or other mechanisms, it seems quite unrealistic to assume that the secret services do not find ways to taint the whole matter.

In the end, I’m afraid, any serious security solution must assume that processors and other highly integrated semiconductors are enemy territory (similar to the internet), and treat them as such.

That’s also why Clive (and others, including myself) have suggested voting solutions, comparative solutions, and the like.
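
A toy sketch of the voting idea (illustrative only; in practice the “implementations” would be chips from different vendors and fabs rather than Python functions): run the same computation on several independently sourced parts and accept only a majority answer, so a single subverted chip cannot silently alter a result.

    from collections import Counter

    def majority_vote(implementations, *args):
        # Run every implementation and accept the answer only if a strict
        # majority of them agree on it.
        results = [impl(*args) for impl in implementations]
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority -- treat as compromised")
        return value

    honest1 = lambda a, b: a + b
    honest2 = lambda a, b: a + b
    subverted = lambda a, b: a + b + (1 if a == 0x41 else 0)  # rare, targeted fault

    print(majority_vote([honest1, honest2, subverted], 2, 3))     # 5
    print(majority_vote([honest1, honest2, subverted], 0x41, 1))  # still 66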

name.withheld.for.obvious.reasons May 31, 2014 6:25 PM

@ Clive, Nick P, Pragma

I guess I will be the lone defender here: structured ASICs are a nice middle ground when the level of complexity is not high or you’re producing an “ALL IN WONDER” component. I’m really not a big fan of big netlists or big fabs. The substrate should match the task at hand, or you’re wasting time, resources, power, and my patience. And I did not suggest them as my first choice, but as my last.

The FPGA fan clubs have really grown in size but not in courage: commit to a fabric, damn it. Everyone wants to fix the problem later. And I ask, “What problem?” The response: “I don’t know; that’s why I want to be able to fix it!”

It suggests to me that what is missing is… “a solution”. At least in my old age I haven’t grown lazy and bitter. Cynical? Yes! Or maybe that’s just seniority.

name.withheld.for.obvious.reasons May 31, 2014 6:43 PM

Some of my best friends are analog electronics engineers (conceptual, design, and applied), and I don’t hold it against them; they can be some of the nicest people. It’s their frugal, conservative, minimalist approach to systems design, a laborious process covering everything from “skin effect” to tidal influences and orbital oscillations, that makes you wonder whether it is all really worth it. So far the answer has been yes.

Astronomy alone has been one of the biggest ANALOG beneficiaries (along with some navies) of some of the most significant advances in large DIGITAL systems. Without analog: ALL YOUR BITS ARE BELONG TO ME (and some of your browser cookies; I’m hungry).

Mr. Pragma May 31, 2014 8:39 PM

name.withheld.for.obvious.reasons

Well, of course, there are cases where a structured ASIC is just the right (comfortable?) solution.

Generally, however, the building-block abstraction is better handled upstairs in a software layer. More generally still, I think that one should, at least ideally, be able to remain quite ignorant of the “how is it implemented on the chip” question. (Yes, I know that’s not always possible and sometimes breaks down over simple issues like ASIC/FPGA providers, but one should, as a general rule, try to do it the right way.)

Nick P May 31, 2014 11:19 PM

re Structured ASIC’s

To recap, based on what each source said. The source that is pro-S-ASIC gives these benefits:

  1. Better performance and power consumption than FPGAs.
  2. Most layers forming the device are pre-fabricated, reducing design time. One offering required customers to do only one layer.
  3. The vendor addresses most “signal integrity, power grid, and clock-tree distribution issues.” This reduces the necessary expertise and certain tool costs for the customer.
  4. The use of repeated, regular structures makes it easier to put a design on a newer, immature process node.
  5. It is feasible at production volumes as small as 10,000 units a year. ASICs tend to take many times more volume to justify the cost.

  6. Altera’s S-ASIC conversion process (HardCopy) can convert an Altera FPGA-proven design to a pin- and package-compatible S-ASIC. It must work, as it accounts for 5% of their revenue.

  7. The focus on systems, networks, and platforms on a chip lends itself nicely to S-ASICs’ modularity and to easily redoing designs.

The source that’s anti-S-ASIC lists these criticisms:

  1. His starting point is that FPGAs overtook gate-array ASICs, which he implies are better than S-ASICs.
  2. S-ASIC is a scheme by ASIC vendors to pull customers from FPGA vendors.

  3. The S-ASIC architecture is inefficient and unnecessary.

  4. He implies there’s no trouble moving from FPGAs to ASICs, as synthesis tools take care of everything in minutes to hours and the underlying hardware doesn’t matter.

  5. An argument from popularity: few FPGA customers turn their designs into S-ASICs.

  6. He claims that S-ASICs are intended by FPGA vendors to be a safety net for customers in the event that FPGAs don’t work out. (Contradicts 2?)

  7. He reiterates that some vendors don’t do it anymore, naming LSI Logic, to argue that the industry is against it.

  8. He says the economical solution (and alternative) is to bring back the gate array, with lower NRE and turnaround time.

Another article makes these points:

  1. Large ASIC providers had to invest plenty of resources in ensuring these worked with their backends and design flows. They didn’t.
  2. Certain S-ASIC providers claimed the FPGA-to-S-ASIC design flow would be seamless. It wasn’t: the designs still had to be modified to be more ASIC-like.

  3. Certain S-ASIC providers apparently weren’t experienced enough to deliver working designs consistently. Customers don’t like getting burned twice.

So, I’m looking at the claims. There are real benefits. However, one must pick the vendor carefully and realize the transition might not be seamless. I don’t buy the market-preference criticism, as the market often prefers inferior solutions for various reasons. The argument that synthesis tools eliminate the problems sounds bogus given how many projects must be modified, tested, and sometimes fail to work on ASIC; obviously, the tools haven’t eliminated the problem. One critic essentially supports the S-ASIC approach by arguing for a similar approach (gate arrays), and the other rejects the current offerings rather than the idea.

If anything, the problem right now is that vendors overpromise and underdeliver. The tech is also only a few years old. Last I checked, the competition didn’t become No. 1 overnight either; there were kinks to work out, lessons to learn, etc. So S-ASIC is promising, might be a big thing in the future, and should invite caution from potential customers in the present.

@ name.withheld

Your analog-loving friends might find Triad Semiconductor’s mixed-signal S-ASICs interesting.

Wael May 31, 2014 11:34 PM

@name.withheld.for.obvious.reasons,

Some of my best friends are analog electronics engineers

Analog is still near and dear to my heart. Too bad I don’t have the time to work on analog projects. I still remember the schematic of an RF transmitter I built using a 2SA49 PNP germanium transistor, and another using a 2SC22 NPN silicon one. Recently I worked alongside some RF engineers, and when I looked at what they do these days, it didn’t really come across as analog design. They took a reference design and “matched impedances” or “shielded something”: still a lot of hard work, and it requires a lot of knowledge, but the fun is missing. Real analog work is almost limited to chip manufacturers. And I agree, analog is fundamental to digital, albeit “behind the scenes”…

Nick P May 31, 2014 11:57 PM

@ Mike the goat

Making it a cheap chip or reusing a controller is possible. However, a RAID controller’s parity performance might not tell us anything, as parity checks are simple XORing most of the time. That’s what… one or several sets of bytes per cycle? Even a slow processor can XOR many MB/s. Crypto algorithms are usually measured in cycles per byte: AES-256 in counter mode on a P4 is around 27.5 cycles per byte. That means the XOR parity operation is at least 27 times faster than a crypto primitive simpler than the XTS mode often used for FDE. So we’d have to test it to be sure it’s good at both, or just one.
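
A rough way to check that gap on a given machine (a sketch assuming the PyCA “cryptography” package, recent versions of which need no backend argument; note that on modern AES-NI CPUs, unlike the P4 figure above, AES can come surprisingly close to a plain XOR pass, so only measurement settles it):

    import os, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    MB = 1024 * 1024
    a, b = os.urandom(64 * MB), os.urandom(64 * MB)

    # RAID-style parity: one XOR pass over the buffers (big-int XOR keeps
    # the work in C rather than in a slow Python byte loop).
    t0 = time.perf_counter()
    parity = int.from_bytes(a, "little") ^ int.from_bytes(b, "little")
    t_xor = time.perf_counter() - t0

    # AES-256 in CTR mode over the same amount of data.
    enc = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
    t0 = time.perf_counter()
    ct = enc.update(a)
    t_aes = time.perf_counter() - t0

    print(f"XOR parity : {64 / t_xor:8.0f} MB/s")
    print(f"AES-256-CTR: {64 / t_aes:8.0f} MB/s")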

The other issue is that you’re speculating on hackability. Assuming you make it work, this has two risk areas. One is that you succeed, start piggybacking on their work, and then they do something that kills your effort off. The second is that using a chip you have no internal data on leads to attack vectors on it. For these reasons, using your own chip that you’ve thoroughly understood and developed for is superior, especially if you’re aiming for security and they weren’t.

Now, how to do that while things are still cheap? Well, one can turn to the many SOC’s that are on the market. You just need one that’s cheap in volume, integrates with a desktop, can support a disk, can support encryption, can remember configuration, and has internal storage for keys. So, I went shopping in the marketplace to see what’s already been built.

Marvell’s SATA controller jumps out due to two features:
http://www.marvell.com/storage/system-solutions/sata-controllers/

Intel’s Atoms are doing decently in crypto tests:
http://www.servethehome.com/Server-detail/intel-atom-c2750-8-core-avoton-rangeley-benchmarks-fast-power/

The first Freescale chip I found had accelerated XORs and crypto:
http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MPC8548E

Any of these should be modifiable to do what you want to do. These companies are also known for affordable chips. Reuse keeps development costs down. Using a specific, documented chip has the benefit that you know what’s in it and control how it’s used. The last benefit is that it (or one like it) will probably remain available to supply your customers for some time.

@ Wael

Remember that RobertT used analog components in his designs partly to throw off those trying to understand them.

Petrobras June 1, 2014 1:11 AM

@Mr. Pragma: “Also betting on chips can’t be more than one element unless you happen to be a dictator with multiple levels of (mutually observing and controlling) highly educated security services as well as a couple of billion $ to run your very own — and excessively well secured and controlled — fab.”

For me, this is no longer true since the arrival of 3D printing of conductive material, which makes it possible to build transistor-based circuits at home for $4,000, as I discussed above: https://www.schneier.com/blog/archives/2014/05/friday_squid_bl_426.html#c6330219

I know this will be a slow chip (in the low MHz range), but at roughly 27 cycles per byte to encrypt (see the comment above), one can live with that.
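
Back-of-envelope numbers for that (a sketch; the clock rates are made up, and the 27.5 cycles/byte figure is the P4 AES number quoted above):

    # bytes/second = clock rate / cycles per byte
    CYCLES_PER_BYTE = 27.5

    for clock_hz in (1e6, 4e6, 20e6):   # "low MHz range"
        rate = clock_hz / CYCLES_PER_BYTE
        print(f"{clock_hz / 1e6:4.0f} MHz -> {rate / 1024:6.1f} KiB/s")

    # e.g. 1 MHz -> ~35.5 KiB/s: slow, but usable for keys, mail, or text.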

Mr. Pragma June 1, 2014 8:57 AM

Petrobras

You are right, I didn’t think of that (“micro-printing”).

I still think the no-compiler approach isn’t the best one. Building your own “chip” and programming it in assembler or the like probably creates more problems than it solves.
But hey, good luck! 😉

Skeptical June 1, 2014 8:07 PM

@Clive: I’m taking your points slightly out of order, as I want to address the most important ones first.

As for playing the long game US politicos and business execs don’t generaly think more than a few months into the future and have not done so for the better part of a century, where as the Chinese historicaly think in terms of life times.

A few paragraphs later you expand on this point:

The C level execs in US corporations have a life expectancy based on the next two quarters figures, the PRC know this and more importantly they understand it and know how to exploit it. The root cause of this is “share price” share holders generaly have little or no regard for the security of a companies IP just any effect it has in the very short term on the share price.

This is a common misperception of US businesses, which I think reached its zenith around the late 80s/early 90s, when one would read frequently of the disciplined and long-term focus of the Japanese and the feckless, near-sighted greed of the American corporation.

In fact a company’s long term prospects are vital to its valuation (unless it’s winding down). That is why, for example, Amazon can fetch a share price that is 469 times what those shares earn today: investors are focused on what it can do in the long term. That is why Silicon Valley attracts so much venture capital. That is why Tesla has a market capitalization of 25 billion dollars even while it is losing money (currently).

Do you think large, institutional shareholders of Amazon, or Tesla, or IBM, or Google, or Intel, or Apple, care about IP security? Of course they do. It’s a more difficult thing for them to assess, but it absolutely matters. And here’s another thing: everyone who sits on the boards of such companies will know very well how much this can matter, and taken together they are a very influential group of people.

Now, of course, management of a company can be given incentive, through poorly structured compensation agreements, to maintain the perception of a long term strategy while secretly undercutting it in order to magnify certain key metrics that investors will be watching. “Yes, we cut expenses last quarter by 20% even while revenue growth accelerated” such an executive might crow during an earnings call, all the while not mentioning that these numbers were achieved by shifting production lines to high-risk factories and that if product quality falls, so too will the company.

And that does happen, in the US and everywhere, particularly if there is a weak board of directors. But it’s not the normal state of affairs in American business.

Incidentally, you tend to find that conduct to be more normal as you move to environments where there is little transparency and high corruption. The PRC scores quite high on both measures.

Which brings me to my last point. There are a fair number of stereotypes we all learn about other cultures, and these are very frequently the equivalent of “alternative medical remedies” that claim to be sound simply because they have been repeated from one generation to the next. That of the undisciplined, half-greedy and half-naive Americans is such a “folk remedy,” prescribed as a wondrous tonic for easily understanding an extremely complex and diverse society. Events have often turned out poorly for those who substituted such stereotypes for objective analysis.

Firstly it’s not a “response from the PRC” thats just the press spin on it. The PRC policy predates the US legal sillyness.

Which PRC policy? Here are quotes from two of the articles you cited in your comment (and you seemed to be agreeing with them):

From the Quartz article:

The new rules come after Beijing forbid Chinese government offices from using Windows 8 last week and said they would vet imported IT equipment, and the US Department of Justice indicted five Chinese army personnel for stealing corporate secrets from US companies. Also last week, China’s Ministry of Finance proposed that foreign accounting firms be banned from working on mainland Chinese accounts without a local partner, a move that could be as much about protecting China’s domestic industry as it is spying concerns.

From the Reuters article:

China is pressuring its banks to remove high-end servers made by IBM IBM.N and replace them with a local brand, Bloomberg reported on Tuesday, as tensions rise between Beijing and Washington over allegations of cyber espionage.

Why do you think these policies predate “US legal silliness”? This isn’t something I’ve looked into.

If I was a betting person –and I’m not unless I know the result in advance– I would not be backing the US on this minor side game, because the US had lost it long prior to the Ed Snowden revelations anyway. It’s been known for over five years that China and other major foreign powers were all over other nations IP, and atleast one (France) openly admited it over a quater of a century ago.

IP has much greater importance today than it did in 1989, particularly to the US economy.

Let me just note a few things.

This 100-page report, which is well sourced, estimates losses to US businesses due to PRC disregard for IP rights (this estimate includes much more than the effects of commercial espionage alone) to have been somewhere in the neighborhood of 320 billion US dollars in 2012. Obviously that is a tough number to estimate, but it illustrates the point.

The US economy depends heavily on IP for growth, and all signs point to IP becoming ever more important in the global economy.

PRC direct investment in the US actually exceeded US direct investment in PRC for the first time last year.

The PRC remains greatly concerned that social stability is tied to high economic growth. Yet the PRC growth rate seems to be slowing, environmental damage has reached intolerable levels in many areas across China (addressing them will impose costs), and the nation faces quite serious issues in its financial and real-estate markets.

A new generation has recently taken leadership, and there is room for movement now where there was not before.

In short, you have increased dependence by the PRC on the US, numerous challenges faced by the PRC which would be exacerbated by steps up the economic/legal escalation ladder laid out before, and political actors in the PRC who are less tied to existing policies.

Now, there are also factors weighing against a good resolution here. But given the importance of IP, the US has no choice in the matter, and it has enormous leverage that it can use, should it choose to do so.

Further tellingly is what is currently going on in the South China Sea, Obama goes over to talk up the US and China in response makes a very very provocotive move that is very likely to cause blood shed if not military conflict in the reigion. Presumably because they regard Obama and the US as weak and unlikely to do anything other than vacillate.

If by vacillate you mean continue to increase military ties to nearly every other significant East Asian country, construct new bases in the region, and continue development of strategies and weapons aimed squarely at maintaining total deterrence along the spectrum of conflict in East Asia, then sure. The US has 80,000+ military personnel in that area, unmatched weapons, and extensive experience in warfare.

The PRC is far too smart to think about Obama as weak in this area, much less to base any truly provocative actions on that judgment.

Obviously, both nations desire to avoid serious military conflict in the region, and they’ll work together to do so.

China’s actions are aimed at its neighbors, not the US. The rivalry and animosity in that region between nations runs very deep.

Mike the goat (horn equipped) June 2, 2014 7:34 AM

Nick: yeah, you’re probably right. I was installing an Adaptec 5805 in a client’s server a few weeks back and was won over by the specs on the box: a dual-core 1.2 GHz SoC and 512 MB of RAM. If it were hackable, and the very well reasoned issues you mentioned could be mitigated, I imagine you would have a reasonable platform to base such a device on.

Nick P June 2, 2014 12:02 PM

@ Mike the Goat

The specs are impressive. They probably have accelerators like the ones I mentioned that they don’t list. I’m all for trying to hack on such things in spare time, just not for a production system, unless the device was designed for such flexibility. Examples include the PS3’s “Other OS” option, SoCs that use vanilla hardware with customized firmware/software, etc. That lessens some of the risk.
