Comments

Anura June 20, 2014 4:40 PM

There are three ways to know if something’s potentially a bomb:

1) It has clock hands
2) In cake form, it will usually have an unusually large and sparkling candle
3) It will have colorful LED lights in the shape of Mooninites

Nick P June 20, 2014 5:00 PM

Crafting a Usable Microkernel, Processor, and I/O System
with Strict and Provable Information Flow Security (2011), by Tiwari et al.

http://users.ece.utexas.edu/~tiwari/pubs/ISCA-11-starlogic.pdf

A very interesting piece of work combining many good ideas. This would be the “clean slate” version of the separation kernel paradigm. That paradigm, which I pushed hard here in the past, was about minimizing the TCB while decomposing a system into partitions with strict enforcement of CPU time, memory allocation, I/O access, and message passing. Aside from integrating that, this work tries to do for hardware what microkernels and the TCB concept do for software. Excellent and worthy of further research.

Note: The team behind this also did the work on the PHANTOM Oblivious Computation scheme and the more recent Sapper hardware security policy enforcement scheme.

Mike the goat June 20, 2014 6:23 PM

Nick: didn’t have much to comment on your link but just wanted to thank you for taking the time to post all this interesting material. I’d like to think I am pretty well versed in the current sec environment but I always learn something from your posts.

Benni June 20, 2014 10:18 PM

How society can abuse the BND’s surveillance system:

The BND first searches the data it gets from the world’s largest internet exchange, DE-CIX, against a word list.

Now, in Germany, everything works by rules and by the book.

For this reason, the word list the BND searches for was secretly defined in a monthly meeting of the so-called G10 commission that oversees the BND.

From this article

http://www.spiegel.de/spiegel/vorab/anwalt-klagt-gegen-durchleuchtung-von-e-mails-durch-den-bnd-a-960203.html

we know that the words “bomb” and “atom” are in the list.

From this article,

http://www.spiegel.de/politik/deutschland/elektronischer-staubsauger-wie-der-bnd-lauscht-a-31411.html

we know that the wordlist “contains names of tanks and rocket types, chemical weapons, but also words that are used daily, for example “snow” since it is used as an abbreviation for cocaine.”

From this slide, http://www.spiegel.de/media/media-34037.pdf we know that emails which happen to contain words from the given list are filtered out by the BND and read personally by agents.

So we can give our BND agents a bit of work.

1) Create a german email account and another one in some other country.

2) Let the accounts be operated by machines that write each other funny mails containing words like bomb, snow, VX, sarin, anthrax; also throw in some tank names and rocket types, and so on…

Soon our BND agents will have a bit more work to do…..

And they cannot do anything about it. The word list they have to search for was, in German style, ordered at a higher level…

Of course, the addresses from which the bots send their mails should change regularly, so the BND agents will have some difficulty putting them in their spam filters…
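A minimal sketch of the bot in step 2 (purely illustrative: the word list and mail templates here are invented stand-ins, since the real G10 selectors are secret apart from the few words reported above):

```python
import random

# Hypothetical stand-ins for selector words: the reported "bomb", "atom",
# "snow", plus made-up examples of weapon and vehicle names. The real
# BND word list is secret.
KEYWORDS = ["bomb", "atom", "snow", "VX", "sarin", "anthrax",
            "Leopard 2", "Scud"]

TEMPLATES = [
    "Grandma says the {w} arrived safely last Tuesday.",
    "Weather report: plenty of {w} expected this weekend.",
    "Did you see that documentary about the {w}? Hilarious.",
]

def make_mail(rng=None):
    """Return (trigger_word, innocuous-looking body) for one bot mail."""
    rng = rng or random.Random()
    word = rng.choice(KEYWORDS)
    return word, rng.choice(TEMPLATES).format(w=word)

word, body = make_mail(random.Random(1))
print(word in body)  # True: every generated mail contains a selector hit
```

Feeding the output to two throwaway accounts via any SMTP library, and rotating the sending addresses as described above, would complete the scheme.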

nobody June 20, 2014 10:56 PM

@Benni – the advantage for the BND is that “islamic terrorist intending to attack the president on tuesday in the garden” is a single word in German. So a word list search is very effective.

Benni June 20, 2014 11:14 PM

Well, BND seems to have

https://netzpolitik.org/2012/geheimdienste-haben-2010-37-292-862-e-mails-uberpruft/

2000 words related to terrorism
300 words related to illegal border crossing
13,000 words related to arms and weapons trade

In 2010, the BND’s monitoring of DE-CIX with that word list was partially successful: they found 213 communications that were of interest to the service…

This spectacular success of monitoring most of the world’s internet communication at the world’s largest internet node de-cix is perhaps the reason why suddenly, “project wharpdrive” had serious funding problems due to “fiscal constraints”, about which the BND was so ashamed it did not even tell its partners at NSA and GCHQ:

http://www.spiegel.de/media/media-34117.pdf

Buck June 20, 2014 11:37 PM

Here we go again!
Ready for another round?
I’ll be back next week…

Rest assured, those IP addresses have surely been repurposed by now (or more probably, had a less clandestine purpose all along)… Putting them into some sort of strange (semi-?) permanent ‘NSA blacklist’ would simply be a waste of valuable IPv4 space! 😉

Now then – on to figuring out how to keep corporate interests, criminal groups, and government agencies from wasting so much of humanity’s greatest invention to date!

Wael June 21, 2014 12:15 AM

@Benni,
So why did they mask all the names but keep Hr. Klaus-Fritsche unmasked on page 4? Deliberate, or oversight?

Thomas_H June 21, 2014 5:32 AM

“More than 400 large U.S. military drones have crashed in major accidents around the world since 2001, a record of calamity that exposes the potential dangers of throwing open American skies to drone traffic, according to a year-long Washington Post investigation.”

Linkie

Together with the Google balloon that triggered an emergency response for a downed aircraft in New Zealand, this should be enough to raise the question of whether filling the skies with automated vehicles is really a good idea…

Clive Robinson June 21, 2014 6:39 AM

@ Thomas_H,

The problem with UAVs of all types is not the available AI or for that matter the available airframe, the problem is cost of operation / efficiency as a selling point…

The main selling point on all UAVs I’ve seen compromises safety for extended range, hours aloft, stealth/LPD, etc.

It appears that if you take a human out of the craft, safety rules that would be considered normal become expenses that cut into profit. Thus engineering tolerances are cut to less than you might expect on a push bike. The AI or other systems often give up weight, reliability, etc. compared with a normal light-aircraft autopilot. If fitted, anti-collision sensors are of less use than the Mark 1 eyeball most pilots have, etc. etc.

Part of this engineering cost cutting is the idea that the only thing that really matters if a UAV drops out of the sky is ensuring confidential-or-above equipment is destroyed to stop it falling into enemy hands. Thus these UAVs are the equivalent of WWII “doodle bugs” or mortars to those unfortunate enough to be in the drop cone, which in many cases would be places where people congregate to buy food or celebrate family events, or groups of people travelling in convoys for their own protection against the unlawful activities of others.

The problem then switches from an “alleged battlefield” issue to a civilian issue, where the same under-toleranced airframes and avionics get used in civilian areas by LEOs and other governmental TLAs, with barely a nod to the civilian licensing agencies, under the “experimental vehicle” or equivalent get-out that is frequently used for home-built RC aircraft, parasails, hang gliders, etc.

The trouble is that if you try to tighten up the rules you will get howls of protest from extreme-sports enthusiasts and RC modellers, backed up by the largesse of military-industry lobbying, along with LEOs screaming about how it’s stopping them catching terrorists, people smugglers, drug smugglers, etc. etc., or whatever else will frighten the politicos into letting them have their gung-ho military toys so they can be ready, in their limited macho minds, to fight off / be the next Rambo, whilst in the meantime feeding their baser instincts by peeping into windows and other private areas to catch flashes of flesh, especially if young/nubile/firm/in action etc., just as the likes of the NSA have been doing with the Internet.

Benni June 21, 2014 10:17 AM

@Wael: “So why did they mask all the names but keep Hr. Klaus-Fritsche unmasked on page 4? Deliberate, or oversight?”

Well, that’s in the text: he is not some low-ranking agent, but a state secretary, and thereby a public figure, whose privacy is not as strictly protected by German law as that of a low-ranking clerk who just does his job. State secretaries report directly to the ministers.

Nick P June 21, 2014 10:59 AM

@ Mike the Goat

Why thank ya! That certainly is the goal. Of course, seeing the information had no traction prior to Snowden and little after is a bit discouraging. Not enough to stop me, though. My archive is entirely too large to index but I’m still thinking about creating an index/list of just the ground-up safe and secure machine designs. Might be worth the time to keep them in one place.

Chris June 21, 2014 11:39 AM

Hi, just read all of the PDFs, and it took a while to do that.

A couple of things that struck me were:
-Use of spam filters to not clog up the system
-P2P traffic not collected
-OpenVPN secure or not? (Hmm)
-Hotspot Shield is a honeypot
-Oil companies are kept under heavy surveillance
(Why, if not economic spying? Certainly not terrorist-related)

Well, I don’t think they can win this war with electronics, without using HUMINT; personally I think it’s a waste of money to do it this way.
It certainly won’t stop the CT in my opinion, and I think they will have some tough times ahead from now on.

Andrew Wallace June 21, 2014 12:15 PM

If you build it they will come.

My suggestion to Schneier is to clean up the blog in a new approach.

The blog should be a meeting of expert minds. At the moment it is very anarchistic.

I don’t think Bruce truly believes in that, or that it represents him.

Maybe it was mission creep and the blog wasn’t intended to be the way it is today.

I don’t understand what kind of community Schneier is building here.

The new CIA Twitter and Facebook pages aren’t much better and already they are finding the comments hard to deal with.

anonymous June 21, 2014 1:01 PM

2) Let the accounts be operated by machines that write each other
funny mails containing words like bomb, snow, VX, sarin, anthrax;
also throw in some tank names and rocket types, and so on…

You must be pretty young, Benni. We did that back in the late nineties. Echelon, you know…
It didn’t work then; it won’t work now.

Benni June 21, 2014 1:21 PM

By the way, there is some chance that you can get yourself onto the BND blacklist, so that your email address appears in the list they are forbidden to search.

BND has an email:

zentrale@bundesnachrichtendienst.de

These bloggers have already tried writing there in order to opt out.

https://netzpolitik.org/2014/deutschland-akte-wir-praesentieren-die-auslaendischen-domains-die-nicht-ueberwacht-werden/

No answer yet…

@Chris:
“-P2P traffic not collected”
“-OpenVPN secure or not”

A while ago, there was a Spiegel article on a question that a member of parliament sent to the German government, related to these things. Ironically, the question came from the same party that ruled the former GDR with its Stasi.

http://www.spiegel.de/netzwelt/netzpolitik/regierung-haelt-details-der-e-mail-ueberwachung-geheim-a-834897.html

The answer from the government is here

http://www.andrej-hunko.de/start/download/doc_download/225-strategische-fernmeldeaufklaerung-durch-geheimdienste-des-bundes

It says that the BND forces all German internet providers and internet exchanges (among them the world’s largest, DE-CIX) to make a full take of all their traffic and hand it immediately to the secret service. The BND then decides how it analyzes this data.

So the BND certainly collects P2P traffic. It just “says” that it discards this in its later analysis. But how “credible” is it when the BND says its analysts would discard something… They have the content anyway.

As to the question whether “BND can decrypt and analyze encrypted communication (for example with ssh or pgp)”, the answer is:

“Yes. The technology used by BND is capable of this. Depending on the encryption method and strength.”

This Rampart-A slide says they collect VPN traffic:

https://s3.amazonaws.com/s3.documentcloud.org/documents/1200866/foreignpartneraccessbudgetfy2013-redacted.pdf

At that point one should expect the worst, I think.

Especially given that a leading OpenSSL developer is just 20 minutes by suburban train from BND’s headquarters in Pullach…

Benni June 21, 2014 1:30 PM

Seems the BND also has phone numbers and a postal address. If you want to opt out your email address, perhaps this is worth a try.

http://www.bnd.bund.de/EN/_Home/Service_Box/Contact/Contact_node.html

Office Berlin
Gardeschützenweg 71-101
12203 Berlin
Phone (0 30) 4 14 64 57

Headquarter Pullach
Heilmannstrasse 30
82049 Pullach
Phone (0 89) 7 93 15 67

email of the central:
zentrale@bundesnachrichtendienst.de

Press enquiries
Phone: (030) 20 45 36 30
Fax: (0 30) 20 45 36 31
pressestelle@bundesnachrichtendienst.de

public relations and internet:
Phone: (030) 20 45 40 07
information@bundesnachrichtendienst.de

Moderator June 21, 2014 6:29 PM

As you can see, there’s now a question on the comment form that you’ll have to answer in order to comment. This is an experiment to try to improve spam control — if it doesn’t work well enough to justify the annoyance, it will go away. (At least it’s not asking you to transcribe illegible strings of letters and numbers.)

Chris Abbott June 21, 2014 8:48 PM

@Bob S.

Well, for now, given it’s in development, nobody should ever use it even to save their life. I just don’t like MS’s record on security anyway. And as with any Microsoft product, never use it if it’s just been released.

Chris Abbott June 21, 2014 8:53 PM

But on my first link, it seems this idea of forking OpenSSL is gaining some steam. It could be dangerous at first, but you could start with the most commonly used features that are known to be good, and of course eliminate the antiquated garbage that’s still in it (like support for DES, really?).

65535 June 21, 2014 10:43 PM

@ Nick P
Good stuff on a secure OS.

@ Benni
I have been following you in both the previous ‘NSA Tapping the Internet Backbone’ post and this one. Your links and observations on the critical role of the BND/NSA/DE-CIX exchange are eye-opening.

Your idea to block known BND/NSA IP’s is interesting.

I would go even further and block all military IPs. If you do a cursory internet search you can also find all military IPs and their ranges.

That might not be for everyone (such as active mil people and mil contractors) but for reporters and so on it could help.

True, all IPs are essentially logical in the final analysis, given NAT [plus transparent firewalls and port mirroring], but blocking the known spy IPs is a start.

Your observation that encrypted traffic is copied, stored and possibly decrypted is interesting.

‘BND certainly collects P2p traffic. It just “says” that it does discard this in its later analysis. But how “credible” is it when the BND says its analysts would discard something… They have the content anyway. As to the question whether “BND can decrypt and analyze encrypted communication (for example with ssh or pgp)”, the answer is: “Yes. The technology used by BND is capable of this. Depending on the encryption method and strength.”’ -Benni

Here is what Bruce S. said on the subject [excluding, keyloggers, side channel attacks and other NSA hacking]:

“…Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical… There’s more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we’re trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less… while the NSA certainly has symmetric cryptanalysis capabilities… converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful… The defense is easy… stick with symmetric cryptography based on shared secrets, and use 256-bit keys.”

[Discussion of quantum computers and NSA’s privileged position on the backbone]… “maybe some of it [quantum computing] even practical. Still, I trust the mathematics.”- Bruce S.

https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html
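The arithmetic in that quote is worth making explicit. A back-of-the-envelope check, taking the quoted ~80-bit practical brute-force ceiling at face value:

```python
# If brute force tops out somewhere under 2**80 operations, a
# cryptanalytic attack on a 128-bit cipher must shave at least
# 128 - 80 = 48 bits off the effective key length to be practical.
BRUTE_FORCE_CEILING_BITS = 80  # rough practical limit quoted above

def required_reduction(key_bits):
    """Bits an attack must remove before brute force becomes feasible."""
    return key_bits - BRUTE_FORCE_CEILING_BITS

print(required_reduction(128))  # 48
print(required_reduction(256))  # 176 -- the margin behind the
                                # "use 256-bit keys" advice
```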

[and]

Bruce’s tips on how to stay secure:

1) Hide in the network. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them [the NSA].
2) Encrypt your communications. Use TLS. Use IPsec…you’re much better protected than if you communicate in the clear.
3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn’t. If you have something really important, use an air gap…
4) Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors…
5) Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself… [possibly out of date given the OpenSSL Heartbleed problem and now the TrueCrypt shutdown]

http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance

[Need for leaks]

“We need to know how exactly how the NSA and other agencies are subverting routers, switches, the internet backbone, encryption technologies and cloud systems…” Bruce S.

http://www.theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying

Benni, do you think the 70 terabytes of clear-text data still holds true as a requirement to break encryption? That is a lot of data for one person to generate in clear text. Do you think Bruce’s tips still hold true? Are Tor users fairly safe?

Nick P June 21, 2014 10:45 PM

@ Moderator

It had to happen eventually… (sigh) The choice of a question instead of a CAPTCHA is interesting. I hate CAPTCHAs for the reason you gave. I’m fine with the new scheme so long as the questions continue to require little thought (like the current one), although it might be a pain on mobile. I’m also looking forward to seeing how effective it will be, as some spam is bots and some is at least partially human drones (e.g. Mechanical Turks). If it’s bypassed quickly, that might tell us this blog is targeted a bit more than others, as a human will need to get in the loop. The number of bypasses will also say how often a human looks at it, and whether that increases due to your security feature. Will be interesting.

@ Chris

I was thinking that Google might have done better forking LibreSSL. Gerard and others have already made valid points about how awful the OpenSSL codebase was. I’m surprised there weren’t dozens more vulnerabilities in the news this year. The OpenBSD team cleaned it up a lot. Starting from that would have improved the security profile plenty, although their style might have posed other problems. I just think forking OpenSSL wasn’t the best move. And Google even has the money to license a professional, high-quality implementation. That they forked the worst of the well-known OSS libraries instead is… not saying much for them.

And, yes, self-certification is a disaster that shows the regulators are doing lip service. They (or anyone over them) might even be paid off.

Daniel June 22, 2014 12:05 AM

@Chris Abbott
RE: Google SSL fork

It is significant because it speaks to a theme I have mentioned before: the web is turning into a system of “which cryptologist do you trust?” In some ways one could argue this is a good thing, because it promotes competition which, ideally, should improve quality. The problem is that there is no effective way for the average computer user to ascertain quality. So we are back to Bruce’s bugaboo about trust.

If there is one thing I have grown to appreciate, it is the fact that when it comes to online security one is always trusting someone somewhere. From secure OSes, to SSL, to file encryption, it is simply not possible for one person to be an expert on everything. So the question is not /if/ one is going to trust; the only question is /whom/ one is going to trust. This makes evaluating the trust decision regarding people and organizations as vital as any technical evaluation of code or build-out.

So the issue with forking OpenSSL is more than the issue of code maintenance among different repositories. It raises the more fundamental issue of how much one trusts the Google team versus the OpenSSL team versus the LibreSSL team. That’s a much more difficult evaluation than simply looking at some lines of code.

Wesley Parish June 22, 2014 1:38 AM

@Thomas_H

I read that article with a certain amount of disbelief. The UAV attack surface is much larger than I had expected. I can believe that the Iranians brought down that UAV now. I expect that flying UAVs in metropolitan state airspace will give various types a new target, with a corresponding increase in losses.

Gerard van Vooren June 22, 2014 5:09 AM

After looking at the BoringSSL code [1] for about an hour, I think this fork is older than LibreSSL, so they couldn’t have used it. Here is a short list of the changes:

  • CMake instead of GNU Autotools (much better)
  • Crypto drastically cleaned up. No Blowfish anymore (sorry, Bruce).
  • API significantly improved.
  • x86 and ARM only. They only use it for Chrome and Android, so why not.
  • Code has good comments
  • Code has good C style and uses standard typedefs etc.
  • SSL/TLS is still mostly (the crappy) OpenSSL code.
  • Refactoring is also done. This includes an “include” directory with symlinks, renaming of files, etc.
  • They use Go and C++ in the utilities (not in the library itself).

Overall I think this is very good work. It actually looks like a boring security library 😉

@Daniel

You are absolutely right. The industry made a mess out of it. Having clean code and open source helps, at least technically.

[1] https://boringssl.googlesource.com/boringssl/

Gerard van Vooren June 22, 2014 5:24 AM

Adding to my previous post.

It also makes sense to me now. While the goal of LibreSSL is to be a drop-in replacement for OpenSSL, the Google fork isn’t. So they are able to do it their way, and because they have good architects it looks this good.

Laurin June 22, 2014 6:01 AM

Assume Tor traffic is a tiny fraction of total internet traffic, and that XYZ, directly and through partner agencies abroad and telecommunication companies, intercepts a significant fraction of Tor exit nodes as well as traffic from/to an ISP or country. Would Tor then provide any anonymity against XYZ when connecting to resources on the public internet or to compromised Tor hidden services? Could the processing load needed to correlate Tor traffic be offset by the much smaller volume of data to be processed? Can any low-latency network cope with such a scenario at all?

Broadly related (apologies if already linked): On the Effectiveness of Traffic Analysis Against Anonymity Networks Using Flow Records, publicly available at https://mice.cs.columbia.edu/getTechreport.php?techreportID=1545&format=pdf&
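The core of the flow-record idea in that paper can be illustrated with a toy correlation. Everything here is synthetic (real traffic is noisier and Tor quantizes data into fixed-size cells), but it shows why a global observer with per-window byte counts at entry and exit is dangerous:

```python
# Toy flow correlation: an observer with byte-count time series at a
# Tor entry and at exits can match the client's pattern to one exit flow.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Bytes per second seen entering the network for one client...
entry = [10, 400, 30, 900, 20, 700, 15, 50]
# ...the same pattern at an exit, scaled and slightly perturbed...
exit_match = [12, 410, 28, 880, 25, 690, 18, 60]
# ...and an unrelated flow at the same exit.
exit_other = [500, 20, 450, 30, 600, 10, 480, 40]

print(round(pearson(entry, exit_match), 3))  # close to 1.0
print(round(pearson(entry, exit_other), 3))  # far lower: a different flow
```

The smaller the anonymity set (Tor being a tiny fraction of traffic), the fewer candidate flows need to be correlated, which is exactly the worry raised above.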

Incredulous June 22, 2014 9:26 AM

@ Andrew Wallace

“The blog should be a meeting of expert minds. At the moment it is very anarchistic.”

I am not going to enable JavaScript to read your page, since I really don’t trust where you are coming from. But since your list of the 40 most influential people in security seems to be all corporate middlepersons, not technical experts, I find your comment strange.

Besides that, although Bruce started out writing quite technical books, he now writes books that are much more accessible. I don’t see why you would think he would want his blog limited to experts. That is not really his audience.

It blows my mind when people think this blog should be limited to certain classes of comments. Is it really that hard to skim over what doesn’t interest you? Am I really such a genius that I can follow this blog and handle its variety without much effort?

I think it is telling that few of the people who engaged in angry political discussion here thought their opponents should be banned, but now there is this strange idea of expertise or professionalism that seeks to do just that. I think there is more to it.

Andrew Wallace June 22, 2014 10:51 AM

@Incredulous

I’m pro government, that is where I am coming from.

I know Schneier isn’t particularly anti government but a lot of his followers are.

I believe this is because some of his followers are misguided as to why he criticises the government sometimes.

It is not because he is against the government, he is making pointers to positive changes.

Some of his followers are on an anti government crusade and I don’t believe that represents Bruce or what he is trying to achieve.

D June 22, 2014 11:22 AM

I am not an expert.

I know a hell of a lot more than most of my fellow citizens because I read blogs like this one.

Occasionally I have something to contribute. I may be right. I may be wrong. But everybody learns when I speak up. Even the experts. They’re not infallible.

Pro- or anti-government views are largely irrelevant to the pursuit of pure knowledge. They become relevant only when our descent into a police state involves security that is both worthless and dangerous. The X-ray scanners, for example, that caused how many innocent deaths, and are now being deployed in the for-profit prison system.

Incredulous June 22, 2014 12:05 PM

@ Andrew Wallace

“I’m pro government, that is where I am coming from.”

Thanks for the straightforward response. No BS or obfuscation. I can’t argue with that. I think everybody has a right to be where they are.

I am not anti-government. I don’t think most critics are. Without government of some sort all sorts of oppression would occur.

What I think we are arguing about are policies and the right to know what the policies are so we can make democratic decisions. I believe that corporations and wealth lead to non-democratic governance.

However, the question of how to fix the situation at its core in a just and democratic way is a tough nut to crack. At least we can try to influence policies and provide people with information and alternatives to preserve the rights that are at the core of the constitutions of most of our governments.

Benni June 22, 2014 1:35 PM

@65535

I do not believe that the BND cracks anything by brute force. I believe what Snowden said is true: “properly implemented strong crypto systems are one of the few things that you can rely on”.

The problem is the phrase “properly implemented”. And in preventing “properly implemented” encryption, the BND and its like have had experience since the 1970s. I think they are still doing the same thing described here by Spiegel:

http://cryptome.org/jya/cryptoa2.htm

There the BND simply took over the management of an entire major crypto hardware manufacturer, and it was not content with that: the BND was even eager to swallow other crypto hardware manufacturers into its own company:

“But a big part of the shares are owned by German owners in changing constellations. Eugen Freiberger, who is the head of the managing board in 1982 and resides in Munich, owns all but 6 of the 6,000 shares of Crypto AG. Josef Bauer, who was elected into managing board in 1970, now states that he, as an authorized tax agent of the Muenchner Treuhandgesellschaft KPMG [Munich trust company], worked due to a “mandate of the Siemens AG”. When the Crypto AG could no longer escape the news headlines, an insider said, the German shareholders parted with the high-explosive share.

Some of the changing managers of Crypto AG did work for Siemens before. Rumors, saying that the German secret service BND was hiding behind this engagement, were strongly denied by Crypto AG.

But on the other hand it appeared like the German service had an suspiciously great interest in the prosperity of the Swiss company. In October 1970 a secret meeting of the BND discussed, “how the Swiss company Graettner could be guided nearer to the Crypto AG or could even be incorporated with the Crypto AG.” Additionally the service considered, how “the Swedish company Ericsson could be influenced through Siemens to terminate its own cryptographic business.”

Then, the BND was able to force the engineers to use “modified algorithms” defined by the “central office for encryption affairs”.

This was the ancestor of today’s Federal Office for Information Security (BSI). Given that history, it is no wonder that the BSI still cooperates with the NSA, according to the recent slides from Spiegel. But nowadays the BSI is largely separated from the BND, and the BSI’s role is to improve security, whereas the BND has several departments for cracking encryption. But really, one does not know.

For example this here seems to be one of the departments of BND where they do their cracking:

http://de.wikipedia.org/wiki/Bundesnachrichtendienst#Getarnte_Dienststellen_.28Deutschland.29

In Bonn, the “authority for military research” (“Amt für Militärkunde”) contains a so-called “scientific division” („Wissenschaftlicher Fachbereich“). Here, supercomputers, for example Crays, are used to develop and crack encrypted communications. The “authority for military research” attended the Cray user conference in 2006. There is administrative assistance for other authorities.
A bureau of the Federal Office for Information Security (BSI) is in the same fenced area as the “authority for military research”.

From Wikipedia, it seems this division in Bonn will move to Pullach after the BND headquarters move from there to Berlin. This suggests that there were, and are, also some code breakers working in Pullach.

In the 1970s, the “modified algorithms” that the BND-infiltrated crypto hardware company had to use produced the following effect:

“Depending on the projected usage area the manipulation on the cryptographic devices were more or less subtle, said Polzer. Some buyers only got simplified code technology according to the motto “for these customers that is sufficient, they don’t not need such a good stuff.”

In more delicate cases the specialists reached deeper into the cryptographic trick box: The machines prepared in this way enriched the encrypted text with “auxiliary informations” that allowed all who knew this addition to reconstruct the original key. The result was the same: What looked like inpenetrateable secret code to the users of the Crypto-machines, who acted in good faith, was readable with not more than a finger exercise for the informed listener.”

According to NSA,

http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all&_r=0

“Cryptanalytic capabilities are now coming online. Vast amounts of encrypted Internet data which have up till now been discarded are now exploitable”

I believe these “cryptanalytic capabilities” work in similar ways to the modifications they compelled vendors of crypto hardware to make in the 1970s.

Here is an overview of the BND from the recent Snowden files:

http://www.spiegel.de/media/media-33997.pdf

You see on p. 5 that the BND now has an entire division just for “software analysis”, “reverse engineering”, and “software development”.

The fact that they have divisions for software analysis and reverse engineering should make clear how they deal with encrypted communication.

And there we have Openssl:
http://en.wikipedia.org/wiki/OpenSSL

“Steve Marquess, a former military consultant in Maryland started the foundation for donations and consultancy contracts and garnered sponsorship from the United States Department of Homeland Security and the United States Department of Defense.”

The NSA has its headquarters in Maryland and is one of the largest employers there: http://clui.org/ludb/site/national-security-agency-nsa-headquarters

Apart from this, one can note the pattern that the German secret service BND often hides behind the word “military.” For example, its department for cracking encryption is officially called the “authority for military research.”

People who have a similar background are more likely to understand each other. Therefore, a “military consultant” from Maryland who is responsible for the funding of an encryption library will certainly find nice friends to chat with at this BND “authority for military research”.

Dual_EC, the deliberately weakened NSA algorithm, was requested in OpenSSL by an anonymous sponsor who apparently had so much trust in the OpenSSL developers that he never even tested whether the feature worked in practice, or whether bugs prevented it from working:

http://openssl.6102.n7.nabble.com/Flaw-in-Dual-EC-DRBG-no-not-that-one-td47744.html

To be clear: I do not believe that Robin Seggelmann, who introduced the Heartbleed bug, is a spy.

But if you write your own insecure malloc, you just have to wait for some useful idiot who, in a drunken night, submits code that breaks something. That was the case with Heartbleed, which was submitted on New Year’s Eve.

And that, the introduction of insecure memory allocation routines, sounds more like the BND (note that a leading OpenSSL developer works in Dachau, some 20 minutes by suburban train from Pullach, where the BND headquarters are located).

Given that the BND has two divisions, one for software analysis and another for reverse engineering, you have to assume that through Heartbleed, even if it was not authored by secret services, the BND and NSA had access to the secret keys of most major web servers.

But they work in a failure-tolerant way. Perhaps this is why OpenSSL contains a ROP-friendly entry point that makes it easier to hack the system:

http://freshbsd.org/commit/openbsd/f868fc6f39a2c45a6c2bab70addc92525d467904

Unfortunately, the scariest bugs in these libraries must be assumed to be very hard to detect. For example, the OpenBSD developers noted that OpenSSL had a strange fallback method for the case where the random number generator runs out of entropy:

http://www.openbsd.org/papers/bsdcan14-libressl/mgp00017.html

In their video, https://www.youtube.com/watch?v=GnBbhXBDmwU

the OpenBSD developers note that if an attacker makes the system believe it is running out of entropy, he gains an attack surface, since the “randomness” that OpenSSL then provides is suddenly predictable.

The problem is that a typical programmer, or even someone who works on encryption algorithms at a university, does not have the long practical experience that these BND and NSA people have in weakening encryption algorithms.

It is difficult to answer, for example, how much easier an SSL-encrypted communication becomes to crack with a Cray supercomputer if the PRNG is seeded with gettimeofday instead of real entropy.

But the folks at BND and NSA know this.
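To make the danger concrete with a toy sketch (hypothetical names and constants; none of this is OpenSSL code): if the only “entropy” is a microsecond timestamp, an eavesdropper who knows the handshake time to within a fraction of a second can simply brute-force every candidate seed:

```c
#include <stdint.h>

/* Toy "key generator": a PRNG seeded only with a gettimeofday()-style
 * microsecond timestamp. The xorshift-style mixing steps below are each
 * invertible, so distinct seeds always give distinct outputs. */
static uint64_t keygen(uint64_t seed_us)
{
    uint64_t x = seed_us ^ 0x9E3779B97F4A7C15ULL;
    for (int i = 0; i < 3; i++) {
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
    }
    return x;
}

/* Attacker model: the eavesdropper knows the connection time only to
 * within `window` microseconds and tries every candidate seed. With a
 * 0.1 s window that is ~100,000 trials -- milliseconds of work. */
static uint64_t recover_seed(uint64_t key, uint64_t approx_us,
                             uint64_t window)
{
    for (uint64_t s = approx_us - window; s <= approx_us + window; s++)
        if (keygen(s) == key)
            return s;
    return 0; /* not found in the window */
}
```

The same search would be hopeless against a seed drawn from real entropy, because there would be no small window of candidates to enumerate.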

They have been practicing this kind of “cryptography research” since the 1970s, as the Spiegel article on Crypto AG quoted above shows.

One can assume that these agencies had much time to learn and improve their strange science of backdooring crypto algorithms.

The various weaknesses that they are now able to introduce into algorithms are probably very, very hard to detect.

These are the words from Spiegel in 1970:

“In more delicate cases the specialists reached deeper into the cryptographic box of tricks. […] What looked like impenetrable secret code to the good-faith users of the crypto machines was readable with no more than a finger exercise for the informed listener.

In the industry everybody knows how such affairs are dealt with,” said Polzer, a former colleague of Buehler. “Of course such devices protect against interception by unauthorized third parties, as stated in the prospectus. But the interesting question is: Who is the authorized fourth?”

For that reason, I welcome Google’s fork of OpenSSL, and that they want to work together with LibreSSL. They should cooperate closely and review each other’s code. The most important flaws in this library, whether the BND and NSA introduced them or merely exploit them, are probably very hard to detect.

Benni June 22, 2014 2:04 PM

It seems I do not like Google’s OpenSSL fork that much after all:

For example, the infamous ROP entry point that LibreSSL removed long ago

http://freshbsd.org/commit/openbsd/f868fc6f39a2c45a6c2bab70addc92525d467904

which was discussed at length by the OpenBSD developers as a security problem in their video:

https://www.youtube.com/watch?v=GnBbhXBDmwU#t=47m20s

is still there in google’s boringssl:

https://boringssl.googlesource.com/boringssl/+/3ffd70ec3692f577a947295152fb041ff4b8607b/crypto/cpu-x86-asm.pl

# This function can become handy under Win32 in situations when
# we don't know which calling convention, __stdcall or __cdecl(*),
# indirect callee is using. In C it can be deployed as
#
#ifdef OPENSSL_CPUID_OBJ
#	type OPENSSL_indirect_call(void *f,...);
#	OPENSSL_indirect_call(func,[up to $max arguments]);
#endif
#
# (*)	it's designed to work even for __fastcall if number of
#	arguments is 1 or 2!

&function_begin_B("OPENSSL_indirect_call");
	{
	my ($max,$i)=(7,);	# $max has to be chosen as 4*n-1
				# in order to preserve eventual
				# stack alignment
	&push	("ebp");
	&mov	("ebp","esp");
	&sub	("esp",$max*4);
	&mov	("ecx",&DWP(12,"ebp"));
	&mov	(&DWP(0,"esp"),"ecx");
	&mov	("edx",&DWP(16,"ebp"));
	&mov	(&DWP(4,"esp"),"edx");
	for($i=2;$i<$max;$i++)
		{
		# Some copies will be redundant/bogus...
		&mov	("eax",&DWP(12+$i*4,"ebp"));
		&mov	(&DWP(0+$i*4,"esp"),"eax");
		}
	&call_ptr	(&DWP(8,"ebp"));	# make the call...
	&mov	("esp","ebp");	# ... and just restore the stack pointer
				# without paying attention to what we called,
				# (__cdecl *func) or (__stdcall *one).
	&pop	("ebp");
	&ret	();
	}

Petrobras June 22, 2014 2:25 PM

@Andrew Wallace: “Some of his followers are on an anti government crusade and I don’t believe that represents Bruce or what he is trying to achieve.”

(1) Can you point to a handful of comments that you believe do not represent “Bruce or what he is trying to achieve”?

(2) Can you explain if and why Bruce should or would like to take action about them ?

Petrobras June 22, 2014 2:47 PM

I forgot to ask: as answers to question (1), please pick comments that are representative of the comments posted here, not outlier comments.

And please strike out “if and” from question (2) as your quote “My suggestion to Schneier is to clean up the blog in a new approach.” already answers that point.

Wael June 22, 2014 4:48 PM

@Benni
Re: OpenSSL
Informative video — thanks. Will summarize salient points:
1- Zero memory just before freeing it.
2- Disable debugging malloc. Make sure re-enabling it cannot be achieved by flipping a bit; it must be a compile-time control that is not overridden at runtime
3- valgrind tool
4- Trusting an entropy-gathering daemon! It can be replaced with a rogue that returns controlled entropy! Fix the OS, not the userland library. If you can’t, don’t do it.
5- fooling SSL into ending up de-referencing null pointers and crashing it during an SSL handshake. A vector of attack.
6- Principle of least authority (POLA) violated, in the sense that some “static” APIs should only be used internally — at file scope — but macros which #define static to “local,” together with exporting all the APIs in a public header file available to the “outside world,” provide the means for POLA violations. This is a violation at the “principle” level.
7- Forcing backwards compatibility with obsolete OSs introduces bugs and vectors of attack
8- Use of standard POSIX APIs allows more developers to be involved.
9- BIO_snprintf() doesn’t behave as snprintf and returns -1 when the buffer isn’t sufficient
10- BIO_strdup() ignores null.
11- ERR_add_error_data()
12- Hard coding numbers is OpenSSL style – not good. Not using sizeof because of backwards compatibility issues.
13- Big-endian AMD64 support: a QEMU virtual machine for hardware that doesn’t exist
14- NO_OLD_ANSI and NO_ANSI_OLD not the same for compiling options — adds confusion!
15- socklen_t has a surprising code flow. Too convoluted to describe here
16- ROP — 47:23 return oriented programming.
17- Started from the 1.0.1g 388k-line codebase, ripped out 90k lines, and added some stuff
18- FIPS isn’t a goal for LibreSSL. If someone wants to certify, fine, but no explicit code changes to attain FIPS certification, they say
19- quality of random data is the responsibility of the OS, not the library
20- BN zeroes memory because it’s typically used for crypto operations, and it’s important to delete intermediate values.
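Points 1 and 20 both come down to scrubbing buffers before they go back to the allocator. A minimal C sketch of that discipline (illustrative only, not the actual LibreSSL code; `wipe` and `secure_free` are made-up names) could be:

```c
#include <stddef.h>
#include <stdlib.h>

/* Wipe a buffer through a volatile pointer, so the compiler cannot
 * optimize the stores away as "dead" writes to soon-to-be-freed memory. */
static void wipe(void *p, size_t n)
{
    volatile unsigned char *q = p;
    while (n--)
        *q++ = 0;
}

/* Zero sensitive memory just before returning it to the allocator, so
 * freed heap chunks never carry key material that a later allocation
 * (or a bug like Heartbleed) could expose. */
static void secure_free(void *p, size_t len)
{
    if (p == NULL)
        return;
    wipe(p, len);
    free(p);
}
```

The volatile-qualified pointer is the important detail: a plain memset() before free() is exactly the kind of call an optimizing compiler is entitled to delete.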

Great effort by the OpenBSD crew!

Moderator June 22, 2014 5:06 PM

Nick,

Agreed — even if it fails totally, I’m interested to see how it fails.

The reason I thought this specific Turing test question might work well is that (1) it can’t be answered by looking it up on Wolfram Alpha, and (2) it doesn’t make sense if taken out of the context of this website. I have reason to believe that the spambots here do present CAPTCHAs to a human in some way, because I’ve seen what look like fragments of their logs posted as spam comments. I’m guessing that that involves passing along just the question/CAPTCHA, not actually sending people to the site. In that situation, a question like this may actually work better than a distorted-text CAPTCHA, while being trivial to answer for people actually on the site.

Of course the weakness of it is that the answer is fixed, and the spam software can simply be told what it is. We’ll see how long that takes to happen.

So far the spammers are just filling in the field with spam keywords and other garbage, so they don’t even recognize it as an antispam mechanism yet.

Everyone,

Let me just be very clear that the idea of restricting blog comments to “experts,” or managing them so that they represent Bruce’s own views of government, is not on the table. At all. So don’t worry about it.

Andrew Wallace June 22, 2014 6:23 PM

Incredulous said “What I think we are arguing about are policies and the right to know what the policies are so we can make democratic decisions. I believe that corporations and wealth lead to non-democratic governance.

However, the question of how to fix the situation at its core in a just and democratic way is a tough nut to crack. At least we can try to influence policies and provide people with information and alternatives to preserve the rights that are at the core of the constitutions of most of our governments.”

In cyber security the government and the private sector work in partnership. It has to be that way because the vast amount of citizen-based data is held by corporate firms.

The government doesn’t mind the data being held by the private sector, as it saves them money.

Cyber security cannot exist without the government and the private sector coexisting to govern the internet.

You get a lot of cross-over in interest between the government and the top web companies.

One is for public protection and surveillance the other for profit and marketing analytics.

This is nothing new either. It is people such as Snowden, among other means, who are making the issue mainstream and more widely known.

A lot of the time an issue is made mainstream to allow the government to press ahead with legislative change that needs to be publicly debated.

We are seeing a lot more things publicly disclosed and debated as the government updates from the analogue era into the digital one.

Benni June 22, 2014 6:39 PM

@Wael:
The work of the OpenBSD crew is certainly a heroic effort which deserves large financial support.

But speaking of great efforts, I think that introducing so many attack vectors into a library is an absolutely ingenious feat that the OpenSSL–BND–NSA–GCHQ team, sponsored by the defense ministry of the United States, has achieved here.

Usually, one would think a library with so many problems would not work at all. But these super-talented OpenSSL developers actually made it run: a crypto library with thousands of attack vectors, openly written in a kind of C dialect in which nobody recognizes the security flaws at first sight. OpenSSL is, from the perspective of a covert SIGINT operation, an absolutely ingenious piece of work.

Nick P June 22, 2014 6:47 PM

@ Andrew Wallace

re blog style

The standards that failed to protect us were created by meetings of acknowledged expert minds along with some bureaucrats. This blog has produced much stronger technical, psychological and other measures for security. It also has a very diverse array of other content with many perspectives. Quite a treasure trove of information going back years that few other blogs compare to. The acknowledged experts contributed a significant portion, but so did people nobody saw coming (myself included). I always judge a thing by its results. The results tell me the blog is best the way it is.

I have considered building a forum dedicated to high assurance systems. These would include assurance in correctness, reliability, and/or security. The topics would range from hardware to software tools to economic incentives. Only people who have contributed something to the fields would be allowed to post. I’d probably do that through looking at work in academia, business, open source, etc. Referrals would be used, too. I’d probably copy the moderation approach used here, along with letting thread creators moderate their own threads so long as discussion isn’t stifled. The board might be opened to the public for reading, although private threads/messaging would be available. It would also serve as a subversion resistant repository of software, designs, documentation and so on. It would cost a small monthly or yearly fee with waivers available where reasonable. The project is a bit too big to manage for me right now but I’m posting it here in case it inspires someone else to do it.

re anti-government commenters

The interesting thing about that is that consumer crypto got started through those types. Groups of geeks with libertarian type ideas, such as Cypherpunks, started creating many ways to be private and anonymous on the Internet. They did this because they didn’t want to trust the government to respect their freedoms. The theory is that an architecture with less trust is always superior. (eg PGP vs Lavabit) The ideas dispersed to the point that people of many mindsets are involved today.

I’m actually not seeing the current reactions as anti-government. There’s been a few posts of that nature. Most of it, though, is aimed at a government that serves its own interests, has little to no accountability, and mistreats/deceives its people. These are specific governments doing specific types of abuse. Some people want more accountability, some want punishments, some want certain programs gone entirely due to understandable distrust, and a few hate government in general. We also have to remember what the TLA’s have been doing to motivate this attitude. That conspiracy nuts were more on the mark than the average American about what was happening says a lot.

Personally, I’m for a government with strong accountability to the point of prison sentences (that actually happen) for instances of corruption. Otherwise, corruption takes over and the government is essentially a mob representing elite’s interests. The U.S. government, for instance, mostly backs the interests of a privileged few while harming most of its citizens and many in foreign countries. Certain groups in it also won’t hesitate to kidnap, torture, or murder their opponents. People opposing that kind of government makes a ton of sense. I’m opposed. I think we can do much better as a republic.

@ Moderator

“(2) it doesn’t make sense if taken out of the context of this website”

That’s a clever idea. I’m going to try to remember to see if anyone in the security field is working on schemes just like that. Someone has to be. There’s probably a scheme waiting to be found that uses the principle while being as simple and low-admin-overhead as a CAPTCHA.

Wael June 22, 2014 6:58 PM

@Benni,
If you watch the video you posted, towards the end — the last 10 minutes or so, a question is asked regarding “subversion”. The presenter, Bob Beck, answers the question cautiously. He basically says: Don’t attribute to malice what can be attributed to incompetency. I tend to subscribe to this as well. Sure, such horrible code opens doors for attacks, but the main reason is incompetence and incoherency. Some of the code does seem to be a deliberate “attack vector / conduit” (ROP and heart bleed), but not all. I witnessed that first hand in one of the groups I was working with. We were being accused of defining the specifications and the API’s and security capabilities of the device to “allow privacy invasion” by state agents and big corporations. That was far from the truth, as one of the commenters replied to the question by saying: Guys, we are not thaaaat clever.

Now why aren’t you replying to my other question? Das kotzt mich an 😉

Wael June 22, 2014 7:00 PM

@Nick P, @Moderator
CAPTCHA is a horrible method. I am rarely ever able to read it the first time. So far, spam has not made it through! Good…

NCD June 22, 2014 7:46 PM

Perhaps you should have commenters transcribe a loyalty oath. Andrew Wallace can help you write one.

Chris Abbott June 22, 2014 8:04 PM

@Daniel:

As far as trust goes, we not only have to trust the people behind a product, but unfortunately, we have to trust everyone and everything they trust (software, compilers, OS, and so forth). I don’t see any way around this type of thing. I guess it’s like trusting the grocery store you buy fruit from that trusts that their supplier didn’t spill listeria all over it. Educating end-users would be helpful, but given how technical and complex all of this is, being difficult even for us, that’s somewhat of a pipe dream.

@Gerard:

They removed Twofish? Interestingly enough, I use Calomel SSL validator and have never seen Twofish used in a ciphersuite. I wasn’t aware of it being in there. Oddly enough, a few times, I’ve seen Camellia-128 used before. Yes, Camellia. Has anyone else ever seen that? I’d actually rather see a ciphersuite like Twofish-256, SHA-512, RSA-2048 than what I see on most sites.

As far as key exchange goes, what’s everyone’s opinion on ECDH versus DH?

Benni June 22, 2014 8:25 PM

@Wael:
“Don’t attribute to malice what can be attributed to incompetency.”

The Openssl guys have financial support from the Core Infrastructure Initiative.

And there is a fork from Openbsd which contributes code that could often be easily imported into Openssl. For example, all these cases about properly freeing memory, or checking the validity of pointers.

Also, I am absolutely certain that no one in the world would regret it if OpenSSL removed its MS-DOS support.

I mean, this is a rather simple delete operation. But the OpenSSL developers do nothing in that direction. Is this really just because of incompetence?

Please ask any first year computer science student

“is special MsDos support for a security critical library that is widely used today a good thing, or should it be removed?”

I have looked a bit at the openssl developer mailinglist, and the user mailinglist.

From what I have seen there, I had the impression that people often report bugs that do not interest the developers, or the developers make comments deliberately playing the bugs down in order to have a reason for not fixing them. But that is just my impression.

In its fork, google writes

https://www.imperialviolet.org/2014/06/20/boringssl.html

“We have used a number of patches on top of OpenSSL for many years. Some of them have been accepted into the main OpenSSL repository, but many of them don’t mesh with OpenSSL’s guarantee of API and ABI stability”

Apparently, someone at google got the same feeling. You send them fixes, and for some reason they do not apply them.

For example, a “guarantee of API and ABI stability” means that OpenSSL apparently wants to keep its MS-DOS support forever.

Why does OpenSSL want that? Because of incompetence, or because they deliberately want an obfuscated, messy codebase?

There is this function that makes ROP attacks easy:

http://freshbsd.org/commit/openbsd/f868fc6f39a2c45a6c2bab70addc92525d467904

If OpenSSL has a “guarantee of API and ABI stability,” then it means OpenSSL intends to keep this ROP API forever and ever.

Why are they doing that? Introducing a hacking interface and then saying, “well, we have an ABI and API stability guarantee and therefore cannot remove any of this stuff”…

Yea, just incompetence, of course, nothing to see here, happens everyday, all systems OK…..

Incredulous June 22, 2014 8:37 PM

n.b. I accidentally posted this in an old thread, so I am posting it where it should be. I apologize for my carelessness.

I noted a couple of weeks ago that Pacific Northwest National Laboratory was inserting itself as the Fedora repository appropriate for my Latin American proxy IP address, which seemed unexpected and concerning.

Now I read that they are collaborating in research designed to undercut peaceful popular movements that are militating for social, economic and political change, such as Occupy:

“In 2013, Minerva funded a University of Maryland project in collaboration with the US Department of Energy’s Pacific Northwest National Laboratory to gauge the risk of civil unrest due to climate change. The three-year $1.9 million project is developing models to anticipate what could happen to societies under a range of potential climate change scenarios.”

(You need to read the whole article to understand the full creepiness factor: The Guardian This is a good article too: Common Dreams)

The laboratory seems like an interesting place to keep an eye on…

Andrew Wallace June 22, 2014 8:37 PM

Actually, it would be interesting to hear Bruce’s take on the CIA joining Facebook and Twitter. I don’t think he has mentioned it on his blog yet.

Wael June 22, 2014 9:00 PM

@Benni,
The main task is to identify the weakness and fix it, as the OpenBSD team is doing. Who put the weakness there is… secondary — it needs to be fixed regardless. I excluded Heartbleed and the ROP code from that; they remain “suspects” of deliberate weakness introduction, and circumstantial evidence points in that direction. However, I am not in a position to prove that some entity is doing this deliberately. There is no need to twist my words. Yes?

Benni June 22, 2014 9:05 PM

@Incredulous:
Often, research institutions are the only ones that provide Unix packages quickly, simply because universities have large computer departments where they can host the things they need for their own machines.

I would guess that the server of a national research facility is in most cases much faster than a privately funded server. At least this is the case in Germany. For example, when I’m at work at the research institution, I’m connected to the X-WiN. This network connects German universities and provides 1 Tbit/s: http://de.wikipedia.org/wiki/X-WiN . Compare: the NSA, with their program Rampart-A, gets 3 Tbit/s. Such a connection is especially good if you are running file-sharing applications, like retroshare http://retroshare.sourceforge.net/ on it.

I guess American research institutions provide similarly fast networks and servers. Although the codename “wharpdrive” for a mere 3 Tbit/s network suggests that the american ones are much lower in speed. But it is no wonder at all that Linux distros usually fetch their packages from scientific servers.

As the packages are signed, I would simply ping the servers and see which responds first; that one should then go in the list. There may be faster servers that the developers simply forgot to include in their list, so choosing one yourself is a good idea, but it’s not necessary for security.

Benni June 22, 2014 9:10 PM

@ Wael:

I did not intend to attack you.

I just find the case for incompetence of the OpenSSL developers rather weak. Yes, it is important to fix these problems. But what the OpenSSL folks are now doing is hiding behind some “ABI and API stability guarantee,” and on these grounds making an excuse for not fixing their code. At least that seems to be the situation so far.

At least they should change their habits now that they have support from the Core Infrastructure Initiative.

Wael June 22, 2014 9:28 PM

@Benni,

I did not intend to attack you.

I don’t worry about that, Kein Problem, Mein guter Freund. I am more worried that I come across as attacking others.

At least they should change their habits now that they have support from the Core Infrastructure Initiative.

I do admire the German mentality 🙂 it’s more productive to talk about actionable items. Snowden already revealed what you are trying to prove, no? What you showed is a possible instantiation of some of the techniques used, but still not a proof — that’s all I meant, no more, no less.

Nick P June 22, 2014 9:45 PM

@ Benni

re Educational Internet

The article and other links said the German network typically runs 10Gbps to 100Gbps depending on location. There was also 1Tbps capacity for core sites, which include most big name universities. Nice network.

“Although the codename “wharpdrive” for a mere 3 Tbit/s network suggests that the american ones are much lower in speed.”

Another poke at American tech, eh? The American equivalent is Internet2. Its standard connection is 100Gbps and max was upgraded to 8.8Tbps in 2011. So, that’s 100Gbps-8.8Tbps here vs 10Gbps-1Tbps in Germany. “Lower in speed?” Yes, the German network is. 😉

Mike the goat June 23, 2014 10:43 AM

Wael: I’m glad to hear I am not the only one who has to try ten times to get through a captcha-wall. Sometimes I even resort to the “spoken for the blind” button.

Nick P June 23, 2014 12:07 PM

Time for more papers. Today’s topic is obfuscation of operation at the chip level.

Arc3D: A 3D Obfuscation Architecture (2005)

Abstract: “In DRM domain, the adversary has complete control of the computing node – supervisory privileges along with full physical as well as architectural object observational capabilities. Thus robust obfuscation is impossible to achieve with the existing software only solutions. In this paper, we develop architecture level support for obfuscation with the help of well known cryptographic methods. The three protected dimensions of this architecture Arc3D are address sequencing, contents associated with an address, and the temporal reuse of address sequences such as loops. Such an obfuscation makes the detection of good tampering points infinitesimally likely providing tamper resistance. With the use of already known software distribution model of ABYSS and XOM, we can also ensure copy protection. This results in a complete DRM architecture to provide both copy protection and IP protection.”

A Secure Processor Architecture for Encrypted Computation on Untrusted Programs (2012)

Abstract: “This paper considers encrypted computation where the user specifies encrypted inputs to an untrusted program, and the server computes on those encrypted inputs. To this end we propose a secure processor architecture, called Ascend, that guarantees privacy of data when arbitrary programs use the data running in a cloud-like environment (e.g., an untrusted server running an untrusted software stack). The key idea to guarantee privacy is obfuscated instruction execution; Ascend does not disclose what instruction is being run at any given time, be it an arithmetic instruction or a memory instruction. Periodic accesses to external instruction and data memory are performed through an Oblivious RAM (ORAM) interface to prevent leakage through memory access patterns. We evaluate the processor architecture on SPEC benchmarks running on encrypted data and quantify overheads.”

Proactive Obfuscation (2010)

Abstract: “Proactive obfuscation is a new method for creating server replicas that are likely to have fewer shared vulnerabilities. It uses semantics-preserving code transformations to generate diverse executables, periodically restarting servers with these fresh versions. The periodic restarts help bound the number of compromised replicas that a service ever concurrently runs, and therefore proactive obfuscation makes an adversary’s job harder. Proactive obfuscation was used in implementing two prototypes: a distributed firewall based on state-machine replication and a distributed storage service based on quorum systems. Costs intrinsic to supporting proactive obfuscation in replicated systems were evaluated by measuring the performance of these prototypes. The results show that employing proactive obfuscation adds little to the cost of replica-management protocols.”

Using Address Independent Seed Encryption and Bonsai Merkle Trees to Make Secure Processors OS- and Performance-Friendly (2007)

Abstract: “…researchers have proposed designs for secure processors which utilize hardware-based memory encryption and integrity verification to protect the privacy and integrity of computation even from sophisticated physical attacks. However, currently proposed schemes remain hampered by problems that make them impractical for use in today’s computer systems: lack of virtual memory and Inter-Process Communication support as well as excessive storage and performance overheads. In this paper, we propose 1) Address Independent Seed Encryption (AISE), a counter-mode based memory encryption scheme using a novel seed composition, and 2) Bonsai Merkle Trees (BMT), a novel Merkle Tree-based memory integrity verification technique, to eliminate these system and performance issues associated with prior counter-mode memory encryption and Merkle Tree integrity verification schemes. We present both a qualitative discussion and a quantitative analysis to illustrate the advantages of our techniques over previously proposed approaches in terms of complexity, feasibility, performance, and storage. Our results show that AISE+BMT reduces the overhead of prior memory encryption and integrity verification schemes from 12% to 2% on average, while eliminating critical system-level problems.”

Compiler-Assisted Memory Encryption for Embedded Processors (2007)

Abstract: “A critical component in the design of secure processors is memory encryption which provides protection for the privacy of code and data stored in off-chip memory. The overhead of the decryption operation that must precede a load requiring an off-chip memory access, decryption being on the critical path, can significantly degrade performance. Recently hardware counter-based one-time pad encryption techniques [11, 13, 9] have been proposed to reduce this overhead. For highend processors the performance impact of decryption has been successfully limited due to: presence of fairly large on-chip L1 and L2 caches that reduce off-chip accesses; and additional hardware support proposed in [13, 9] to reduce decryption latency. However, for low- to medium-end embedded processors the performance degradation is high because first they only support small (if any) on-chip L1 caches thus leading to significant off-chip accesses and second the hardware cost of decryption latency reduction solutions in [13, 9] is too high making them unattractive for embedded processors. In this paper we present a compiler-assisted strategy that uses minimal hardware support to reduce the overhead of memory encryption in low- to medium-end embedded processors. Our experiments show that the proposed technique reduces average execution time overhead of memory encryption for low-end (medium-end) embedded processor with 0 KB (32 KB) L1 cache from 60% (13.1%), with single counter, to 12.5% (2.1%) by additionally using only 8 hardware counter-registers.”

On the Secure Obfuscation of Deterministic Finite Automata

Abstract: “In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some “small” secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong “virtual black box” property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.”

A Flexible Framework for Secure and Efficient Program Obfuscation (2013)

Abstract: “In this paper, we present a modular framework for constructing a secure and efficient program obfuscation scheme. Our approach, inspired by the obfuscation with respect to oracle machines model of [4], retains an interactive online protocol with an oracle, but relaxes the original computational and storage restrictions. We argue this is reasonable given the computational resources of modern personal devices. Furthermore, we relax the information-theoretic security requirement for computational security to utilize established cryptographic primitives. With this additional flexibility we are free to explore different cryptographic building-blocks. Our approach combines authenticated encryption with private information retrieval to construct a secure program obfuscation framework. We give a formal specification of our framework, based on desired functionality and security properties, and provide an example instantiation. In particular, we implement AES in Galois/Counter Mode for authenticated encryption and the Gentry-Ramzan [13] constant communication-rate private information retrieval scheme. We present our implementation results and show that non-trivial sized programs can be realized, but scalability is quickly limited by computational overhead. Finally, we include a discussion on security considerations when instantiating specific modules.”

Hardware Assisted Control Flow Obfuscation for Embedded Processors (2004)

Abstract: “…However, as this paper points out, protecting software with either encryption or obfuscation cannot completely preclude the control flow information from being leaked. Encryption has been widely studied and employed as a traditional approach for software protection; however, the control flow information is not 100% hidden by solely encrypting the code. On the other hand, pure software-based obfuscation has proved inefficient at protecting software due to its lack of theoretical foundation and the considerable performance overhead introduced by complicated transformations. Moreover, even though obfuscation can prevent static reverse engineering, an attacker can still successfully bypass the obfuscation by monitoring the dynamic program execution. To address all of these shortcomings, this paper presents a hardware-assisted obfuscation technique that is capable of obfuscating the control flow information dynamically. Dynamic obfuscation changes the memory access sequence on-the-fly and conceals recurrent instruction access sequences from being identified. Our scheme makes it provably difficult for the attacker to extract any useful information. Our results show that a high level of security protection is possible with only a minor performance penalty. Finally, we show that our scheme can be implemented on embedded systems with very little hardware overhead.”

Embedded Software Security through Key-Based Control Flow Obfuscation (2011)

Abstract: “Protection against software piracy and malicious modification of software is proving to be a great challenge for resource-constrained embedded systems. In this paper, we develop a non-cryptographic, key-based, control flow obfuscation technique, which can be implemented by computationally efficient means, and is capable of operating with minimal hardware support. The scheme is based on matching a series of expected keys in sequence, similar to the unlocking process in a combination lock, and provides high levels of resistance to static and dynamic analyses. It is capable of protecting embedded software against both piracy as well as non-self-replicating malicious modifications. Simulation results on a set of MIPS assembly language programs show that the technique is capable of providing high levels of security at nominal computational overhead and about 10% code-size increase.”
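The combination-lock idea can be sketched in a few lines (a toy model, not the paper's scheme; the key values and class name are invented):

```python
class SequenceLock:
    """Toy combination-lock gate: control flow advances only when the
    expected keys arrive in exactly the right order, and any wrong key
    resets progress, frustrating brute-force and replay attempts."""
    def __init__(self, expected):
        self.expected = list(expected)
        self.pos = 0

    def feed(self, key) -> bool:
        if self.pos < len(self.expected) and key == self.expected[self.pos]:
            self.pos += 1          # correct key: advance the lock
        else:
            self.pos = 0           # wrong key: reset, execution stays locked
        return self.pos == len(self.expected)

lock = SequenceLock([0x3A, 0x91, 0x5C])
assert not lock.feed(0x3A)
assert not lock.feed(0x91)
assert lock.feed(0x5C)       # full sequence matched: unlocked
lock2 = SequenceLock([0x3A, 0x91, 0x5C])
lock2.feed(0x3A)
assert not lock2.feed(0x5C)  # out-of-order key resets the lock
```

In the paper's setting the "keys" would be emitted at obfuscated points in the instruction stream and matched by minimal hardware, not by Python.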

HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection (2009)

Abstract: “Hardware intellectual-property (IP) cores have emerged as an integral part of modern system-on-chip (SoC) designs. However, IP vendors are facing major challenges to protect hardware IPs from IP piracy. This paper proposes a novel design methodology for hardware IP protection using netlist-level obfuscation. The proposed methodology can be integrated in the SoC design and manufacturing flow to simultaneously obfuscate and authenticate the design. Simulation results for a set of ISCAS-89 benchmark circuits and the advanced-encryption-standard IP core show that high levels of security can be achieved at less than 5% area and power overhead under delay constraint.”

Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks (2014)

Abstract: “Hardware is the foundation and the root of trust of any security system. However, in today’s global IC industry, an IP provider, an IC design house, a CAD company, or a foundry may subvert a VLSI system with back doors or logic bombs. Such a supply chain adversary’s capability is rooted in his knowledge on the hardware design. Successful hardware design obfuscation would severely limit a supply chain adversary’s capability if not preventing all supply chain attacks. However, not all designs are obfuscatable in traditional technologies. We propose to achieve ASIC design obfuscation based on embedded reconfigurable logic which is determined by the end user and unknown to any party in the supply chain. Combined with other security techniques, embedded reconfigurable logic can provide the root of ASIC design obfuscation, data confidentiality and tamper-proofness. As a case study, we evaluate hardware-based code injection attacks and reconfiguration-based instruction set obfuscation based on an open source SPARC processor LEON2. We prevent program
monitor Trojan attacks and increase the area of a minimum code injection Trojan with a 1KB ROM by 2.38% for every 1% area increase of the LEON2 processor.”

Authenticating executions for trusted systems (2013)

Abstract: “Constructing trustworthy computer systems requires validating that every executed piece of code is genuine and that the programs do exactly what they are supposed to do. However, pre-execution code integrity validations can fail to detect runtime compromises, such as code injection, return and jump-oriented programming, and illegal linking of code to compromised library functions. In this dissertation, we propose and investigate three distinct mechanisms for authenticating code execution at run-time. The common goal of these techniques is to detect run-time compromises, irrespective of
the specific type of attack that may have been carried out. Thus, these solutions are universal and are in sharp contrast to piecemeal solutions that have been proposed to detect specific types of attacks that compromise execution.

Our first technique does not rely on specialized hardware support within the
platform and uses a challenge-response mechanism to verify the signature of the execution program running on a remote host. The execution signature at randomly chosen points in the code is verified against a reference. This technique is limited by its use of run-time modifications of the binary and has a high execution overhead. Our next technique attempts to avoid these limitations by performing run-time authentication of control flow using existing hardware support in contemporary CPUs for branch tracing, a mechanism that was originally added to support debugging. As the program executes, its trace of taken branches, as logged by the tracing hardware, is verified against a reference control flow graph that has edges corresponding to control flow paths, by a separate and trusted control thread. The execution overhead for full authentication of the control flow path is considerable. However, used judiciously, this mechanism can authenticate the full control flow path in critical functions, such as frequent system calls, with a reasonably low execution overhead, as low as 20% or 30%. This technique does not require any modification of the binaries, so reentrancy is ensured.

The final mechanism recognizes and addresses the need for full execution authentication with a low overhead, where both code integrity and control flow integrity are simultaneously validated. This mechanism, called REV (Run-time Execution Validator), can be retrofitted into an existing out-of-order CPU pipeline. Prior to presenting REV, we also formulate the ideal requirements of a mechanism for authenticating execution and use them later to see if REV meets these requirements. REV not only authenticates the control flow path and the instructions along the execution path, but it also prevents the results of compromised executions from propagating to memory. REV offers a scalable solution that handles multiple execution modules, irrespective of their sizes, and does not require any binary modification, nor access to the source code. The assessment of REV using a cycle-accurate simulator shows that execution overhead is limited to an average of less than 2% on the SPEC benchmarks. REV thus meets all of the requirements of an ideal mechanism for authenticating code and control flow integrity at run-time.”
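The branch-trace checking in the second technique can be sketched as validating logged (source, destination) branch pairs against a reference control flow graph (a toy model; the addresses and CFG are invented for illustration):

```python
# Reference control flow graph: basic-block address -> set of legal
# successors (the edges a compiler or static analyzer extracted).
cfg = {
    0x400: {0x410, 0x430},
    0x410: {0x420},
    0x420: {0x400, 0x440},
    0x430: {0x440},
}

def trace_is_valid(trace) -> bool:
    """Check a logged trace of taken branches (src, dst) against the
    reference CFG; any edge absent from the graph flags a compromise,
    e.g. ROP/JOP redirecting control somewhere the program never goes."""
    return all(dst in cfg.get(src, set()) for src, dst in trace)

assert trace_is_valid([(0x400, 0x410), (0x410, 0x420), (0x420, 0x440)])
# A gadget jump shows up as an edge missing from the CFG:
assert not trace_is_valid([(0x400, 0x410), (0x410, 0x999)])
```

The real mechanism does this with the CPU's branch-trace facility and a trusted verifier thread; the sketch only shows the edge-membership check at the core of it.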

name.withheld.for.obvious.reasons June 23, 2014 2:38 PM

Some observations on the recently released NSPD-54

The tenet that “United States Policy” is designated by way of this National Security Presidential Directive is laughable. POLICY of the United States should be statutory in nature, not rule-based.

Labeling or classifying the directive as “TOP SECRET” represents a slap in the face to the public and Congress. As this is a presidential directive, where is the oversight that might otherwise be the role of Congress? Especially offensive is the inclusion of private sector entities, as this is the overlap of national security and private companies. Where’s the accountability in that? I can see companies hiding behind the “national security” or “state secrets” curtain.

Section 3 of page 1, Cybersecurity Policy, is a list of related orders (such as EO and other presidential directives) and the last item in the list is redacted…suggesting another classified policy.

Under Definitions, page 2, sections (a) and (b) were labeled secret; (a) describes “computer network attack” and (b) “computer network exploitation”. What the hell is wrong with our government? This is information clearly available in publications world-wide. This suggests to me that it is their use in the context of this policy that is at issue, and means that the government is intent on obscuring actions that it will take in our name. And if asked for an excuse: oh, I’m sorry, that’s to maintain national security.

Under Policy, paragraph 9, page 4, expands the scope of federal involvement:
“…to national security, national economic security, or public health or safety.”

Mike the goat June 23, 2014 2:47 PM

Nick: really interesting reading – esp ‘HARPOON’, but I have to wonder about commercial feasibility and, after all that expense, the relative efficacy. As I am completely unqualified to answer either of these questions, I’m only speculating…

Wael June 23, 2014 3:09 PM

@Nick P,
Thanks for the obfuscation links — something I am interested in at the moment.
Say, does Google present you with a CAPTCHA whenever you do a search? (Implying that you use them a lot, and they want to make sure you are not a robot.)

Nick P June 23, 2014 4:48 PM

@ Mike the Goat

Quite a few of these submissions and those from my last huge paper release report sub-10% performance drops with small chip utilization too. Some were closer to 1%. This is typically the case where the mechanism does one thing and is designed well.

Some have much larger performance impacts, more in the range of 30%-70%. The largest hit comes with oblivious or secure multiparty computing schemes. Although long thought impractical, one of these papers manages a 15x slowdown, putting it in the range of the first Java VMs. So, for the first time, that’s now practical in some applications.

@ Wael

Nah, Google presents me more options rather than less to ensure NSA has a continuing record of my online activity. People in Ft. Meade sleep better that way and don’t feel the need to have FBI SWAT teams “investigate” what I might be doing. 😉

Mike the goat June 23, 2014 5:15 PM

Nick: I guess it depends on the niche as to how important that performance penalty really is. I’d happily take a radically slower device if it could be (more) trustworthy.

Speaking of Google keeping tabs on users: a buddy of mine showed me something very interesting. He has an Android handset, stock unrooted, and installed the official Facebook app. Pretty much immediately after installation, the desktop FB interface started showing people in his “Do you know?” box that weren’t ‘friends of friends’ or in his home town. These people were pulled directly from his phone’s contact list. I explained to him that if he looked carefully at the Facebook application’s permissions he’d see that it has the privileges to do just that, and that nothing would surprise me about that particular Palo Alto institution. Okay, so it’s not related to Google, but I had to make some attempt at a segue.

DB June 23, 2014 9:20 PM

On many devices, you could just put in a “faster” processor (i.e. more expensive) to offset any slowdown due to security… Of course, it might be even better to use a processor designed from the ground up with better security than slapping some on top of an insecure architecture, so…

65535 June 23, 2014 9:33 PM

@ Benni

Your comments are quite interesting.

It takes me time to go through all of the links and material and digest the data. Thus, I might respond somewhat late. I do appreciate your work!

Nick P June 23, 2014 10:56 PM

Two more tagged architectures

HARDWARE SECURITY TAGS FOR ENHANCED OPERATING SYSTEM SECURITY (2013)

Abstract: “This paper addresses the design and implementation of a new tagging scheme for access control and information flow; specifically the implementation at the assembly language level for a zero-kernel operating system. We also discuss key lessons learned that we have not seen addressed in related literature.”

Hardware Enforcement of Application Security Policies Using Tagged Memory

Abstract: “We present the Loki tagged memory architecture, along with a novel operating system structure that takes advantage of tagged memory to enforce application security policies in hardware. We built a full-system prototype of Loki by modifying a synthesizable SPARC core, mapping it to an FPGA board, and porting HiStar, a Unix-like operating system, to run on it. One result is that Loki allows HiStar, an OS already designed to have a small trusted kernel, to further reduce the amount of trusted code by a factor of two, and to enforce security despite kernel compromises. Using various workloads, we also demonstrate that HiStar running on Loki incurs a low performance overhead.”

Note: Both of the above modify a SPARC architecture processor.
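The tag-check idea behind both designs can be caricatured in a few lines (a toy model, not Loki’s actual mechanism; the domain and tag names are invented):

```python
class TaggedMemory:
    """Toy tagged memory: each word carries a tag, and every access is
    checked against the current protection domain's rights for that
    tag, so policy is enforced below the kernel in 'hardware'."""
    def __init__(self):
        self.words = {}   # addr -> (tag, value)
        self.perms = {}   # (domain, tag) -> set of rights, e.g. {"r", "w"}

    def grant(self, domain, tag, rights):
        self.perms[(domain, tag)] = set(rights)

    def store(self, domain, addr, tag, value):
        if "w" not in self.perms.get((domain, tag), set()):
            raise PermissionError("write denied by tag check")
        self.words[addr] = (tag, value)

    def load(self, domain, addr):
        tag, value = self.words[addr]
        if "r" not in self.perms.get((domain, tag), set()):
            raise PermissionError("read denied by tag check")
        return value

tm = TaggedMemory()
tm.grant("app", "app-data", {"r", "w"})
tm.grant("kernel", "app-data", {"r"})        # even the kernel is constrained
tm.store("app", 0x1000, "app-data", 123)
assert tm.load("kernel", 0x1000) == 123
try:
    tm.store("kernel", 0x1000, "app-data", 0)  # kernel lacks the write right
    assert False
except PermissionError:
    pass
```

The point of the last check is the one Loki makes: with tags enforced beneath the kernel, even a compromised kernel cannot violate the application’s policy.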

@ Mike the Goat

One thing I was thinking about recently is licensing cost for processor architecture and baseline implementation. I promoted MIPS designs because they cost from $700,000-900,000 vs $1-15 mil for ARM. The SPARC architecture is open so you can freely build your own so long as you don’t call it SPARC. You can say SPARC-compatible or something like that. That’s nice given that open MIPS and ARM cores all mention they’re taking steps to avoid lawsuits, including using obsolete versions. I can’t find anyone talking about the price of SPARC licensing so, given it’s open, it might be much cheaper than MIPS. And a few security modifications for processors (examples above) already put it to use. And there’s quite a few open-ish cores. And OpenBSD always had good support for it. So, I see those being the selling points for going with a SPARC instead of the others.

Wait, you actually use a SPARC machine. Why the hell am I telling you of all people lol… (shrugs) (submits anyway)

Benni June 23, 2014 11:18 PM

@65535

I have written more links here on the specifics of the hardware and software that the BND is using for its mass surveillance:

https://www.schneier.com/blog/archives/2014/06/more_details_on_1.html#c6672958

Apparently, the BND was sold hardware from the Narus company by the NSA.

https://netzpolitik.org/2013/ard-fakt-bnd-nutzt-dieselbe-uberwachungstechnologie-wie-prism/

http://www.focus.de/politik/deutschland/geheimdienst-wirtschaftskrimi-beim-bnd_aid_190185.html

And then there is the interesting fact that the BND apparently stole some of the software for its mass surveillance.

The BND had an interest in database software that a small German company was to sell to Interpol.

The BND tried to ruin this company, which, however, successfully sued the BND agent.
Unfortunately, the agent had already influenced Europol to sign the contracts that were originally prepared for the victim with the BND’s company instead.

With the agent convicted of a crime, the project of a modern database for Europol simply died.

The developer of this database system has a security blog like Schneier’s (but her blog is unfortunately in German): http://blog.polygon.de/

The tale she tells of how the BND tried to ruin her is simply breathtaking:

http://blog.polygon.de/2013/07/26/zweipluszwei_i1/3286

http://blog.polygon.de/2013/07/26/zweipluszwei_i2/3298

http://blog.polygon.de/2013/08/04/zweipluszwei_i3/3303

http://blog.polygon.de/2013/08/05/technologiebeschaffung-nach-art-des-bnd-i-4-die-anfaenge-von-lernouthauspie-und-das-ende-von-metal-bei-siemens/3138

http://blog.polygon.de/2013/08/06/zweipluszwei_i5/3328

http://blog.polygon.de/2013/08/12/technologiebeschaffung-nach-art-des-bnd-i-6-was-bodenkamp-und-der-bnd-mit-dem-groessten-boersenbetrugsfall-in-europa-zu-tun-haben/3333

http://blog.polygon.de/2013/09/01/zweipluszwei_ii1/4098

http://blog.polygon.de/2013/09/02/zweipluszwei_ii2/4100

And finally, the BND was involved in one of the larger economic crimes in history. The same agent who tried to ruin that German database company wanted to steal language translation software from Lernout & Hauspie.

The point is that all these seemingly separate things can be pieced together:

The (stolen) database system,
the (stolen) language translation technology from Lernout & Hauspie,
and the surveillance hardware from the Narus company.

This article below from the German computer magazine c’t comes from the pre-Snowden era, but the tech journalists were still able to piece the different steps together:

http://www.heise.de/ct/artikel/Die-Bayern-Belgien-Connection-284812.html

The article shows how the German secret service collected the several different software components that are necessary for the analysis of communications from bulk surveillance…

Benni June 24, 2014 12:00 AM

Oh no, now I have read about the BND and Lernout & Hauspie:

http://blog.polygon.de/2013/08/12/technologiebeschaffung-nach-art-des-bnd-i-6-was-bodenkamp-und-der-bnd-mit-dem-groessten-boersenbetrugsfall-in-europa-zu-tun-haben/3333

http://www.heise.de/ct/artikel/Die-Bayern-Belgien-Connection-284812.html

The same BND agent who tried to ruin the German database company Polygon created several dozen companies, which he called “language development companies,” for separate languages like Farsi, Urdu, Bahasa, etc.

These companies then all signed contracts with Lernout & Hauspie for their translation and speech software. Their task was to create dictionaries so that the language software could implement speech-to-text, text-to-speech, and translator functions for many Arabic, African, and Asian languages.

Soon Lernout & Hauspie was valuable enough to… to…

yes, to swallow the American language companies Dictaphone and Dragon…

And that was certainly not to the NSA’s liking…

Soon journalists noted that the shareholder value of Lernout & Hauspie was an invention, simply made up.

In the summer of 2000, the managers of Lernout & Hauspie had a crisis meeting. They met at Capri, since Jo Lernout spent his holiday there. Participants were four top managers: the two founders, Jo Lernout and Paul Hauspie, the chairman Nico Willaert, and the BND agent “Stephan Bodenkamp,” who at the same time was busy trying to ruin the German database company Polygon.

(That Stephan Bodenkamp was an agent was revealed later by the German court which sentenced him in the case of Polygon against the BND.)

During the meeting, Stephan Bodenkamp told his conversation partners that they had not done sufficient lobbying, and that they had not devoted sufficient energy to nurturing their contacts in America…

The question is why in the world a BND agent runs to a crisis meeting of Lernout & Hauspie in order to advise the management…

In the end, Lernout & Hauspie went bankrupt.

With that, 6,000 employees had to be fired…

And the BND lost its attempt to take control of the American market for language software…

Figureitout June 24, 2014 12:26 AM

Why the hell am I telling you of all people lol
Nick P
–B/c he’s got experience w/ SPARC and he mentioned he’s interested in a secure chip build, that’s why. A verified SPARC is better than nothing, and it also avoids lots of problems in organizing something that will be extremely hard… At worst, it’s a practice run; at best it can be used for another run (stepping stones…). BTW, where’s “Nick’s note”? :p

Mike the goat June 24, 2014 1:52 PM

Anura: haha, you just reminded me of the Boston bomb scare. I can’t believe the police thought it was an explosive device. Seriously, if you’re going to plant a bomb, why decorate it with LEDs in the shape of a Mooninite?

Nick: yeah, an ancient SPARC machine 🙂 I like the idea of using a SPARC core, if only for the licensing – as you’ve already mentioned. That said, I like the architecture. I’ll check out those links you’ve referenced.

Benni June 24, 2014 2:33 PM

With the agent Bodenkamp convicted of a crime, one could believe that the BND’s plan of selling its own software to Europol, in order to backdoor it for spying on suspects, did not work out.

However, in June 2014, the German internet provider Deutsche Telekom just swallowed another leading developer of police software: http://goo.gl/d0RUhk

Wikileaks revealed some time ago that Deutsche Telekom is the hardware provider of the BND: http://goo.gl/MlgAZ1

The hardware of the largest internet node in the world, DE-CIX, is also provided by Deutsche Telekom. The BND makes a full take of all data from DE-CIX and shares this with the NSA. So it is no surprise that Deutsche Telekom can invite the former NSA boss General Alexander to its conferences, as Spiegel notes: http://goo.gl/HO9FO7

It is mentioned by Heise that the security company which Telekom just swallowed is, for some reason, often preferred in government projects: http://goo.gl/QjCoce

Perhaps it falls into place that the company is also a specialist in surveillance technology and signals intelligence (http://goo.gl/IIXZ3A), so the project of police software produced by BND associates may finally become reality.

Incredulous June 24, 2014 3:16 PM

@ Benni

“Often, research institutions are the only fast ones that publicly provide unix packages.”

Perhaps it is only that. But the US government has shown itself to be untrustworthy, has it not? They are not welcome providers of software for my system. Who knows whether the signing key has been compromised? Or the server could simply deliver old, legitimately signed but dangerously obsolete software, perhaps to targeted IP addresses. Why wouldn’t the government do that? It’s not like they have any reputation left to protect. Or any ethics that would forestall such actions.

The only mechanism I can find documented to change the update server is to change the yum config to specify an ip address with a different location, as I described in my earlier post.

In any case, why does this supposed DOE (Department of Energy) laboratory have its fingers in so many pies? Collaborating in research to undermine grassroots movements? I suspect that each department now has its own intelligence apparatus. Unfortunately, power seeking yields more power seeking, and there is little to restrain the militarization of the whole government. Except the citizens; may they wake up in greater numbers soon.

Until then I am keeping my eye on this laboratory and doing my best to keep them out of my system.

Mike the goat June 24, 2014 5:11 PM

Nick: one other thing the SPARC processor has going for it is scalability. If you recall, the Sun Fire 15K could sport over a hundred processors. I know it would adversely affect performance, but having some kind of ‘extreme SMP’ in our hypothetical virtualization layer, where small fragments of code execute on different cores, would certainly further obfuscate what we’re doing. Each processor module would only have a compartmentalized and likely useless ‘view’, and you could execute decoy instructions, with cores chosen pseudorandomly, to further frustrate differential analysis. But I guess this is kind of defeated by the fact that the hypervisor that controls all of the cores would know the true state, and anyone familiar with how we’ve laid things out would presumably know to invest their effort in attacking that.
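A toy sketch of that core-scattering idea (everything here is speculative and illustrative: the scheduler, the decoy rate, and the hypervisor-held seed are all invented):

```python
import random

def schedule(fragments, n_cores, seed, decoy_rate=0.5):
    """Toy scheduler for the scheme above: real code fragments land on
    pseudorandomly chosen cores, interleaved with decoy fragments, so
    any single core sees only a compartmentalized, likely useless view."""
    rng = random.Random(seed)  # the hypervisor-held seed is the "true state"
    plan = []
    for frag in fragments:
        if rng.random() < decoy_rate:              # maybe slip in a decoy first
            plan.append((rng.randrange(n_cores), "decoy"))
        plan.append((rng.randrange(n_cores), frag))
    return plan

plan = schedule(["f0", "f1", "f2", "f3"], n_cores=8, seed=1234)
# The real fragments keep their overall order for whoever holds the seed...
assert [f for _, f in plan if f != "decoy"] == ["f0", "f1", "f2", "f3"]
# ...and every assignment targets a valid core.
assert all(0 <= core < 8 for core, _ in plan)
```

As the comment notes, the weak point is exactly the seed/plan holder: whoever compromises the hypervisor gets the whole picture back.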

Nick P June 24, 2014 7:14 PM

@ Mike the Goat

You might find this MPP/NUMA security architecture interesting (skip to the bold heading). It’s one of my few truly original works and can integrate almost anything I’ve posted here without middleware killing performance. I also found prior art for the most critical component in 20+ year old systems. The only problem might be the cost to develop and operate: the interconnect switch, cables, and custom chips, that is. The individual boards can get pretty cheap.

Which is where this budget, SPARC-based DSM machine comes into play. 😉 It required just two custom chips, theoretically scaled to 512 nodes, and was actually built to 128, I think. You could always pay SGI to port their NUMAflex architecture to it, too, as they focus on technical computing rather than secure business computing. That they’re not really the competition, and especially not Oracle, are both nice points for me.

Figureitout June 24, 2014 11:39 PM

Russia wants to replace US computer chips with local processors (w/ an ARM-Cortex lol…)

http://en.itar-tass.com/economy/736804

Guess they’re comfortable w/ UK, what can you say though, ARM dominates the market. Curious about the “TrustZone”

It is suggested that in booting the device, a complete “root of trust” process be used. In many cases, this would be done via an integrated Boot ROM that runs the base OS and then loads the monitor and SecureOS. Once completed the SecureOS would then launch the traditional rich OS, ensuring that no malicious code can enter the process.

And the “TrustZone Address Space Controller (TZASC)” sounds just like another MMU…All talk I guess until you get your hands on one and then hey that’s what they want…
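The quoted boot flow is essentially a hash chain: each stage measures the next image against an expected digest before handing control to it. A toy sketch (the stage names, contents, and where the reference digests live are all illustrative):

```python
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

# The Boot ROM holds the expected measurement of the first stage;
# each stage in turn holds the measurement of the one after it.
monitor_img = b"monitor code"
secure_os_img = b"SecureOS code"
rich_os_img = b"rich OS code"

chain = [
    (monitor_img, digest(monitor_img)),      # verified by the Boot ROM
    (secure_os_img, digest(secure_os_img)),  # verified by the monitor
    (rich_os_img, digest(rich_os_img)),      # verified by the SecureOS
]

def boot(stages) -> bool:
    """Refuse to hand off control if any stage's measurement differs
    from the expected value baked into the prior (trusted) stage."""
    return all(digest(img) == expected for img, expected in stages)

assert boot(chain)
tampered = [(b"evil monitor", chain[0][1])] + chain[1:]
assert not boot(tampered)  # a modified stage breaks the whole chain
```

Of course, this is only as good as the anchor: if the ROM’s reference values or the verification code itself can be swapped out, the “root” isn’t a root at all, which is the complaint in the comments that follow.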

Wael June 25, 2014 12:41 AM

@Figureitout,

It is suggested that in booting the device, a complete “root of trust” process be used…

“Root of trust” looks good on paper until you see some real world implementations. I saw one where the stem and branches created the “root of trust!” This one sucked on paper too! It’s an expression sometimes used to impress those who haven’t heard of it. If you can’t impress them with brilliance, baffle ’em with a “root of trust”.

name.withheld.for.obvious.reasons June 25, 2014 2:43 AM

@ Weal

“‘Root of Trust’ looks good on paper until you see some real world implementations.”

Is the “Root of Trust” something Keith Alexander is smoking–or–is it the people hiring him @ 600K a month?

Wael June 25, 2014 6:01 AM

@ name.withheld.for.obvious.reasons,
Lol! Perhaps both? I have some of that stuff 🙂
Weal, eh? Spell checker, I guess…

name.withheld.for.obvious.reasons June 25, 2014 8:22 AM

@ Wael

Transposition error–not always wearing the editor’s hat (or codpiece). So in short, my bad! You know it ain’t broke until I fix it. Maybe I was thinking of an ale instead of a stout (like Guinness). In fact, that was my response to Bush’s statement right after 9/11, “You’re either with us or against us!” (besides the binary thinking of the supposed leader of the free world): I responded by saying “I’ll have another Guinness”. Cheers

Scared June 25, 2014 1:16 PM

Judge Rules U.S. ‘No-Fly List’ Violates Constitution

http://www.businessweek.com/articles/2014-06-25/judge-rules-the-government-no-fly-list-doesnt-fly#r=hpt-fs

Thirteen people who had been denied airline boarding sued U.S. Attorney General Eric Holder and the FBI in 2010, contending that the list violated their rights. The plaintiffs live in several states, and include Sheikh Mohamed Abdirahman Kariye, the imam of Portland’s largest mosque. Two plaintiffs are former Marines, one served in the Army, and another was in the Air Force. Some have been unable to visit family, have lost job opportunities, and have been unable to travel to Saudi Arabia for the hajj, a fundamental pilgrimage for Muslims. Several said they had been detained and questioned abroad as they tried to travel to the U.S. because of the list. (The government doesn’t inform people when they’re placed on the list, and the Justice Department did not confirm or deny whether any of the plaintiffs are on the list; Brown therefore assumed that the plaintiffs’ contention that they are on the list was true.) “The Court concludes international travel is not a mere convenience or luxury in this modern world,” Brown wrote in her decision. “Indeed, for many international travel is a necessary aspect of liberties sacred to members of a free society.”

Nick P June 25, 2014 1:16 PM

@ Figureitout

“Where’s Nick’s Note?”

Lol. I was rushing it out, so I didn’t bother to analyze them. The most practical ones are a bit self-explanatory, though. 🙂

Figureitout June 25, 2014 11:18 PM

“Root of trust” looks good on paper until you see some real world implementations.
Wael
–Yeah, I’m finding that out; all talk. I still can’t get over the fact that making a secure CPU, or even worse a SoC, is so damn hard; some new fundamental concept needs to be invented. Surely ARM has people capable, but they’ve got business flowing so they just let it be. But the engineers have a nice lab, lots of components, nice equipment to do a little something on the side…

BTW, I remembered you talking about “line of sight” “being done right”. Well, I happened to come across an article (don’t ask me where! 🙂) talking about non-line-of-sight being turned into line of sight, simply, just like you’d expect… w/ mirrors. What further interested me was the “interference coating” on the mirrors.

http://www.laserfocusworld.com/articles/print/volume-50/issue-06/newsbreaks/uv-optical-antenna-enables-short-range-nonline-of-sight-broadband-access.html

Nick P
The most practical ones are a bit self-explanatory, though. 🙂
–Good, it’s about time. I don’t have time for theoretical games. Let’s get building. :p

65535 June 26, 2014 2:04 AM

@ Nick P

Great links! I have gotten through some of them and they look worthy of further investigation.

@ Benni

Your posts contain nasty aspects of the NSA/BND combination. There should be an investigation of both agencies (in the open – not behind closed doors).

Nick P June 26, 2014 4:56 PM

@ Clive Robinson and other chip design enthusiasts

Check out this radical, cellular, computing architecture:

http://www.zettaflops.org/fec05/Thomas-Sterling.pdf

The guy had been working on it for several years before this paper. It combines computation, memory, and communication into primitive processing elements. The overall hardware is a large 3D construct of layers, each a grid of these and other logic (e.g., external I/O connectors). The author also maps a parallel computing model onto it that even uses continuations. The idea is to achieve exascale computing by using an architecture that eliminates most bottlenecks caused by traditional Turing-style architectures. It’s quite an interesting design.

Far as security, I’d use a guard with data validation, a safe API, and certifying compilers. It’s too early to worry about the security of the Continuum Computing Architecture, though; we know it will take a few decades at the least. So, I’ll stick to putting a less radical computer in front, using methods I can actually understand. 😉

Mike the goat June 27, 2014 4:46 PM

Nick: Sorry I was away from the forum for 48h…. interesting link — but I wholeheartedly agree with you re choosing something more conservative. 😉
