Another QUANTUMINSERT Attack Example

Der Spiegel is reporting that the GCHQ used QUANTUMINSERT to direct users to fake LinkedIn and Slashdot pages run by—this code name is not in the article—FOXACID servers. There’s not a lot technically new in the article, but we do get some information about popularity and jargon.

According to other secret documents, Quantum is an extremely sophisticated exploitation tool developed by the NSA and comes in various versions. The Quantum Insert method used with Belgacom is especially popular among British and US spies. It was also used by GCHQ to infiltrate the computer network of OPEC’s Vienna headquarters.

The injection attempts are known internally as “shots,” and they have apparently been relatively successful, especially the LinkedIn version. “For LinkedIn the success rate per shot is looking to be greater than 50 percent,” states a 2012 document.

Slashdot has reacted to the story.

I wrote about QUANTUMINSERT, and the whole infection process, here. We have a list of “implants” that the NSA uses to “exfiltrate” information here.

Posted on November 13, 2013 at 6:46 AM • 51 Comments


pointless_hack November 13, 2013 8:06 AM

Congrats just on getting documentation!

It’s far more sophisticated than merely “bricking” a mobile device.

Clive Robinson November 13, 2013 8:46 AM


Exploiting web browsers appears popular with thieves, whether they are paid from their ill-gotten gains or by a government extorting ill-gotten taxes from its citizens.

Perhaps it’s time we rethink the way browsers work.

But one real annoyance is the number of websites with the “you’ve got to enable JavaScript” attitude, usually for some stupidity that can be done better another way.

So I guess as long as web designers carry on forcing everyday users into using JavaScript, the likes of the NSA, GCHQ and any other tin-pot dictatorship will continue to flourish…

Nicholas Weaver November 13, 2013 9:09 AM

Linkedin/Slashdot were for user identification.

You identify your target, and you parse the HTML enough to determine whether it’s your target visiting the page (both sites carry lots of such information). That lets you identify the user’s cookies, so that on a subsequent request you can packet-inject your redirection to a FOXACID exploit server.

It allows you to nail your victim when they are not at work, or to nail a specific victim (rather than everybody) within the work network. It will even work behind a NAT/proxy, since you have the distinct user’s cookies to attack with.
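The injection Weaver describes can be sketched in miniature: the attacker's packet carries a tiny HTTP redirect that must reach the victim before the real server's response does. A toy illustration of just the payload (the exploit-server URL is hypothetical, and a real injector would also have to forge the TCP/IP headers with the correct sequence/acknowledgment numbers to win the race):

```python
def build_redirect_shot(redirect_url: str) -> bytes:
    """Build the payload of a QUANTUM-style 'shot': a minimal HTTP 302
    that, if it arrives before the legitimate server's response, sends
    the browser to an attacker-controlled exploit server. Illustrative
    only; winning the race requires forged TCP seq/ack numbers too."""
    response = (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {redirect_url}\r\n"
        "Content-Length: 0\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return response.encode("ascii")

# Hypothetical FOXACID-style landing URL, for illustration only.
shot = build_redirect_shot("https://exploit-server.example/landing")
```

Because the response is syntactically ordinary HTTP, the browser follows the redirect exactly as it would any legitimate one, which is what makes the attack invisible to the user.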

CallMeLateForSupper November 13, 2013 9:14 AM

@ Roland Giersig
“It is “Spiegel”, not “Speigel”. ;-)”

Maybe “Spei…” (say “spy”) was a Freudian slip.
Or not. 🙂

Dirk Praet November 13, 2013 9:20 AM

A provocative thought:

Snowden at some point said that he could have wiretapped anyone’s e-mails, including the president’s personal account.
Now imagine he also had access to QUANTUMINSERT and one way or another managed to use it successfully against some of his NSA colleagues to compromise their systems, gain elevated access privileges and siphon off documents and information he normally would never have been able to get at. How totally wicked would that have been? It would also be a nice alternative to the bizarre story that he talked 20-25 people into handing him their passwords.

@ Clive

But one real annoyance is the number of websites with the “you’ve got to enable JavaScript” attitude, usually for some stupidity that can be done better another way.

Motion sustained. We need to get rid of all this useless Flash, ActiveX, Java and javascript stuff.

Douglas Knight November 13, 2013 9:40 AM

One difference between linkedin and slashdot is that the first forces https and the second forces http. Of course the server using https isn’t much help against malicious link insertion unless the browser knows to force https, but some do.
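The mechanism by which a browser “knows to force https” is HSTS (HTTP Strict Transport Security, RFC 6797): the server sends a `Strict-Transport-Security` header, and a compliant browser refuses plain-HTTP connections to that host for the stated period. As a sketch of what the browser has to parse:

```python
def parse_hsts(header_value: str) -> dict:
    """Parse a Strict-Transport-Security header value into a policy.
    Handles the two directives the header defines: max-age (seconds
    to pin HTTPS for the host) and includeSubDomains."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

policy = parse_hsts("max-age=31536000; includeSubDomains")
```

Note the limitation Douglas points out: the policy only protects visits after the browser has seen the header once (or has the site preloaded), so a first visit over plain HTTP is still injectable.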

Nicholas Weaver November 13, 2013 9:53 AM

LinkedIn uses HTTPS for login, but once you are browsing the site itself, it’s back to plain old HTTP.

edge November 13, 2013 10:33 AM

Any chance of studying one of these FOXACID machines in the wild? It would be interesting to see if there is anything that could be used to detect one (e.g. cert weirdness, response time anomalies, etc.).
Perhaps tell-tale malware could be detected after the fact. Can Belgacom or OPEC be persuaded to donate compromised machines to researchers?

Anura November 13, 2013 10:37 AM

@Dirk Praet

If you have Java or ActiveX enabled, you’re doing internet wrong. Flash is dying, but I don’t see how HTML5 is going to be so much more secure, and JavaScript isn’t going anywhere.

What I would love to see is a way to do permissions and isolation at the subroutine level. Macros, scripts, anything that’s potentially untrusted should have kernel-managed isolation turned on with just a property on the function. Not sure how this would actually work in practice, but if we ever want to put a serious damper on exploits, I think it’s a necessary step. This is on top of application-specific permissions; there’s no reason my browser needs to be able to read my gpg keyrings.
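Per-function kernel isolation like Anura describes doesn't exist off the shelf, but the coarse version — run the untrusted routine in a throwaway process with a stripped environment and a timeout — is easy to sketch. This is deliberately crude: emptying `__builtins__` is not a real sandbox, and the process boundary (which a kernel could further confine with seccomp or similar) is doing the real work here.

```python
import json
import subprocess
import sys

def run_untrusted(expr: str, timeout: float = 2.0):
    """Evaluate an untrusted expression in a fresh interpreter process
    with an empty environment and a timeout. A crude stand-in for the
    per-subroutine kernel isolation described above: the separate
    process, not the emptied __builtins__, provides the isolation."""
    code = (f"import json; print(json.dumps(eval({expr!r}, "
            "{'__builtins__': {}})))")
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True,
                          timeout=timeout, env={})
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip() or "untrusted code failed")
    return json.loads(proc.stdout)
```

A pure expression evaluates normally, while an attempt to touch the filesystem fails because `open` is not available in the stripped namespace; a production design would enforce that at the kernel level instead.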

CallMeLateForSupper November 13, 2013 11:48 AM

@ Clive @ Dirk
Re: Java, JavaScript, cookies, Flash, ActiveX

I previously shared, in at least one other thread, that I keep JavaScript OFF; the only exception is when I am actively engaged with Gmail (which requires JavaScript).

The same goes for cookies: I turn cookies OFF, and I delete them all after closing Gmail. My late brother-in-law had an opinion of cookies that was at once spot-on and hilarious: “Did a sales person in a traditional store ever observe you closely and continuously, scribble cryptic entries on Post-Its, and stick each Post-It on your body? How rude would that be?! Same with cookies; store ’em on your own hard drive!”

My nix systems have no Java. Nor do they have Flash. I *read* news, interviews, opinions, stories, etc., on the web; the only moving pictures I watch are on the television. It chaps my butt to follow links to a story, only to find a video, not a transcript.

I ripped Firefox off all of my systems a month or two ago because the radio button for turning JavaScript ON/OFF was removed from the new release. I attempted to tell the developers that I had abandoned Firefox, and my reason for doing so, via a post on the web site, but when I clicked Submit, nothing happened. That’s right: ya need to have JavaScript enabled. (sigh)

I used to rage at sites that refused to load – probably because they wanted to serve cookies and/or JavaScript – but reminding myself each time that I had good reasons for “saying no” eventually chilled me out. “Fair enough, [web-site], but it is your loss.”

Nick P November 13, 2013 12:47 PM

@ Nicholas Weaver

Excellent article and good analysis of their capabilities. The only thing I’d dispute is the conclusion you reached about encryption being the solution. The actual problem is that the protocols/libraries/standards that power most of the Internet are vulnerable to TLAs from multiple angles. Add to that the centralization in protocols like DNS and the CAs. Changing the situation will require dealing with all of that.

However, I’ve always advocated removing the low hanging fruit for attackers across the board as the first step to more secure computers. Better encryption and authentication of existing protocols can certainly help. Cookies are another problem that should be replaced, damage limited, or phased out. All native code executables interpreting web activity should be armored against code injection at a minimum (e.g. Native Client SFI) and at a maximum be designed for isolation of different domains/components (e.g. OP-style browsers). Application-level security in-page a la NoScript. Finally, the platforms themselves should have both a trustworthy boot process and be able to use it for recovery media in event of suspected compromise.

These are the most minimal requirements for safe[r] online activity. Not a single existing option meets all of them as far as I know. Yet, without addressing their major areas of attack, all the crypto in the world won’t save you when they rootkit the computer via unsafe protocols or code in the system.

peterxyz November 13, 2013 12:52 PM

apparently Charles Stross is upset that they’re stealing all of the good code names for his next book in The Laundry series

Fluffy November 13, 2013 2:02 PM

“If you have Java or ActiveX enabled, you’re doing internet wrong.”

Anura, others, I do realize that you’re the very most knowledgeable and select of internet users. Even I qualify as above average for using Noscript and toggling a few “always delete history” buttons on every browser I use.

However, if you aren’t thinking universally in terms of better security for all users — for making privacy and security de facto and normative for the vast majority………..

then you’re doing the internet wrong.

Flippant contempt for present normative practices is common among you techno-elite, and utterly beside the point. Java-enabled is the way most people function on the net, day in and day out. It’s forced upon them.

I.e. in the U.S., it’s impossible to use mandated middle school websites without it. If my daughter wants to complete homework assignments, I must disable damned near every sort of protection on my browsers. Sometimes I can’t even find every change I need to make in order to be naked enough to please these sorts of websites. Commercial sites are nearly as bad, and critically important for functional life if you’re a working parent.

The basic usage paradigms need to change. Chit-chat about the “lameness” of the average user is a distraction. Techno-Brahmin huffing about his hapless stupidity isn’t productive, isn’t right, and it’s so common you hardly see the damage it does to your own perspectives.

Your implicit disrespect for them is perfectly analogous to the disrespect afforded you by the security state elite. Do you really like your particular location on life’s totem pole?

I shouldn’t post here; all I do is scold and I have no technical expertise. But, this crap edifice we all function with was built by a subculture that made (and makes) some severe mistakes about humanity, dignity, and the worth of the average guy.

squarooticus November 13, 2013 2:19 PM

Fluffy: hear, hear!

If someone wants to feel superior to everyone else by turning off JavaScript, making 90% of the web unavailable to them, they can feel free to do so. The rest of us want the functionality only possible today with JS, so the question is truly, “How do we get this experience without the security issues?”, not “Why don’t people neuter the web in a futile attempt to keep the NSA out?”

Just to be clear, I’m not saying JS is a necessary part of a semantic and asynchronous web; just that it is the only universal solution today, which is why it is virtually required to navigate any modern website.

As a developer, I frankly refuse to waste my time catering to the tiny fraction of a tiny fraction of users who run plugins like NoScript: they simply aren’t worth my time. And anyway, that time would be better spent advocating for and developing secure technologies that replace insecure technologies without sacrificing functionality.

Anura November 13, 2013 2:33 PM

@Fluffy, squarooticus

Java and JavaScript are two different things. While your average user will probably get annoyed if JavaScript is disabled, Java applets are few and far between. I’ve had the Java plug-in disabled for several years now, and I think there have been maybe two or three occasions where I came across a website that used it, and no occasions where I felt the website was worth enabling it for. Malware, however, loves it.

ActiveX is even less common, as it only works in Internet Explorer. I don’t think I’ve run across a site making use of it in over five years; however, IE still has it enabled by default (although it requires a prompt).

It’s also a good idea to disable the Acrobat plugin; PDFs can be downloaded and prompted to load, instead of loaded automatically in the browser. It’s just one more point of attack for no gain; I’ve had it disabled for 5-10 years now, since in the past it would simply crash my browser.

Ben Richards November 13, 2013 2:47 PM

Squarooticus, I believe Fluffy is complaining in part about developers like you, who unnecessarily require him/her to disable security settings. Yes, there are some things that require JavaScript, or at least would be much trickier to code without it, but web developers seem addicted to using it even in instances where it is completely unnecessary, or worse, covering the whole page with “please enable javascript” notices instead of allowing users to view it with merely reduced functionality.

name.withheld.for.obvious.reasons November 13, 2013 3:38 PM

A while back I mentioned the disappearance of one of Bruce’s books (I purchased more than two in the last five years). Today Monty Python’s Holy Grail went AWOL, with a neat error message saying the video had expired, and was summarily removed from my device. But the backstory is starting to move forward. The process is being reniced and moved to the foreground. Event delivery across the comms can be expected. I’m forewarned and am exercising diligence and awareness.

omnipresent javascript November 13, 2013 4:31 PM

What about running the browser in a VM / sandbox? Presumably that would somewhat mitigate these attacks. Especially if you spin the VM up and down every time.

RobertT November 13, 2013 7:27 PM

I know I’m simultaneously using 4 physically different devices when web browsing just to try to limit the information flow. But Fluffy raises an interesting question about what good this level of personal security does when the vast majority of web users run without any protections whatsoever. Even if they try to use NoScript, they’ll turn it off eventually because some web sites (like Gmail, I believe) will not function at all with NoScript running.

This is precisely why I was saying email encryption is useless: most mail recipients are not running computers that are even remotely secure. If they are “interesting”, or their friends are “interesting”, then they are compromised; it’s that simple. Whatever mail I encrypt leaks from their machines just as surely as if I had left it as plain text (probably quicker, because encryption nurtures the false hope of real message security).

The perverse side of this problem is that the more cyber secure I become the more I personally stand out from the crowd and more resources, at both corporate and state levels, are devoted to compensate for the lack of data flowing into their databases from their normal sources.

Dirk Praet November 13, 2013 7:54 PM

@ Fluffy

Flippant contempt for present normative practices is common among you techno-elite, and utterly besides the point.

I wouldn’t call it contempt. The simple fact of the matter is that these technologies – however ubiquitous and normative – are among the most common internet attack vectors and are actively exploited not just by state actors but also by script kiddies, black hats and organised crime. Some of us may at times convey this message in less than optimal ways, but that doesn’t change the reality of its content or the risks associated with unmitigated use thereof.

Ultimately it is up to you: take the red pill or the blue pill, but the choice is yours.

Nick P November 13, 2013 9:24 PM

@ RobertT

+1 on most of that comment, esp. relying on physical separation instead of software-based alternatives.

Re NoScript

” Even if they try to use Noscript they’ll turn it off eventually because some web sites (like Gmail, I believe) will not function at all with NoScript running. ”

That’s not how NoScript works. When you need scripts, you have several options:

  1. Allow one or more specific domains to execute a script on a page.
  2. Allow all of a page’s scripts.
  3. Allow scripts globally (marked as “dangerous” in parentheses).

Most of the time a script-heavy web site will work if you choose “allow all this page.” Worst case, which I can’t recall running into, you can temporarily allow all scripts, then turn that off afterwards. People who want to put more effort into it can selectively enable scripts for certain web sites until they hit the sweet spot where the content displays but nothing else runs. IIRC, there are even site-specific profiles and a community around making them. It can also be combined with things like sandboxing (Sandboxie is well-tested), CFI, virtualization, etc. Finally, all the common actions are available from an icon next to the address bar.

All in all, it’s a well-designed solution that’s easy to get plenty of benefit from, even for lay users. A friend of mine on their support forums said he’s taught plenty of lay people to use the more advanced features. I can only imagine that “allow this site” is even easier. So, it’s certainly a nice solution for web-application-layer security at the client that lets people make their own tradeoffs.

Hell, I’m using it right now. 🙂

Buck November 13, 2013 9:43 PM

About NoScript…
The biggest problem is with web-developers not using it for development. Get basic functionality down first and foremost, then add optional fancy JS wherever you like.
If absolutely necessary to get the job done, please serve your scripts from a singular static domain!

Nick P November 13, 2013 11:35 PM

@ Buck

“If absolutely necessary to get the job done, please serve your scripts from a singular static domain!”

I feel your frustration. The worst part of advanced NoScript usage is figuring out what combination to turn on or off. It’s also aggravating to see scripts from over half a dozen domains I’ve never heard of required to access a piece of content with no embedded rich functionality. (grits teeth)

@ all in Javascript tangent

Javascript isn’t a problem: it’s a solution to a problem

When talking about Javascript on web sites, we must also remember two things about web sites:

  1. They’re the personal property of the creator.
  2. Their standards of operation respond to demand.

Point 1 says quite simply that they can do whatever they want with their content so long as there’s nothing illegal about it. “If you don’t like it, don’t use it.” In the U.S., that means people wanting non-JS alternatives to popular sites can feel free to create them. The site is an example of free expression and our Constitution protects their right to express themselves with Javascript. Just a fact of life here.

Point 2, the more important one, is one I’ve spent the past year or two trying to get into heads of those demanding security in apps/web. If the public (or market) doesn’t demand something in a market economy, public facing content providers won’t (and shouldn’t) care about it. If they’re also free to deliver content however they please, there’s no telling what specific deployment they’ll use based on any number of motivations. Data out there suggests standard, insecure web technologies are delivered by suppliers and accepted by consumers in over 99% of cases. Makes little sense for them to turn off Javascript.

The public wants eye candy, smarter sites, AJAX-style updates, more functionality… all working in their normal web browser with great speed and ease of use. Javascript fit right into their requirements over the years. The legacy requirement in particular will make it hard to get rid of. It’s currently the de facto deployment language for applications that must run in various web browsers. There’s tons of code and users. It might be required by ad networks willing to pay a site owner or other services they must interface with. And so on.

The current situation is simply a product of site owners’ freedom and the public’s demand. Until the public demands (with their wallets) security, we won’t see the market place provide it. Until the govt mandates a baseline, we won’t see companies reluctantly provide it.

So, widespread JavaScript is seen by the majority as a solution (or embedded into their solutions) rather than a problem. Javascript opponents must accept this and compromise with them somehow. Historically, all compromises have failed unless they allow legacy content and support Javascript. It’s why I promote cutting-edge web browser research into strong isolation of domains, trusted path, sandboxing untrusted code, and protection of confidential data. Such research has delivered several prototype browsers, one of which inspired Chrome’s security architecture. More recent ones are even stronger. Seems with the web you can have the best of both worlds if you engineer the solution properly. 😉

(Note: my opinions on securing javascript in web browser aren’t meant to imply I think the result will be secure. The web is insecure by design with much patchwork keeping it just safe enough to operate. I’m merely talking relatively of Internet baseline with JS vs w/out JS. JS has a lower impact on your security in a well-designed browser than you’d think.)

Figureitout November 13, 2013 11:46 PM

–You may like this: –

“Drive-by” malicious JavaScript: I find it one of the ugliest languages I’ve ever seen. This is what we get for letting marketers and MBAs drive the engineering process… Someone’s got to tell them NO! Go make ads!

–Yeah, you’ll get your functionality and a nice virus. It’ll enjoy the functionality of your pc.

I’m just sick and tired of bloated garbage I don’t want on my pc, pre-loaded crap, way too many peripherals I never use. Lately, I’m finding myself attracted to graphing calculators…

Figureitout November 14, 2013 12:16 AM

Dick I mean Nick P (joke)
According to the training presentation provided by Snowden, EgotisticalGiraffe exploits a type confusion vulnerability in E4X, which is an XML extension for Javascript.

So not really a “tangent” on javascript; it’s all along the curve. And if you call it a “solution”, it’s a hacky, fragile solution that will end up wasting more time than it’s worth.

It’s a “Fat Bastard” “Infinite Loop” problem we find ourselves in: we don’t demand new alternatives b/c they don’t exist, and we don’t make new alternatives b/c there’s no demand for them. Kill it w/ fire ‘cuz it sucks.

The Message That Can Stand On Its Own November 14, 2013 12:20 AM

Nick P

apropos Javascript

“If you don’t like it, don’t use it.” In the U.S., that means people wanting non-JS alternatives to popular sites can feel free to create them.

Bullshit criteria, simply because in most cases it is very difficult for users to determine what that Javascript even does.

Point 2, the more important one, is one I’ve spent the past year or two trying to get into heads of those demanding security in apps/web. If the public (or market) doesn’t demand something in a market economy, public facing content providers won’t (and shouldn’t) care about it.

Again bullshit because of two reasons:
1. regardless of your personal opinions, the “market” and the “public” are not some kind of individuals with a single voice.
2. (see point #1) in at least USA the situation is as follows:
If individuals of the “market”/”public” demand non-JS web sites, but potential profit from JavaScript (adverts, user tracking, etc) exceeds the benefits of listening to the “market”/”public”, then the “market”/”public” is conveniently ignored.

RobertT November 14, 2013 12:31 AM

@Nick P

It’s the family that has problems with NoScript. I find it very easy to use; they find it incomprehensible and an unnecessary limitation. Different strokes, different folks, I guess. However, it does create an ongoing problem for me to keep their computing resources completely separate from mine. Naturally they think I’m just being plain mean when I refuse to let them use one of my PCs and refuse to let any of their USB memory sticks anywhere near my stuff.

Good Opsec is hard work. My wife also seems to believe that no harm has been done if the computer didn’t actually self-destruct when the USB stick was inserted.

Nick P November 14, 2013 1:34 AM

@ RobertT

“It’s the family that has problems with NoScript. I find it very easy to use; they find it incomprehensible and an unnecessary limitation. Different strokes, different folks, I guess.”

Probably so.

“Good Opsec is hard work. My wife also seems to believe that no harm has been done if the computer didnt actually self destruct when the USB stick is inserted. ”

You have a wife casual about security and maintain OPSEC/INFOSEC on four devices. That must be fun. 😉 Btw, unless it’s confidential, what’s your spread of use cases or functions across the four? (eg why four) My old paranoid setup had one that was very isolated, one for benign apps, and one for risky stuff with a KVM switch to add ease of use. All were hardened to varying degrees with lots of work into a clean backup process. Worked well enough.

@ figureitout

“Dick I mean Nick P (joke)”

Ha. Playing devil’s advocate I can come off that way but someone’s gotta show their di… err, “willingness to question the status quo of a particular debate” every once in a while.

” And if you call it a “solution”, it’s a hacky fragile solution; that will end up wasting more time than it’s worth.”

Oh i totally agree. Many popular things in IT are in this category. I promoted alternatives to Javascript back in the day before it became ridiculously popular. There was great work in the 90’s and early 2000’s on doing things Javascript does in a safer way. Not just security either: read old papers on “mobile agents” with things like Telescript and Obliq languages to see some really out of the box thinking on client-server tech. For various reasons, developers and browser writers favored Javascript. Now it’s s*** we must live with. 🙁

Yet, like many other things, it continues to be improved. V8 gave it stunning performance. Highly compatible languages give it typesafety, easier development, etc. Cross-compilation like asm.js lets us reuse previously vetted native libraries or typesafe languages. Sandboxing libraries like JSand focus on making things safe for us and easy on developers. Sandboxing browser tech like Native Client or Gazelle takes that much further. Truth is, Javascript is a problem for us risk-averse types, but less of one every year from a technical standpoint. It just needs a lot of hand-holding for safe deployment.

@ The Message…

“Bullshit criteria, simply because in most cases it is very difficult for users to determine what that Javascript even does.”

What does knowing the internal details of a particular service/site have to do with creating an alternative with comparable content or features? Nothing. If markets do anything, it’s incentivize people to create compelling alternatives to the No. 1 in any particular category. If none are showing up or succeeding, that usually says something about demand: nonexistent or very weak.

Re point 1: they can reach consensuses and critical masses, though. If enough people care about something, someone will offer it for fame or fortune. It’s a pattern that keeps repeating itself, esp. in a market economy. Govt intervention, mainly I.P. enforcement, is the largest potential obstacle here for tech imho.

“2. (see point #1) in at least USA the situation is as follows:
If individuals of the “market”/”public” demand non-JS web sites, but potential profit from JavaScript (adverts, user tracking, etc) exceeds the benefits of listening to the “market”/”public”, then the “market”/”public” is conveniently ignored.”

I’m going to ignore the fact that, in those situations, the public isn’t the market: the advertisers are. The public is merely “the product” (source: Schneier). I’m not a fan of that particular choice of the majority. At least, I’d have preferred more safeguards or legal protections. Yet, even with advertising models, the users as a whole have ways to get site security or non-JS into play:

  1. Avoid sites that use Javascript or other risky web tech.
  2. Specifically use sites that take a safer approach and let them know that.
  3. Pay sites that put premium effort into security. (Lavabit’s model.)
  4. Push lawmakers to pass some kind of law regarding it, such as liability for site owner in event of JS-based threats.

This isn’t a comprehensive list. Each is an option for consumers. Yet, there’s been no mass effort against Javascript by either lay or IT people, personal or business. Even many security-loving types use JS with a bit of risk management thrown in. So, no demand in practice. A study by Yahoo showed the Javascript-disabled rate hovering around 1%. So, claims by some that websites acting on the issue would merely be catering to a minority with little to no impact on their success… are quite accurate. Even though they come off as jerks when they say it.

I don’t like the realities of Javascript dominance, but people can’t change reality if they don’t see it for what it is. The odds are in favor of the win going to better ways to isolate untrusted code from the system rather than no javascript. That the list of Javascript sandboxes, compatible replacements, “safe” libraries, etc keeps growing is only further indication of this.

My prediction: “Disable Javascript” will remain a niche option only appealing to around 1% of users. The benefits to site owners will continue to outweigh the negatives. They will continue to choose to leverage Javascript or related tech. And it will be the effective choice from a site owner’s perspective unless they’re serving very static, plain HTML content.

Figureitout November 14, 2013 1:53 AM

Nick P
–Well I don’t want to see your vagi–I mean dick. Stop dancing w/ the devil. I hate javascript, such a garbage language.

Now it’s s*** we must live with. 🙁
–No, I refuse. More garbage data for javascript. I’m getting too angry now so I’m done.

Aspie November 14, 2013 4:45 AM

@Figureitout – I feel your pain man and totally agree.
* My bandwidth, my electricity, my compute power. Therefore I’ll decide who it’s working for. If someone’s business model depends on using my compute power beyond an HTML-compliant browser to deliver their solution then that’s going to be a problem for them, not for me.
Probably web-designers rely more on higher-generation solutions that often inherently require JS for lower level functionality that /they/ depend on. The more layers the more ingrained the requirements.
For server farms it’s a godsend; make the poor schmuck on the remote end serve up the compute power needed to do gimmicky eye-candy things that they are probably no happier for seeing.
If a website requires JS, it won’t load on my system since Squid won’t even /load/ JS pages except for very limited exceptions.

Anyway, as many here have said, or at least not disagreed with, JS is pants and Java is for ’phones. HTML worked fine for years until the hucksters turned up and started stinking the web out with pages of virtually zero original content. I was gifted a subscription to Wrd many years ago. In the first issue I counted seven pages of actual content (which was in itself thinly veiled advertising for a product); the remaining 200+ pages were ads or fluff. The web is the same now, and JS has made it possible to distract people from this fact.

Muddy Road November 14, 2013 6:02 AM

NSA and GCHQ are beginning to sound like a vast criminal enterprise.

In the name of security they claim an inalienable right to secrets and deem everything they do “legal”.

What they do is legal simply because they say it’s legal.

At least 99% of the legislators involved in cyber law have no clue what they read or vote on. Also, laws are written by the agencies, lobbyists or defense contractors.

Brit spies appear to have it even easier. Brits seem to wallow in fear, secrets and spying.

It is likely any new law in the USA will make matters worse. The title may be “STOP NSA SPYING on AMERICANS”, but somewhere in the 600+ pages of legal gobbledygook will be express permissions, exemptions, exceptions and double talk allowing them to do whatever they want, “legally”.

I think it’s very sad that our country, which was once the light of liberty and freedom, has declared cyber war on every living person on earth.

It’s a war with no chance of victory or peace. It is what it is.

“We have met the enemy and he is us.”


Aspie November 14, 2013 6:35 AM

@Muddy Road

It’s the problem with law. Ideally a legal statement would be as brief and concise as possible.
However, the briefer it is, the more likely that concision is to be undermined by interpretation.
Objectivity gives way to subjectivity, and relativism gains ground over fundamentals.
At least, this has been my experience.

CallMeLateForSupper November 14, 2013 11:25 AM

@ RobertT
Re: your family using your ‘puters.

How about putting a HD dock in one system and supplying a HD in a tray, for family use. Put your HD in a second tray and store it in a safe place.

That kind of setup works well here. When someone buggers his system (HD) – which does happen, with stunning regularity – nobody else is affected. This modus was a natural result of my desire to keep my own multiple OS’s firewalled from each other. (My greatest fear was that a Windows would touch some other file or file system in an icky way.)

Nick P November 14, 2013 12:31 PM

@ Aspie

“In the first issue I counted seven pages of actual content (which in itself was thinly veiled advertising for a product) and the remaining 200+ pages were ads or fluff. The web is the same now and JS has made it possible to distract people from this fact.”

Very well said. It’s why I thought Consumer Reports was so refreshing: pay a little over 20 bucks, then get useful content regularly for a year instead of tons of ads. New Scientist is another one packed with content although I think it does have ads. Can’t recall cuz it’s been a while. Pop Sci had plenty of ads but also had about as many pages of content so it wasn’t so bad for me. That’s a tradeoff I tend to accept as turning a page or hitting “x” to get to something worthwhile isn’t so much trouble.

(Wired, the “men’s journal,” etc. Don’t get me started on those whopping encyclopedias of advertising pieces. F*** that.)

So, in light of my previous comments, I don’t read or use any site that overloads me with ads. Magazines either. I have alternatives for most of them. The one exception is YouTube. There’s so much useful/enjoyable content on there and their ad system is so thorough that I usually have to just suck it up. Course, if I am hit with a forced ad, I don’t quit being a rebel there: I mute it, look away from the screen and count until it’s probably gone. Or do something else in another tab/app. So, at least they had no effect on me other than inconvenience.

RobertT November 14, 2013 6:08 PM

@Nick P

Why Four personal Pc’s?:

Basically it’s what I have available at home. When travelling I reduce it to three devices: two pads and one laptop.

At home I have one laptop that is for general web browsing and some garbage email accounts. The main hardening is virtualization and sandboxing: basically, make sure that nothing on the computer is what it seems to be when viewed from the outside.

I have two laptops devoted to private / business email and other forms of personally identifiable communications. I do this because I’m very concerned with limiting the whole picture: TLAs and commercial entities want to develop as comprehensive a picture as possible, so, being a mean SOB, I’m actively denying them. I also think it is the only safe way to avoid phishing attacks, especially from malicious business insiders. These two devices are only ever operated through VPNs which I’ve set up and control.

I have one old desktop, very minimally configured and hardened; I devote this device to anything I think could be risky. On this device the BIOS can’t be changed, and I have images of the HD which I regularly restore. If in doubt, I boot it from a live CD.

All real work that I do at home is on an isolated computer system, IMHO the less I say about that system the better its security.

RobertT November 14, 2013 6:21 PM

“How about putting a HD dock in one system and supplying a HD in a tray, for family use. Put your HD in a second tray and store it in a safe place.”

Ahhh, NO, I don’t think that’ll cut it. I guess it all depends on how active and ongoing a target you personally are. I wouldn’t have thought I’m that interesting a target for persistent attacks, but experience definitely suggests otherwise.

Figureitout November 14, 2013 10:14 PM

–Sorry you feel my pain…my poor computers are zombies and they get treated very badly…they want to be put out of their misery. I want to build my own to do calculations (I found a model that I want to try, and I know you like layouts, so: pretty), but I don’t even know where to get trusted chips.

Not only are they stealing your bandwidth, CPU cycles, and power; they may also be stealing your data (duh) and, even worse, using your machine to commit atrocities and plant false evidence on your PC.

If you want to read Wired magazine, just go to a shop that sells it; there’s a “megastore” that kills all local business, so I feel no guilt going to read the magazine off the bookshelf in the store and placing it back when I’m done. I even took out all the “subscriber cards” and trashed them. Get a subscription to QST, since you showed some interest in radio; I would give them to you, but we’re internet strangers. There are some neat articles, but even they have quite a lot of ads…at the end. What’s messed up is that they’re trying to sell factory-made “perfect digital” radios, when what made amateur radio so great is that people literally built a radio from scratch.

Figureitout November 14, 2013 10:42 PM

–While I agree w/ some of your points, please don’t label us (or me at least) as “techno elites”. First off, everyone has weaknesses, so the only way to mitigate some of them is not to use them. There’s a difference between demanding and getting respect for yourself and thinking you’re better than someone. I’ve made friends w/ people all over the world; I don’t care where you’re from or what you look like, so long as you show basic respect and manners toward me.

I’m so f’in poor now, though, that I can’t really help all the homeless people and charity organizations panhandling for money. It really makes me mad that a company like Lowe’s hardware store will ask every customer whether they want to donate a dollar “for the children” when the company will take the credit for donating that money, while rich people don’t give their money to the people who are dying everywhere.

Nick P November 15, 2013 12:23 AM

@ RobertT

Interesting. All seems reasonable and wise. Quick question: how do you keep the BIOS from being altered? I remember older computers had jumpers and BIOS settings to that effect. However, with the proliferation of software/firmware control of hardware, I was curious what your opinion is about BIOS protection in modern chips (UEFI and non-UEFI).

Is there an easy way to maintain BIOS integrity that isn’t software-bypassable? The ROM primary BIOS + Flash secondary BIOS trick in Chromebooks is a nice concept that I’m a fan of. But if we’re talking about the kind of systems most people will be acquiring (that aren’t Chromebooks), what are the options for protecting the BIOS? Or an easy route to robust BIOS/firmware in a special-purpose, embedded device?

I figure you’ve had to solve this problem more than once in your line of work and might have interesting ideas, some of which can be made public. 😉

(Note: this is actually relevant to the blog thread as all the software protection in the world won’t help if they can attack the BIOS. The BIOS is a root of trust imho that, if strong, can be leveraged to great effect even without a TPM or secure coprocessor. They’re just too damned vulnerable on most systems so if you have a secure OS, NSA might have an attack below it. Most BIOS protection is also ad hoc, although there is an NIST guide on it now.)

Aspie November 15, 2013 3:43 AM

@Nick P – had to laugh at the “look away from the screen until they’re probably over” – I do that too; refuse to even give the sublims a chance.

The ASAP has a very sweet layout – for those of us who like circuits that can be soldered by humans if required. Reminiscent of the Superbrain and the UK-101. The thing about TTL logic is those chips are getting scarce and that drives up the cost of such projects. If speed is not an issue, some of those things can be replaced by microcontrollers masquerading as them with suitable code.

Funnily enough, I’m working on my own project in a similar vein. I’m building a sort of CPU running my variant of FORTH in dense bytecode atop a cluster of microcontrollers: SPI networking between the uCs and flexible task assignment. It’ll never break any speed records, but uCs are about $1 apiece and, flat out at 16 MIPS, use about 10mA each. They need virtually no support circuitry and can be proto-boarded easily onto the bus. They’ll all run the same FORTH primitive kernel, hand-cut in asm, and implement a bignum stack. I’m waiting for some important parts before putting the main board together and beginning unit testing. If I don’t let the magic smoke out and it all works, I’ll drop a detail link for interested parties.
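
The heart of such a design fits in a few lines. Below is a toy sketch of a Forth-style inner interpreter, not Aspie’s actual kernel: a data stack plus a dispatch loop over an invented three-or-four-primitive instruction set, just to show the shape of the thing.

```python
def run(code):
    """Toy Forth-style inner interpreter: a data stack and a
    dispatch loop over (opcode, operand) pairs. The primitive
    set here is illustrative, not any real kernel's."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        if op == "lit":        # push a literal           ( -- n )
            stack.append(arg)
        elif op == "add":      # integer add              ( a b -- a+b )
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":      # duplicate top of stack   ( a -- a a )
            stack.append(stack[-1])
        elif op == "jz":       # pop; jump to arg if zero ( f -- )
            if stack.pop() == 0:
                pc = arg
    return stack

# "2 3 + dup +" leaves 10 on the stack
program = [("lit", 2), ("lit", 3), ("add", None), ("dup", None), ("add", None)]
run(program)  # -> [10]
```

On a real uC the dispatch would be a jump table in assembly rather than an if-chain, but the stack discipline is the same.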

I did this (1) to investigate other forms of CPU that are more independently parallel, (2) for the fun of it, and (3) for a simple system that has no hidden variables. The last reason is (adjusts shiny hat) a result of hanging out here with you lot.

I agree with you that building things from scratch is what makes the whole project worthwhile, tremendously interesting and fun, and deeply educational. My knowledge of electronics is pretty poor, but it’s improving because of this and other mini projects. Modern devices are becoming the exclusive domain of fab plants, which, whilst it makes them cheap, teaches us DIY types nothing useful and makes them very black-boxy, which I for one find unsettling. Admittedly uCs are a bit black-boxy, but the older ones, I believe, can be trusted (Microchip’s 16F and 18F lines in this case).

CallMeLateForSupper November 15, 2013 11:35 AM

I just noticed a strange phenomenon when I loaded this:
Specifically, my browser shows a broken lock icon, indicating that the Secure HTTP session… is not.

I always pull up the main page first. I load the Comments of any thread in a new tab. The main page of this blog has always come up Secure, and I have never noticed that a Comments page failed to come up Secure.

Could be my browser, I guess; it does have some warts.

Aspie November 15, 2013 12:10 PM


I’ve noticed this recently too. Lynx through squid used to work – now it warns me that it can’t verify the CA and bombs unless I unset https_proxy.

Clive Robinson November 15, 2013 2:22 PM

@ Nick P,

    Is there an easy way to maintain BIOS integrity that isn’t software bypassable?

It depends on the chip… an old-style EPROM requires a high-voltage generator (12-15V) to put write pulses on the Vpp line (pin 1 on the 28-pin JEDEC pinout), without which it cannot be re-programmed. Most other xROMs require a write/ line to be taken low (pin 27 on the 28-pin JEDEC pinout); a small amount of PCB surgery with a scalpel or hot iron will often break the connection.

On my older boards I’ve taken the ROMs off, soldered in turned-pin sockets, and put the ROMs back with the appropriate pin disabled.

RobertT November 15, 2013 3:09 PM

@Nick P
“how do you keep the BIOS from being altered?”

Clive beat me to the answer. On older EPROMs and Flash there is always a VPP pin (usually taken to something like 12V); if you remove or cut this pin, the device can never be reprogrammed. The high voltage is needed to induce Fowler-Nordheim tunneling, the form of electron tunneling fundamental to the operation of all EPROM and EEPROM devices. Flash devices normally use a different mechanism, hot-hole or hot-electron injection, but this also requires a high voltage, usually greater than 6 volts. These days this voltage is normally generated on-chip with a circuit called a voltage multiplier. This consists of two or three external capacitors that are charged to the highest available voltage and then successively stacked to double, triple, or reach even higher multiples of the original voltage. These caps are always external (for any serious amount of non-volatile memory) because it takes a lot of power to program a Flash.

If you remove one side of a switching cap, or simply ground it, the multiplier circuit can never generate the required voltage, so the device can never be reprogrammed.
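
The arithmetic behind that multiplier is simple to sketch. Assuming an idealised Dickson-style pump with no load (my assumption; RobertT doesn’t name the exact topology), each stage stacks roughly one supply voltage, minus a per-stage diode/switch drop, on top of the input:

```python
def dickson_vout(vdd, stages, v_drop=0.6):
    """Ideal, unloaded Dickson charge-pump output voltage.
    Each stage adds (vdd - v_drop) on top of the supply;
    v_drop (assumed 0.6 V here) models the diode/switch loss."""
    return vdd + stages * (vdd - v_drop)

# From a 3.3 V supply, two stages already clear the ~6 V
# programming threshold mentioned above:
dickson_vout(3.3, 2)  # -> 8.7 (ideal volts, before any load sag)
```

Grounding a switching cap, as described, amounts to zeroing a stage’s contribution, so the output never rises much above the supply and programming becomes impossible.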

Figureitout November 15, 2013 6:26 PM

–Sounds awesome! I’m an interested party for a link sometime. I’ll look for you on hackaday. 🙂

Nick P November 16, 2013 12:01 AM

@ Clive, RobertT

Thanks for the tips. Now, I either need an old chip, a college EE major to help me rig a newer one, or something custom. Swell options. 😉

Clive Robinson November 16, 2013 5:16 AM

@ Figureitout,

Whilst you can get 74xx chips, including the 74181, the price you are going to pay is disproportionate to their functionality and speed.

For instance, you can replace the 74181’s functionality entirely with a RAM chip pre-loaded with a lookup table and get a considerably higher clock speed. Or replace it with a small (PIC) microprocessor or a programmable logic array like a 22V10.
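
The RAM-as-ALU trick is easy to demonstrate: precompute every (function-select, carry-in, A, B) combination once, and the “ALU” becomes a single memory read. A minimal sketch follows; the four functions are illustrative, not the 74181’s actual 32-operation table.

```python
def build_alu_rom(width=4):
    """Build a lookup table standing in for a 4-bit ALU slice.
    Address = select | carry-in | A | B; data = carry-out:result."""
    mask = (1 << width) - 1          # 0xF for a 4-bit slice
    out_mask = (mask << 1) | 1       # result bits plus carry-out bit
    rom = {}
    for sel in range(4):             # 2-bit function select
        for cin in range(2):
            for a in range(mask + 1):
                for b in range(mask + 1):
                    if sel == 0:
                        r = a + b + cin          # ADD with carry
                    elif sel == 1:
                        r = (a - b - cin) & out_mask  # SUB with borrow
                    elif sel == 2:
                        r = a & b                # AND
                    else:
                        r = a ^ b                # XOR
                    addr = (sel << (2 * width + 1)) | (cin << (2 * width)) \
                           | (a << width) | b
                    rom[addr] = r & out_mask
    return rom

rom = build_alu_rom()
# ADD: 7 + 9 with carry-in 0 -> 0b1_0000 (result 0, carry-out 1)
rom[(0 << 9) | (0 << 8) | (7 << 4) | 9]  # -> 16
```

A 4-bit slice needs only 2^10 entries of 5 bits each, which is why even a slow static RAM easily outruns a vintage 74181.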

You should, however, implement the microcode of a “standard” CPU for which you can get an emulator that runs under *nix or Windoze.

You could do as Charles Moore did and use a DSP chip to implement a Forth engine; you then need to microcode only 29 basic instructions, which makes it an X-RISC processor (i.e. eXtremely Reduced Instruction Set).

If you do decide on a “go it alone” design, the bit you need to optimise beyond all others is the adder carry, to give minimum delay. There are various “fast adders”, and surprisingly it is still an active research field, especially when considered as part of a multiplier.
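
As a behavioural sketch of the “fast adder” idea: a carry-lookahead adder computes per-bit generate and propagate signals, from which every carry can be formed in a flat, constant-depth AND/OR network instead of rippling bit by bit. The Python below verifies the recurrence (in hardware the loop unrolls into that flat network):

```python
def cla_4bit(a, b, cin=0):
    """4-bit carry-lookahead adder, behavioural model.
    g[i]: bit i generates a carry; p[i]: bit i propagates one.
    In hardware c[i+1] = g[i] | p[i]&g[i-1] | ... unrolls flat,
    so all carries appear after two gate delays."""
    g = [(a >> i) & (b >> i) & 1 for i in range(4)]
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(4)]
    c = [cin]
    for i in range(4):
        c.append(g[i] | (p[i] & c[i]))   # lookahead recurrence
    s = sum((p[i] ^ c[i]) << i for i in range(4))
    return s, c[4]                        # (sum, carry-out)

cla_4bit(9, 8)  # -> (1, 1): 9 + 8 = 17 = 0b1_0001
```

Chaining 4-bit lookahead blocks (as the 74182 did for the 74181) extends the same trick to wider words.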

And for some reason tutorial books never really talk about multiplier types… They are all integer multipliers, plain and simple, with two N-bit inputs and one 2N-bit output that needs to be mapped onto an N-bit bus. If the 2N LSB maps directly to the bus LSB, it’s a standard integer multiplier with the assumed “point” to the right of all LSBs and all numbers less than 2^N. If instead the 2N MSB maps directly to the MSB of the bus, the point is assumed to be to the left of all the MSBs, and therefore all numbers are less than 1. If the mapping is somewhere in between, as is seen in some DSPs, it’s a real pain. The choice of which way to map depends heavily on what you intend to do with the ALU: for mainly scientific calcs, put the radix point to the left of the MSB, as this makes floating-point calcs and normalisation easy; however, it makes crypto long-integer maths harder, which favours the radix point to the right of the LSB. You could of course use a MUX (or barrel shifter) to make the ALU do either, but that uses a lot of gates and, much worse, adds gate delays that slow your basic clock rate down. Which is why some ALUs use two N-bit registers and other logic to catch the full 2N-bit output and let the programmer decide which half they want to drag onto the N-bit bus or use for branching conditions.
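
The two mappings can be shown numerically. With hypothetical 8-bit operands, the full 16-bit product splits two ways depending on where the radix point is assumed to sit:

```python
N = 8  # operand width in bits (illustrative)

def full_product(a, b):
    """Full 2N-bit product of two N-bit operands."""
    return (a * b) & ((1 << (2 * N)) - 1)

def integer_half(p):
    """Radix point right of the LSB: keep the low N bits
    (plain integer multiply; high bits are overflow)."""
    return p & ((1 << N) - 1)

def fractional_half(p):
    """Radix point left of the MSB: operands are fractions in
    [0, 1), so the high N bits hold the significant result."""
    return p >> N

p = full_product(200, 100)   # 20000 = 0x4E20
integer_half(p)              # -> 0x20: the integer result overflowed
fractional_half(p)           # -> 0x4E: (200/256)*(100/256) ~= 78/256
```

The same product register serves both interpretations, which is exactly why exposing both halves to the programmer is the flexible (if bus-hungry) choice.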

Which brings up another issue: whether to use conditional branches/jumps or conditional skips-and-jumps. The norm is conditional short branches and long jumps; however, this makes the microcode more complex, increases the gate count, and slows both instruction decode and gate propagation time, lowering the clock rate and thus throughput. Conditional skips are very fast, use little microcode and few gates, and are easy to pipeline, thus increasing throughput. However, from the programmer’s perspective they use backward logic, which causes bugs born of faulty thinking (i.e. branch on less-than is not skip on greater-than, but skip on greater-than-or-equal…).
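
That backward-logic trap is worth a concrete check. A skip instruction skips the following jump when the branch condition is false, so “branch if less than” must be encoded as “skip if greater than or equal”, not “skip if greater than”. A hypothetical two-function model:

```python
def branch_if_less(a, b):
    """Direct conditional branch: taken when a < b."""
    return a < b

def skip_then_jump(a, b):
    """Skip-based encoding: skip the jump when NOT (a < b),
    i.e. skip on a >= b; the jump runs only if not skipped.
    (Encoding 'skip on a > b' instead would misbehave at a == b.)"""
    skip = a >= b
    return not skip

# The two encodings agree everywhere, including the a == b edge case
assert all(branch_if_less(a, b) == skip_then_jump(a, b)
           for a in range(8) for b in range(8))
```

The a == b case is precisely where the intuitive-but-wrong inversion bites.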

You also have to choose what extra instructions to include: the 6502 had BCD instructions, and others had bit set/clear instructions as well as branching on bit state. Some specialised computers used by the intelligence community had not just parity instructions but also voting instructions.

It’s up to you, but you need to have in mind what you want the CPU to do before putting pencil to paper.

Then there are architectural decisions involving not just pipelining but also how instructions are decoded. From my experience, the best way to go is a very minimal, highly optimised RISC core wrapped in a CISC outer layer that you can change at will. Further, make the core Harvard-based; von Neumann sharing of buses need only be done at the outer bus-interface layer. Whilst it appears Harvard would use twice the gates, it actually does not, due to the swings-and-roundabouts nature of instruction decoding. Done properly, Harvard can not only double the throughput at the same clock rate, it also reduces the in-line gate count, allowing a faster clock and thus increasing throughput further. Also, for some types of system (DSP / embedded) a full Harvard architecture offers a significant increase in security if used properly.
