Friday Squid Blogging: 30-Foot Giant Squid Washes Ashore

A 30-foot-long giant squid has washed ashore in Cantabria, Spain. It died at sea, with a broken tentacle.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on October 11, 2013 at 4:09 PM • 95 Comments


CallMeLateForSupper October 11, 2013 4:33 PM

A GPG appliance. Meh… What the heck is up with the “S.NSA”?!

“S.NSA first product – The Cardano
The Cardano is a custom made solid-state USB mass-storage device, similar in size and shape to a standard external hard drive. The unit comes equipped with a USB connector, a red toggle switch (enclosed under a safety flip-cover) and a bicolor indicator light.

“The Cardano allows you to sign and decrypt gpg messages while ensuring that your private key remains inaccessible to an attacker, even should that attacker have control of the machine Cardano is attached to.”

David Leppik October 11, 2013 4:58 PM

Bruce, either you are sending us a not-so-secret message, or you need to fix your first link.

Kent October 11, 2013 5:04 PM

Nah, first link is cool, but Stallman invented it first! Emacs spook mode:

South Africa War on Terrorism SRI Telex Zachawi unclassified 9705
Samford Road world domination counter terrorism bank asset munitions
clandestine New World Order S Key

Toni October 11, 2013 5:07 PM

Oh good heavens. Mircea Popescu? Check out the bitcoin world for this guy, and his ego – which precedes him.

This bit of vaporware looks like a Trezor, with some interesting changes. But given the source, I doubt anything will come of it.

Nick P October 11, 2013 7:48 PM

@ CallMeLateForSupper and Kent

I think you’re basically wanting a Hardware Security Module for digital signatures. Safenet’s been making those kinds of things for a while. I think they’re either the market leader for that kind of thing or one of the top sellers. Plenty of solutions and specs to look at.

Of course, they are proprietary and recent NSA leaks might make people not trust them. Even so, people looking to develop an open solution should look at the features, safeguards and interfaces of mature proprietary solutions for inspiration on how to go about making theirs.

Mike the goat October 11, 2013 7:49 PM

Brian M. » I know it was a few days back, but I ended up hacking up an implementation of that idea you had re encrypted junk in the Oct 9 blog – funny, I included a pastebin of my scary encrypted email with an “accidental” leak of suspicious content in the subject line.

Mike the goat October 11, 2013 8:02 PM

Callmelateforsupper/Toni: yeah no way I would ever trust a guy like Popescu. He has made some outrageous claims in the past. His whole idea of ‘secure key storage’ seems a little 1998. We have smartcards that can do just that – and gpg plays nice with most of them, esp the OpenPGP ones. There are also USB dongles with enough space to store several keys.

kashmarek October 11, 2013 9:09 PM

Big Brother Big Data inspires little brother big data (imitation is some form of flattery):

According to the report, the big tech outfits (Microsoft, Google, Amazon, Facebook et al) are going to use the data for targeted advertising. Really? Let’s go to the end game.

Given that they have all this data and swamp the internet with “targeted” ads, who benefits? Those paying for the ads, i.e. marketing the products, might benefit if anyone buys the products. Those buying the products won’t, because the prices will be outrageous in order to pay for all of the ad targeting (you know, the data collection on users, formulating ad streams, pushing the ads, etc.). Yet those same big tech outfits are out-sourcing everything to low-paying jobs, so where is the money to buy the products pushed by targeted ads? (The same is true for all other industries: paying less, reducing benefits, etc.) This seems to be an inward, downward spiral, and when it crashes, it will be a crash the likes of which will not have been seen before.

Oh, and that data… it will all end up with the NSA, to be used to influence, intimidate, and control the masses (never mind what the big tech firms say about how they are going to protect the data from each other; they won’t be able to protect it from the all-seeing eye).

All this data collection really leads to NOWHERE. People ignore the targeted ads the same as non-targeted ads (or block ads altogether; whoa, that means we might soon see laws outlawing ad blockers, which signals the failure of ad targeting). And, all the ads are just overload on the network (as well as on product cost), which should drive Big Brother crazy trying to sort it all out. Why, they will have to use ad filtering at the ISP level just to cut down on the data volume shipped to that big western data center, with its power problems. In fact, this extra data load should be just enough to break the internet, or most certainly, change the observed behavior.

TangledupinNSLsandgagorders October 11, 2013 11:31 PM

Do you plan to do any writing about targeted killing in the US by the FBI? For a few minutes, suspend disbelief… it’s going on overseas, and we are doing things to citizens here that we do to the enemy overseas – witness directed-energy weapons like Raytheon’s Silent Guardian…

What if specific ethnic/religious groups were targeted under the FBI domestic terror program and no one could go to the press? What if it went beyond just the folks who irritate the FBI? What if NSLs and gag orders prevented people from talking about it being used to destroy specific populations, as in genocide? How could there be a conversation about it that didn’t sound paranoid, when even attorneys and judges can’t talk about NSLs and it’s all under “domestic terror” or “need to know”?

How could you even warn people who are targeted for murder if it’s on an NSL as a targeted kill and no one can talk about it? What if the police/witnesses/coroners could not even talk about it (gag orders)? What if the police arrested perps who had just murdered people, but the perps had an NSL with the name of the victim(s), so the police couldn’t investigate it?
What if there were a database of targeted kills at the FBI and they were all Jews, and the NSLs prevented investigations, sealed autopsies, and cut off the press?
And even if you were aware and tried to escape, there is the no-fly list, FBI frame-ups, and being tracked if targeted, so you don’t get out through the airport. How could you warn people without sounding paranoid?

What if a previous FBI Director was a Neo Nazi and had an agenda to kill Jews and promoted agents who shared his views? Antisemitism is not something that has been outgrown or educated out of the population.

How could people be warned if the press can’t be told? If even Infragard can’t respond? What if even the NSA/DHS/DIA can’t stop it?

What if in one Resident Agent’s territory over 600 Jewish families were targeted for death and it would take place by home invasion to mimic crime and it would be done over time so as not to cause too much commotion? What if it were called “fruit of the vine” by a hate group inside the FBI called the “fellowship” and was meant to wipe out the children so the next generation would be destroyed?

Placebo October 12, 2013 4:10 AM

New BIOS malware found by Dragos Ruiu

  • Persistent BIOS malware (survives reflashing).
  • Seems to have a BIOS hypervisor, SDR functionality that bridges air gaps, wifi card removed.
  • This particular BIOS-persistent malware sample seems to use TLS-encrypted DHCP HostOptions as a command and control channel.
  • This sample was on a Dell Alienware, but we have verified infected Thinkpads and Sonys too. Potentially MacBooks, unverified.

Benoit October 12, 2013 6:54 AM

The press kit of my company, which is developing an end-to-end communication platform (and hopefully not based in the US.. 🙂 ), was released and distributed last week.

It’s quite strange to see that the media doesn’t seem to be really interested in possible solutions and alternatives to “traditional US security companies and products”.
There are many articles based on Snowden’s documents, but so few concerning alternatives (not especially mine!).

I did not expect, of course, a direct call from the Editorial Team of the NY Times, but at least some questions.. “Who are you?”, “Is your product really secure?”, “Why should you be more trusted than other companies?”

Strange times for crypto companies … !


(The presskit I’m talking about : )

Mike the goat October 12, 2013 7:08 AM

Benoit: yes, it seems that the media is touting the line “nothing is safe and we are powerless to stop surveillance”, and the public is largely accepting of the current state of affairs. I would have expected riots and protests over this, but that doesn’t seem to have occurred. Why? A decade of incursions on our rights in the name of “security” from an often imagined bogeyman has people cowed into believing that being molested at the airport is in the common civic good. (An airport in Houston has a sign in the screening lane saying “No jokes – offenders may be arrested”. Since when did making a joke – an understandable reaction to airport security theater – become an offense?) CCTV (with audio recording) everywhere, biometrics, forced fingerprinting for mundane daily activities like DMV permits in some states, etc. So I guess it is understandable that we security-oriented IT professionals are the only ones (along with civil rights groups) jumping up and down over the Snowden leaks. The random guy on the street views this all as reasonable to somehow “protect us from terrorism” and views Snowden as the traitor. Sheesh!

Benoit October 12, 2013 7:17 AM

@Mike the goat: I think you’re right; the general public is interested in saying “that’s really bad” but doesn’t really feel concerned about all this.
“Why would I really need to keep secrets, after all? Nobody’s interested in my life, and if the NSA wants to read my email, they’ll soon be bored!”

For us it’s more a question of principle: no one should be able to read my documents or access my computer, precisely because I’ve nothing to hide!

Soon we’ll be running the streets naked and screaming “everyone’s crazy but me !” 🙂

Mark Johnson October 12, 2013 7:36 AM

@Benoit – you have several problems. From both technical and marketing standpoints, here are some considerations.

It is a sure sign of marketing frustration when everyone starts to seem stupid because they’re not interested. Example: “People are really dumb and don’t know how much they need our product!”

Prospects don’t care about your credentials. If they’re bad it might hurt you, but even if your team consisted of ten PhDs, that won’t make anyone buy.

Just because a product is better that doesn’t mean people will buy it. There has to be a compelling reason for someone to change what they are doing. You can explain all you want and some may seem interested, but they won’t buy.

Your website is mostly just prose. Study good web designs; that may be more important than spewing more crypto stuff at visitors. People just will not read anymore. They will get through, maybe, the first paragraph. Then they’ll shrug and go “so what.”

There are other problems you need to understand. Right now everyone and their friends are cobbling together a new encryption thingie, they’ll stick it on the web, then two months later sit back and wonder why a million people didn’t buy it.

You have more problems than that though.

Mike the goat October 12, 2013 7:47 AM

Benoit: this is precisely the reason why I insist on having emails from even my remotely technically minded friends encrypted using PGP. I have an S/MIME cert for those who have Outlook/Exchange and can’t click a few buttons to install one of the two excellent free OpenPGP addons. As I was discussing with Brian M the other day on the forums, it is about two things: a) increasing their workload – if only evil people encrypt their email, then it makes it a lot easier on the NSA, as sending or receiving large quantities of it immediately gives them the intel that you might be a threat; b) we don’t write all our mail on postcards – so why should we send even unimportant stuff in the clear? I believe Zimmermann famously said just that in his original PGP docs. For my part I am now running this in my cron at 0901, 1210 and 1620, and my buddy is reciprocating at 1012, 1234 and 1805. We will wait and see if I get a knock at the door 🙂 If their intelligence gathering is done with fricking perl scripts then it might just fool them initially.
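A chaff generator of this sort fits in a few lines of Python. This is only a sketch (not Mike’s actual script; `make_chaff` and every name in it are made up for illustration): it dresses random bytes up in PGP armor, which is all a traffic analyzer ever sees. A real chaff script would more likely just pipe `/dev/urandom` through `gpg --encrypt`, and real armor also carries a CRC24 checksum line that this fake omits.

```python
import base64
import os
import textwrap

def make_chaff(nbytes=1024):
    """Random bytes dressed up as a PGP armored message.

    To an eavesdropper it looks like ciphertext; it "decrypts" to
    nothing because it IS nothing. (Simplification: real OpenPGP
    armor ends with a CRC24 checksum line, omitted here.)
    """
    body = base64.b64encode(os.urandom(nbytes)).decode("ascii")
    lines = textwrap.wrap(body, 64)  # armor lines are <= 64 chars
    return "\n".join(
        ["-----BEGIN PGP MESSAGE-----", "Version: GnuPG v1", ""]
        + lines
        + ["-----END PGP MESSAGE-----"]
    )
```

A cron entry could then mail `make_chaff()` to the reciprocating buddy at the agreed-upon times.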

Benoit October 12, 2013 8:27 AM

@Mark Johnson: I’m hopefully (!) not frustrated yet, just surprised. We’re still in the beta-test phase, and as you’ve said, we have to work on the web design, communication and more! The technical part is not really the most complex one, but it is definitely the most deterministic!

@Mike: Some kind of reverse steganography? 😉

doctor October 12, 2013 9:47 AM

  • “In what condition is the patient, doctor?”
  • “He’s dead, but he is in good condition, except one arm had been broken.”


Mike the goat October 12, 2013 11:35 AM

Off topic, but it is the squid article: the people who run the PGP key servers really need to do a purge – perhaps of keys older than 10 years. I am not the only one here who has a key that was lost, along with the revocation cert needed to remove it, way back in the late 90s, and the damn thing is still up there. Worse still are others who have like 50 keys because someone has maliciously generated them in their name and spammed the servers.

Perhaps it is time they introduced some kind of basic validation, like sending an email to the address in the public key’s metadata with a unique ID to confirm you actually have control of the account. Those who choose to push a key with an invalid email, or nothing in the field, could still be supported, but give preference to those that are validated.

Perhaps also only cache keys for 2 years. At the 2-year mark, send an email to the user explaining they need to reupload their key, give them a 3-month grace period and then kill it. Not only would this stop old keys clogging up the works, it would also ensure the keyserver gets updated keys that have e.g. extra trust signatures on them.

Identity Theft Victim October 12, 2013 11:58 AM

Mike, an implementation of your idea would help people like me greatly. Around 3/4 of the keys with my identity on the servers are clever fakes claiming to be me.

Mike the goat October 12, 2013 12:51 PM

Identity theft victim: If the demand is there and I can justify the bandwidth expense I would not be averse to something along the lines of the idea below: the site would potentially…

→ provide a standard MIT style keyserver so all your PGP compliant software can use it as usual along with a friendly web interface.
→ when a new key is injected into the keyserver (either using PGP/GPG send-key or by pasting it into our site’s interface), the e-mail address on the key is sanity-checked and, if valid, an e-mail is sent to the individual explaining that a key has been submitted and giving instructions on how to validate it. This will most likely involve them replying to the message, fully quoted, and signing the reply; practically, the server will just check the signature for validity and inspect the message content for the unique ID sent in the original. We will probably not have a simple “reply to validate” or “click this link to validate”, as neither of those actions confirms that the user at said email actually has possession of the secret key. If people play games with the server by sending a whole heap of keys with different emails, effectively causing us to send a heap of verify emails, then we may need to rate-limit submissions per IP, or require the user to first visit our website, fill out a captcha and be whitelisted before running their software to send the key (for those who don’t want to cut and paste their key into the web interface)
→ Those who successfully complete the validation process will have their keys signed with a project key to indicate they’ve been robot-verified. The key will then be available for download via the keyserver interface or via our web interface. Unverified or unverifiable keys will still be grabbable by keyID, but in the web interface search you’ll clearly see them marked as not verified as owning the email they purport to own. This allows people who don’t want to share their email publicly to still use the server in a limited fashion.
→ if any user signs another user’s key then the trust relationship will be shown on the website. We will even grab those keys from the MIT servers if we don’t already have them but we will mark them as “legacy/unverified” and colorize them differently. You will be able to see a beautiful graph showing trust webs.
→ We will endeavor to notify verified users when anything related to their key changes – examples include another user signing their key and uploading it, a key expiration date approaching or passing, a valid revocation certificate being posted against it.
→ Every 2 years a user will receive, via email, a request to reupload or “freshen” their key. If they do not do this within a further 6 months, the key will lose its verified status, will get the legacy tag (as it is retained solely to maintain trust metadata in the graphs people see) and will also get an explanatory tag, shown in the web interface, noting that the key wasn’t resubmitted.
→ additionally, if any of the emails we happen to send (if someone signs your key, etc.) bounce on more than two consecutive attempts over at least two weeks, your key will lose verified status. This is to ensure that if an email address goes dark, the key quickly dies with it.
→ the site could also foster relationships between notaries (well networked users who enjoy signing other people’s keys) and people who wish to expand their web of trust through a forum and also allow others to schedule key signing parties and list them on the site. With the permission of other sites already devoted to this purpose we could even share info of their events too.
→ I’ll put this in here because someone will ask. We don’t want to become a CA of sorts so we will not review people’s ID and sign their keys with a group key like, say CACert does. We will only have one group key and being signed by it only means we have verified the email address at some point in time. This will be clearly delineated in the comments of that key.
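The challenge/response step at the heart of the list above can be modeled in a few lines. This is a sketch only: `KeyserverVerifier` and its method names are invented, and a real implementation would shell out to gpg to check the signature on the quoted reply rather than take a caller-supplied callback.

```python
import secrets

class KeyserverVerifier:
    """Toy model of the email round-trip described above.

    submit() records a pending key and returns the challenge token that
    would be mailed to the address in the key's UID. confirm() checks
    that the reply quotes the token AND that the reply's signature
    verifies against the submitted key (sig_ok stands in for gpg).
    """
    def __init__(self):
        self.pending = {}   # email -> (token, pubkey)
        self.verified = {}  # email -> pubkey

    def submit(self, email, pubkey):
        token = secrets.token_hex(16)  # the unique ID from the proposal
        self.pending[email] = (token, pubkey)
        return token  # in reality this is emailed, never handed back

    def confirm(self, email, reply_body, sig_ok):
        if email not in self.pending:
            return False
        token, pubkey = self.pending[email]
        if token in reply_body and sig_ok(reply_body, pubkey):
            self.verified[email] = pubkey  # eligible for the robot signature
            del self.pending[email]
            return True
        return False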

Brian M. October 12, 2013 3:30 PM

@Mike the goat:
I know it was a few days back but I ended up hacking up an implementation of that idea you had

HAHAHAHAHAHAHAHA! Man, I gotta do that with my accounts I use for baiting 419ers.

@Mark Johnson:
Right now everyone and their friends are cobbling together a new encryption thingie, they’ll stick it on the web,

Me, too! I’m working on an idea someone posted on Ars Technica: what if the files are stored on other people’s cell phones? It’s going to be open source (I already have a spot on GitHub) for Android and iOS, and the protocol will be open. I don’t want it to be abused like Bittorrent is by freeloaders, so it has a trust mechanism in it. Since cell phone IPs change anywhere from every six to 120 minutes and may go silent for hours or days, there are some complications that things like DHT don’t handle well. Fun stuff!

Nick P October 12, 2013 4:16 PM

@ Brian M

Interesting idea. Not sure of practicality but probably a fun project. 😉 Have you considered building on the code or design of one of the many decentralized, secure filesystems out there? Or a group tech like Secure Spread? Projects like those might already have solutions to some problems you’ll run into.

Jacob October 12, 2013 7:08 PM

Bruce mentioned that he uses TrueCrypt although he has some reservations about it, but compared to the other two big-name commercial candidates, TrueCrypt may be the lesser evil.
A few years ago I looked at TrueCrypt and at another open-source competitor – FreeOTFE by Sarah Dean. I gravitated toward TrueCrypt due to the sleeker GUI and its popularity, but my interest was academic – I have never used TrueCrypt, since I did not trust the program, and I kept my few private files private by individually encrypting them.

Now, I think that I will give FreeOTFE a second chance. Although the developer “closed the shop” earlier this year, and the last version is from 2010, it now feels much more secure – the dev process was very transparent at the time, and now the NSA can’t find anyone at home…

For those interested, the files are available at and some tech details at the wayback machine at

Nick P October 12, 2013 7:30 PM

@ Jacob

You might find this interesting. Remember that FreeOTFE’s open or transparent development != reviewed thoroughly by outsiders. It’s just potential. Further, the fact that it’s a dead project means it won’t get bugfixes.

That TrueCrypt has been heavily audited, and that developers continue to update and bugfix it, makes it superior to FreeOTFE. Just build it from source and use the release referenced in a review paper if you’re paranoid.

And remember that the vast majority of attacks on encryption schemes, esp. by feds/TLAs, are attacks on the system that are then used to bypass strong crypto. Protecting the machine that uses the crypto is where you should put most of your effort.

Jacob October 12, 2013 8:08 PM

@Nick P
My opinion re TrueCrypt was given here:

The only question that matters (and I trust both TrueCrypt and FreeOTFE to have good implementations of encryption in their programs – both have been distributed since the mid-2000s) is whether there is a backdoor in either of them.
I challenge you on your statement “That TrueCrypt has been heavily audited …” and your point #4:
I’ve looked extensively for any security review done on TC, and except one that raised a serious issue about a possible backdoor in the Windows distribution (I can provide a link), there were none that I found.

Also, I disagree with your point #6 – if the NSA places a high value on a TC backdoor, they would not compromise it for drug or child pornography cases. There are known cases where soldiers were left to die in combat in order to protect high-value intel sources.

Nick P October 12, 2013 11:15 PM

@ Jacob

“My opinion re TrueCrypt was given here”

I read it. It’s a sensible view.

“I’ve looked extensively for any security review done on TC, and except one that raised a serious issue about a possible backdoor in the Windows distribution (I can provide a link), there were none that I found.”

You must be talking about the one by the Ubuntu Privacy Remix Team [1]. You left out how they concluded it was a secure program with no visible source-level backdoors, and that the only risk was the official binary from the TrueCrypt team. Their words: “TrueCrypt 7.0a is a highly secure program for encrypting containers based on the current state of the art in cryptography. We found no back door or security-related mistake in the published source code except for our attack on keyfiles [my edit: which has no effect if the password is strong].”

And the other FOSS crypto projects have plenty of bugs and vulnerabilities in their track records. Who needs an intentional backdoor when FOSS developers keep accidentally including their own that nobody catches for sometimes years at a time? 😉

Your last point we totally agree on:

“Consequently, authors recommend using only the linux version compiled by the user.”

[1] “Security Analysis of TrueCrypt 7.0a with an Attack on the Keyfile Algorithm”, Ubuntu Privacy Remix Team, 2011

Jonathan Wilson October 12, 2013 11:28 PM

Regarding the airport boarding pass incident, last time I flew here in Australia (last xmas) I never had to show my ID to anyone.

Although if someone had asked, I would have been able to show photo ID no problems.

As for the idea of a crypto device: I had an idea for a USB-connected crypto device with a reasonably fast CPU and a suitable source of randomness. It would be open source (the software running on the CPU) and open hardware (schematics, BOM, everything you need to build your own).
The hardware and software on the device would be built such that it is impossible to alter the software or the device without the stored keys being erased.

The device would expose ONLY the following commands to the outside world:
1. Generate keys. Causes the device to throw away its stored keys and generate new RSA key pairs for encryption and signing.
2. Get public encryption key. Causes the device to provide the public half of its stored RSA encryption key.
3. Get public signing key. Causes the device to provide the public half of its stored RSA signing key.
4. Decrypt data. Causes the device to decrypt a block of input data using the stored RSA private encryption key.
5. Sign data. Causes the device to sign a block of input data using the stored RSA private signing key.

Basically, it would generate and store RSA encryption and signing keys on the device in a way that makes it impossible (or as close to impossible as current technology allows) to get the private keys out of the device. If it’s built properly, it would then be impossible for anyone (including law enforcement) to get you to hand over the keys, as even the owner of the key can’t get at it.

Could go further and have a secure unlock code required to unlock the device (i.e. when you first set up the device and generate crypto keys, you feed it a high-security unlock code; if you change the code or feed it an incorrect code, the stored crypto keys are erased). Such an “unlock code” setup would render the device useless to any thief who is able to obtain access to it (and who wouldn’t have the unlock code) and to any hacker or malware on your box who wanted to decrypt things (since the device would only be plugged in and unlocked when the user wants to decrypt things, and then locked as soon as the decryption is complete).
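The unlock-code behavior described above is essentially a small state machine. Here is a toy model (my invention, not a real device driver): HMAC over random 32-byte secrets stands in for the real RSA sign/decrypt operations, since the point being illustrated is the locked/unlocked/wiped state handling, not the cryptography. The five real commands would map onto this same shape.

```python
import hashlib
import hmac
import os

class CryptoToken:
    """Sketch of the proposed device: keys never leave, a wrong
    unlock code wipes them, and sign/decrypt only work while unlocked.
    (HMAC keys stand in for the on-device RSA key pairs.)"""

    def __init__(self, unlock_code):
        self._code = hashlib.sha256(unlock_code.encode()).digest()
        self._unlocked = False
        self.generate_keys()

    def generate_keys(self):  # command 1: discard old keys, make new ones
        self._sign_key = os.urandom(32)
        self._enc_key = os.urandom(32)

    def unlock(self, code):
        given = hashlib.sha256(code.encode()).digest()
        if hmac.compare_digest(given, self._code):
            self._unlocked = True
            return True
        self._sign_key = self._enc_key = None  # wrong code: wipe keys
        return False

    def lock(self):  # re-lock as soon as the operation is done
        self._unlocked = False

    def sign(self, data):  # command 5 (commands 2-4 would look similar)
        if not self._unlocked or self._sign_key is None:
            raise PermissionError("device locked or wiped")
        return hmac.new(self._sign_key, data, hashlib.sha256).digest()
```

Note the deliberate asymmetry: a wiped device can still be unlocked with the right code, but there is nothing left to sign with until `generate_keys` is called again.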

It (by virtue of the unlock code) would hopefully be able to satisfy the 5th amendment test against being required to hand over crypto keys.

Also, a modified version of the device with more grunt and no unlock code could be used along with SSL to make it physically impossible for the operator of a web site using the tech to comply with a government request to hand over SSL private keys (not without handing over the entire device for the hardware geeks to pull to bits, which would then render the website inoperable).

Brian M. October 13, 2013 12:28 AM

@Nick P:
Interesting idea. Not sure of practicality but probably a fun project. 😉 Have you considered building on the code or design of one of the many decentralized, secure filesystems out there?

Yeah, the project is mainly for my personal interest.

I’ve been looking around. I looked at the Freenet Project and others, but the cell phone network presents some unique hurdles.

The project actually is for storing email in a distributed fashion, with the files jumping from phone to phone based on cellular automata (think Game of Life). After I get the base system down, I’m going to put an IMAP interface on top of it.
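A hop rule in that spirit could look something like Conway’s B3/S23 transplanted onto the peer graph. This is purely illustrative: `step`, the thresholds, and the graph representation are all my invention, not Brian’s actual protocol.

```python
def step(holders, neighbors):
    """One Game-of-Life-style round for a single file's replicas.

    holders:   set of phone IDs currently storing the file
    neighbors: dict mapping phone ID -> set of reachable peer IDs
    A replica survives if 2-3 of its neighbors also hold the file,
    and the file spreads to any phone with exactly 3 holding
    neighbors (Conway's B3/S23; the thresholds are illustrative).
    """
    nxt = set()
    for phone, peers in neighbors.items():
        n = len(peers & holders)
        if phone in holders and n in (2, 3):
            nxt.add(phone)   # survives on this phone
        elif phone not in holders and n == 3:
            nxt.add(phone)   # "born": the phone fetches a copy
    return nxt
```

The appeal of a rule like this is that replication decisions are purely local, which matters when phones drop off the network for hours at a time.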

Anon October 13, 2013 1:35 AM


Interesting idea, but my guess is that a court could order a company to stop using your device. If you’re thinking of Lavabit, the big picture seems to be that cloud providers have a legal obligation to design their systems in such a way that they can comply with lawful intercept requests.

HardKeyboard October 13, 2013 2:52 AM

@Jonathan Wilson: “you feed it a high-security unlock code”

This unlock code should only be entered on a physical keyboard on the crypto device. See previous comments on this blog.

Numeric keys, or better, the 10 most used English letters.

Mike the goat October 13, 2013 3:08 AM

Anon: indeed, back when I was involved in an ISP we were forced to implement LI at our own cost after one of our customers raised the ire of law enforcement. Supposedly we were meant to already have LI infrastructure, but we didn’t even know it was a requirement.

Mike the goat October 13, 2013 3:14 AM

Brian M: yeah, I am just waiting for the feds to go smash down my door and demand to know where I am keeping my AK47 and terrorist training manuals.

Jacob October 13, 2013 3:53 AM

@Nick P

Yes, I was referring to that document – and it was the only one I found that studied TC in any depth.
From their wording it appears to me that their concern re a possible backdoor also applies to the Windows source material, although they immediately qualify that and say they did not see an indication of an actual backdoor in the code. But the unknown encrypted string in the header is there, and after reading about the cooking of the coefficients in the DUAL_EC_DRBG standard (open source specs…), I am concerned.

The full relevant section is this:

” the Windows version of TrueCrypt 7.0a deviates from the Linux version in that it fills the last 65024 bytes of the header with random values whereas the Linux version fills this with encrypted zero bytes. From the point of view of a security analysis the behavior of the Windows version is problematic. By an analysis of the decrypted header data it can’t be distinguished whether these are indeed random values or a second encryption of the master and XTS key with a back door password. From the analysis of the source code we could preclude that this is a back door. For the readability of the source code this duplication of code which does the same thing in slightly different ways was however a great impediment. It certainly must also hamper the maintainability of the code.

As it can’t be ruled out that the published Windows executable of TrueCrypt 7.0a is compiled from a different source code than the code published in “TrueCrypt 7.0a” we however can’t preclude that the binary Windows package uses the header bytes after the key for a back door. The Linux version does not have that problem with these bytes as their decryption to zero proves that they don’t hide a duplicate key.

In principle such a duplicate key could also be hidden within the salt value. The 64 salt bytes would be enough to store the master key and the XTS key for an encryption with a single cipher. This could be encrypted with a fixed key known to the vendor of the binary package or possibly to someone who payed the vendor for the back door. Such a back door would be possible also with the binary packages for Linux. If a combination of two or three ciphers has been selected for the container the 64 bytes of salt do not suffice to store the key there but then the salt bytes of the backup header could be used in addition”.

So the only trustworthy approach is to compile the Linux source yourself. However, as a Windows user, I think I have more trust in a much less popular program (read: less interest in implementing a backdoor there) such as FreeOTFE, at least on 32-bit Windows (since the drivers are not signed, one needs to jump through hoops to get it to run on a 64-bit Win machine). And for the truly private files stored in the FreeOTFE container, I encrypt the relevant file separately with a different algorithm.

Side note: my take on concerns about a possible implementation bug in FreeOTFE is that none were published until now (using standard googling), and since it is a dead project, nobody would now be interested in studying it from a research pov. And me not being a big-name terrorist, I doubt that anyone would spend months analyzing the program to get to my files. I think it boils down to a level of trust – I trust Sarah Dean from the way she was open with her project, and I don’t trust the TrueCrypt developers, since they are anonymous.

Jacob October 13, 2013 4:10 AM

Oh, and I just could not resist (an excerpt from Wikipedia – take it as a jab at TrueCrypt being “scrutinized” albeit with much weaker scrutiny and conclusion):

“After the 2013 backdoor revelation, RSA Security Chief of Technology Sam Curry emailed the website Ars Technica a rationale for originally choosing the flawed Dual EC DRBG standard as default over the alternative random number generators.[16] The technical accuracy of the statement was widely criticized by cryptographers, including Johns Hopkins University professor Matthew Green[17] and University of Pennsylvania professor Matt Blaze. An example of an easily refutable claim in Sam Curry’s statement is Curry’s claim that “Dual_EC_DRBG was an accepted and publicly scrutinized standard”; as Matthew Green points out, it is true that Dual_EC_DRBG was strongly publicly scrutinized, but the scrutiny had shown that “no sensible cryptographer would go near the thing”.”

Mike the goat October 13, 2013 5:30 AM

another off-topic thought – my Android cell has long been a source of frustration for me and for others who enjoy having a moderately secure environ. It was with sadness, however, that I learned that Opera Software had discontinued its Presto-based browser and was pushing a new one based on Chromium. Fortunately they have put “classic Opera” back on the mobile store, and presumably that means they will at least keep patches rolling through, which is about as much as we can ask for given they are definitively moving towards Chromium-based builds.

I don’t think I am the only user who feels that the speed of the Presto engine, along with its very modest resource utilization, is worth the odd page not rendering as it is supposed to. Perhaps I am biased in that regard, having been used to text-only browsers and navigating the mess that many sites become.

Anyway, aside from all that – and the fact that they’ve effectively rolled out nothing more than a rebadged Google Chrome that doesn’t differentiate itself in the market – the Chromium-based build lacks an about:config equivalent, which means that you can’t even do something as simple as use a proxy.

The older Presto-based Opera has an extensive config menu – you can enable/disable JS and Java, change your user agent, disable certain formats you dislike, etc.

The cool thing is that you can enable SOCKS (to, say, connect to a locally running Tor client), disable JavaScript, and set the user agent to desktop Opera in a few quick steps. You can’t do this easily with any other Android browser, and the ones specifically designed for privacy lack basic features like tabbed browsing. You can also disable caching in the settings – I don’t have mine disabled, just set permanently to a tmpfs volume so it disappears on reboot (along with my cookie jar etc.).

Why the hell is it that vendors push “upgrades” that lack features that were standard ten damn years ago?

And why do I have to run a proprietary browser just to have a reasonably complete feature set?

That said – I take all the usual precautions with my mobile devices – Google services disabled, including market services and the Google Play app (I can enable them when I need them, but I do not like them pushing changes without consent); stock browser deleted and replaced with Opera (stupid having two browsers for no reason); all apps have their settings locked down using 4.3’s AppOps (e.g. browser has camera permission disabled, coarse and fine location disabled, etc.); unused app APKs removed, including all the Google bloatware; dm-crypt /data encryption enabled (virtual SD FUSE-mounted so its actual content is within the /data partition and thus also encrypted), with the password changed to something different from the screen-unlock password; policy set to power off the device after three consecutive incorrect screen-unlock passwords; ADB and USB disabled; Obama alert system apps disabled (you know, the crazy presidential alert system – obviously even without the app you’ll still get the texts, but at least it isn’t interfacing with an app that can read all text messages and also has internet permissions).

However, I have come to the disgusting realization that mobile security can be hacked together to be slightly ‘better’ but will never be even remotely secure, due to fundamental architectural problems with Android that just can’t be fixed. Not to mention the mysterious binary blobs and the baseband firmware that could do gawd knows what (activate E911, send location, turn on the mic, who knows?), and the fact that we are willingly carrying around a device that allows precise location whenever TPTB desire – and even if they don’t, the data is logged nonetheless.

end rant.

Jacob October 13, 2013 7:45 AM

@Mike the goat

How do you reconcile the fact that you do not trust all those Google services and apps running on your cellphone, while your base OS comes from a company whose president publicly stated that “when people go online, they should not expect any privacy” – i.e. Google?

Mike the goat October 13, 2013 7:55 AM

Jacob: I can’t reconcile it! Which is why I never, ever use the phone for anything remotely confidential. Sure – I run a Tor client on it and push my browser through it, but that’s just for a bit of added privacy. I wouldn’t dare use it for anything that is remotely sensitive. I am, of course, not using the handset build – I am running CyanogenMod 10.2 compiled from source.

Asterix October 13, 2013 8:52 AM

World’s Largest Advertising Company Prevails After Being Sued For Bypassing Browsers’ Cookie-Blocking Settings:

Google Prevails in Legal Dispute Over Browser Tracking

A legal dispute over Google’s practice of tracking users to create targeted advertisements ended Wednesday as a federal judge ruled in the company’s favor.

A class action lawsuit, titled Google Inc. Cookie Placement Consumer Privacy Litigation, was brought by web browser users who alleged that Google avoided browser security settings, using cookies to track usage on computers and mobile devices. The plaintiffs alleged that the company wrongfully maneuvered its way through browser security. They further claimed that this tracking information informed Google’s use of targeted ads.

The lawsuit, which also named online advertisers Vibrant Media and Media Innovation Group, was thrown out by a federal judge in Delaware on Wednesday. Judge Sue Robinson acknowledged the fact that the companies in question avoided browser security, tracking the users, but said the plaintiffs did not prove they suffered damage from this action.

Of course, in the USA you can track others to your heart’s content. The NSA does it, paparazzi do it, and with this mentality it probably doesn’t look like anything is wrong if Google does it too.

Last I heard, though, cookie tracking without permission was not legal in the EU. Which could be why, here in Amerika, I ended up with a cookie in my Opera browser (which was set to block cookies) from when I visited some time ago…

kashmarek October 13, 2013 10:15 AM

Dataland: The Emerging Dystopia

Could Snowden have been stopped in 2009?

The above represent a dichotomy of sorts: one talks about the failure to use data to discriminate against an individual based on that person’s actions, while the other references using data indiscriminately against everybody.

Petréa Mitchell October 13, 2013 11:47 AM

NSA humor. For all I know, it may count as squid-related too.

(Note for readers outside the US: there is an actual burger chain called Five Guys Burgers and Fries.)

Clive Robinson October 13, 2013 12:53 PM

@ Petrea,

Speaking of jokes about intelligence, or the lack thereof…

Many years ago there was this joke about the London School of Economics:

Why do the lecturers always go around in threes?
One who can read, one who can write, and the third to look after the two intellectuals.

It was, by the way, the LSE school of journalism that taught many of their students to “hack” phone voicemail and email in order to get stories – and, importantly, how to do what is now called “parallel construction” so they would not get caught. However, Rupert Murdoch decided that this protective measure took too long, and thus the News of the World was closed, and senior staff at the Sun have had their jobs terminated and their collars felt by Inspector “one eye” plod, who famously investigated with the wrong eye and thus, as Nelson did, “saw no shits” at the NoTW/Sun/Mirror before sailing into the distance weighed down with several bungs and wishes of happy retirement.

Nick P October 13, 2013 2:55 PM

@ Jacob

“However, as a Windows user (me), I think I have more trust in a much less popular (read: less interest in backdoor implantation) program such as FreeOTFE, at least on 32-bit Windows (since the drivers are not signed, one needs to jump through hoops to get it to run on a 64-bit Windows machine). And for the truly private files stored in the FreeOTFE container, I encrypt the relevant file separately with a different algorithm.”

Makes sense.

“Side note: my take on concerns about a possible implementation bug in FreeOTFE is that none have been published so far (per standard googling), and since it is a dead project nobody would now be interested in studying it from a research POV. And me not being a big-name terrorist, I doubt that anyone would spend months analyzing the program to get to my files. I think it boils down to a level of trust – I trust Sarah Dean from the way she was open with her project, and I don’t trust the TrueCrypt developers since they are anonymous.”

I see. The Sarah thing is immaterial unless you’ve vetted her thoroughly: she could be working for someone. The benefit I do glean from your post is obfuscation. Anything that gets really popular will get plenty of scrutiny from TLAs, whether others audit it or not. Using a less popular product to avoid that might be the best selling point for the TrueCrypt opposition.

(Of course, one can just modify TrueCrypt to do the encryption slightly differently and defeat all existing tools. Maybe I did it, maybe not. wink)

re the excerpt you gave (and final thoughts)

Funny. I enjoyed that. The TrueCrypt system certainly needs peer review by expert cryptographers, platform specialists, coders, driver writers, covert-channel specialists, and security engineers to catch any possible subversions or subtle vulnerabilities in their respective areas. Its competitors need the same thing, though, albeit some have a head start in a few areas. 😉

I’ve reviewed few pieces of software that could prove (convincingly) that they’re not subverted. The only disk encryption product I know of that is designed that way today is a partly open, partly commercial European effort. All the rest are guaranteed to have vulnerabilities due to lifecycle process choices. I’ve written on this blog about what a development process needs in order to make subversion easy to detect, and the vast majority of FOSS coders wouldn’t use it.

TrueCrypt has worrisome traits. There’s possible subversion in there for sure. People reading our posts need to know that (and the Linux web machine has other benefits, no doubt). They should also know that none of the competitors can prove they aren’t subverted either. Hell, OpenOffice had an entire game hidden in it that most users didn’t know about. A crypto subversion in an open source project that’s had little to no expert review is incredibly easy to pull off, even made to look like an accident. And every truly accidental vulnerability is a subversion in practice. So the security of both TrueCrypt and its competitors against TLAs is comparable.

(Albeit the popularity of TrueCrypt means the obfuscation benefit exists when you avoid it, as you pointed out. The ideal obfuscation would use the TrueCrypt container format and file name while totally not being TrueCrypt.)

Bottom line: I’m using it for defence against the vast majority of attackers, with OPSEC modified to account for the possibility that a TLA backdoored it. That’s what I have to do with my desktop, laptop, phone, and servers too, if they’re COTS. Nothing new, sadly.

I’d love to see a sponsored project, done in the public eye, to reimplement (not just audit) TrueCrypt using a subversion-resistant development process. The build machine and process would also be standardized so the executable images should always be the same if source was the same. The compilation and code hosting would be done by mutually distrusting parties. This might win everyone over, yeah? It would also cost a few hundred thousand dollars at the least, maybe closer to the high millions.

I’d say the US should fund it with tax dollars, but everyone on the blog knows why that’s no longer a good idea. 😉 So maybe a libertarian, anti-government, rich fat cat might cough up the dough. If I get the contract, I’ll even throw in some custom hardened computers as a bonus for him or her.

@ Andy Fletcher

Skype being subverted was about as close to open and shut as conspiracy theories get. I mean, the design alone allows them to have everything if they choose. Then I pointed out in the past that the professional review of their crypto and the reverse-engineered version presented at a conference had very different designs, the latter with an easy backdoor. And its servers are in the US, owned by US companies cooperative with the Feds.

Pretty much everything paranoids speculate about TrueCrypt has been provably true about Skype for quite a while now. 😉

I proposed a design alternative here. The best bet for lay users is to combine a FOSS VOIP client with ZRTP, all compiled from source from public repositories. There are guides for that online too.

Truecrypt audit October 13, 2013 4:05 PM

For TrueCrypt there exists a kind of certification/audit by the French ANSSI (Agence nationale de la sécurité des systèmes d’information). It is relatively old (2008), but I don’t see it mentioned often:

For French speakers there is a list of possible vulnerabilities on page 15:

The ANSSI looks comparable to the German BSI, so a state organisation. The details of the certification are quite complicated as is usual with these schemes.

Jacob October 13, 2013 6:11 PM

@Truecrypt audit

Interesting. Thanks.
Some comments and notable excerpts from the report (Google-translated):

  1. They spent 1 man-month on testing and preparing the report. Commendable.
  2. “Certification does not in itself constitute an endorsement by …, and does not guarantee that the product certified is totally free of exploitable vulnerabilities”.
  3. They also don’t like the key-file key generation process, but for different reasons than given by the Ubuntu Privacy Remix Team. The latter seems to be much more thorough and technical in its analysis.

Overall, ANSSI says that TC does perform as required by its checklist, and the flaws found can be used for an attack if the adversary has access to the encrypting system. For me, this is a minor point, since if an attacker has access to my system, it is his machine. My only concern is a possible security compromise (either by design flaws or, more interestingly, by a backdoor) of encrypted files in an off-system storage location or on a machine in an off state, and for me that is still an open question.

Figureitout October 13, 2013 6:42 PM

–Sounds like he’s having a fun time trying to figure it out. To me, it feels a bit like cryptanalysis, which I hate; too irritating and maddening, always a side channel. I can’t trust any output from my PC, and I don’t want to connect it to others and infect them too. Part of me wants to burn it in thermite; another part wants to save it for possible later analysis. Maybe even this infected garbage hardware will be better than the hardware forced on us all in the future, as has been evidenced by history and by pretty much everyone recommending a “secure system”…

name.withheld.for.obvious.reasons October 13, 2013 7:47 PM

For me it is simple; before Snowden, plausible deniability…after Snowden, potential liability.

Several years ago, whilst working at a government installation, an event occurred that required the attention of management and TLAs, though no TLA was contacted. A question existed about the activity on a system in the plant; my response to management was that we could not establish the physical presence of the person responsible for the activity. It could be someone else using their login, it could be a programmatic source (internal or external), or any number of other plausible explanations. I bring this up as a preface to the Silk Road case – you’ll see that my reasoning should provide any attorney defending this case a useful tool…

  1. Actually tying a person to activity on networks is extremely difficult; indisputable, incontrovertible evidence is needed in a criminal case.

  2. Reputation and history: the TLAs have employed illegal methods to surveil, monitor, and track individuals or groups on the internet. They have gone as far as subverting whole networks and service providers. What is the jury to believe is the possible explanation for the chain of events prior to the court proceeding?

  3. Witness perjury: the director of the lead agency lied to Congress, as do his subordinates – there is a culture of lying in the community (the TLA community, that is). How can a case brought before a court by these entities have any shred of integrity?

  4. These agencies’ policies are in direct contravention of the law; how is it that a TLA can break the law and still bring a case before the court? I guess you are going to have Ted Bundy bring a case concerning the securitized-loan fraudsters on Wall Street next…

NobodySpecial October 13, 2013 8:13 PM

@Asterix – if you aren’t paying for the service, you are the service.
Don’t want cookie tracking by Google? Don’t use Google.
The EU cookie law is essentially pointless. It gives you a popup asking you to allow cookies; if you don’t, you can’t use the site. It’s a commercial site; they can choose their customers.

Government sites generally don’t have logins, so they don’t use cookies much anyway.

Figureitout October 13, 2013 9:04 PM

D-link router backdoor

In other words, if your browser’s user agent string is “xmlset_roodkcableoj28840ybtide” (no quotes), you can access the web interface without any authentication and view/change the device settings (a DI-524UP is shown, as I don’t have a DIR-100 and the DI-524UP uses the same firmware)

–{roodkcableoj28840ybtide} is funnily enough {editby04882joelbackdoor} backwards.
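The backwards-string claim is easy to verify; a quick illustrative sketch (it only reverses the string – it doesn’t touch any router):

```python
# The magic user-agent string from the D-Link firmware.
ua = "xmlset_roodkcableoj28840ybtide"

# Reversing the whole string exposes the signature plus the
# reversed "xmlset_" prefix.
print(ua[::-1])                    # editby04882joelbackdoor_teslmx

# Dropping the "xmlset_" prefix first gives the clean signature.
print(ua[len("xmlset_"):][::-1])   # editby04882joelbackdoor
```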

SignedPkg October 14, 2013 2:25 AM

Nick P: “The build machine and process would also be standardized so the executable images should always be the same if source was the same.”

Tor claims to have achieved that standardization:

But I think that this standardization should be done at the distribution level, not at the application level.

See my ideas about that in

@Mike the goat: “largely (80%) successful recovery of keystrokes using an iPhone accelerometer.”

Idea already reported by Bruce himself, here:

Mike the goat October 14, 2013 3:23 AM

SignedPkg: I must have missed that. I assumed it was indeed possible but figured the sampling rate of a cellphone sensor wouldn’t be sufficient. Guess these sensors have become pretty good…

Btw, a guy posted something interesting in Bruce’s blog entry on Silk Road… I think it is credible. We have known that the Tor people made Tor not change ‘guard’ (entry) nodes frequently, as rotating them increases the risk you’ll stumble upon one that is a honeypot. It seems likely that someone running a hidden service will eventually chance upon them anyway.

Mike the goat October 14, 2013 5:49 AM

The answer: the guy made multiple posts, so I just decided to link to the entire page. But you’re right – had I linked to the first of the relevant posts, it would have saved people some scrolling.

Mike the goat October 14, 2013 5:53 AM

The answer: sorry, you’re the ‘guy’ – really overtired today, my apologies. I searched for ‘’ out of curiosity as to what they were referencing. What has been seen cannot be unseen!

Curious October 14, 2013 6:26 AM

For public key crypto, when multiplying two prime numbers, is one of the numbers always 65537?


I just watched a video on YouTube and was left with the impression that one of the two prime numbers would always be the smaller 65537 while the other one would be huge.

Mike the goat October 14, 2013 7:12 AM

Curious: I believe you’re a bit confused. The RSA algorithm starts with finding two primes (p, q). These are the same bitlength. We then sum them together to get n. The length of n in bits is the keylength you see expressed in encryption software. We then find the value of φ(n). We then select e so that e:gcd(e,φ(n))=1 (at this point the RSA implementation selects a value for e which is <65537 for efficiency purposes) and this becomes the public key exponent.

So e is generally <65537 – not p or q (which are generally primes of the same bit length)

Mike the goat October 14, 2013 7:15 AM

Damn blog chopped off at the less than sign. Let me try again.
RSA algorithm starts with finding two primes (p, q). These are the same bitlength. We then sum them together to get n. The length of n in bits is the keylength you see expressed in encryption software. We then find the value of φ(n). We then select e so that e:gcd(e,φ(n))=1 (at this point the RSA implementation selects a value for e which is less than 65537 for efficiency purposes) and this becomes the public key exponent.

So e is generally less than 65537 – not p or q (which are primes of the same bitlength)

Nick P October 14, 2013 9:42 AM

@ SignedPkg

“Tor claims to have achieved that standardization”

That’s quite awesome. It confirms a volunteer-type project can pull it off. Now if they would just use type-safe code for a program that is meant to stop TLAs hunting 0-days…

“But I think that this standardization should be done at the distribution level, not at the application level.”

Interesting idea. This is all part of the larger issue of software configuration management (SCM) security, often neglected by devs. Wheeler has the definitive overview and good links on it here:

Certifying compilers will also be useful to prove object code = source code. There are basically three right now: VLISP, TILT for Standard ML, and CompCert for C. OCaml’s compiler stages are clean enough that it was used for a DO-178B code generator with some manual work. Wirth’s platforms like Oberon are simple enough that something similar could be done with them.

Curious October 14, 2013 10:00 AM

@Mike the goat

I see now that the guy who mentioned the number 65537 was in fact talking about it being the exponent in the calculation of the public key. My bad. (I know next to nothing about cryptography, so I ought not bother anyone with my blatant lack of understanding, but I couldn’t help myself.)

Why is ‘e’ generally less than 65537, as you say? Would ‘e’ perhaps sometimes be exactly 65537? Or perhaps that number is supposed to not be kept the same?

The reason I mentioned all of this was partly my imagination running off a bit after vaguely reading about bit flipping somewhere. And I just thought it was a little odd if crypto somehow depended on a number like 10000000000000001, because if you “flipped” the first bit then I imagined one would end up with some very simple number all of a sudden. 😛 Since I cannot even pose a sensible question about any of this so that it makes sense, I’ll just refrain from making more comments here today. 🙂
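For what it’s worth, that specific worry is easy to check concretely: 10000000000000001 in binary is exactly 65537 (2^16 + 1), and flipping its leading bit does leave a very simple number – but since e is a fixed public convention rather than a secret, nothing rests on it staying hidden. A quick sanity check:

```python
e = 65537
assert e == 2**16 + 1
print(bin(e))              # 0b10000000000000001

# "Flipping" the leading bit does collapse it to 1...
assert e ^ (1 << 16) == 1
# ...but e is public anyway; only p, q and d must stay secret.
```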

SignedPkg October 14, 2013 10:56 AM

@Nick P: “This is all part of the larger issue of Software Configuration Management security”

Your link about SCM is about protecting developments of an application from evil: upstream protection.
Tor (and Bitcoin) use Gitian for downstream protection, preventing evil from interfering with the users of an application.

Gitian uses libfaketime to ensure that the compilation process of an application does not depend on who compiled it.

Application users are then able to check among themselves that they were able to independently compile the same binary, communicating with Gitian or without it.
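The check those users perform amounts to comparing digests of independently produced binaries; a minimal sketch (the “build outputs” here are stand-in bytes, not real binaries):

```python
import hashlib

def publish_digest(build_output: bytes) -> str:
    """SHA-256 hex digest a builder would publish for others to compare."""
    return hashlib.sha256(build_output).hexdigest()

# With a deterministic build (timestamps pinned via libfaketime,
# fixed toolchain, normalized paths), independent builders should
# produce byte-identical output and therefore identical digests.
builder_a_output = b"byte-identical build output"
builder_b_output = b"byte-identical build output"
assert publish_digest(builder_a_output) == publish_digest(builder_b_output)
```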

SignedPkg October 14, 2013 10:58 AM

And Gentoo Linux should use Gitian (or a variant) to compile applications.

I wonder if a Gentoo dev is reading here.

Mike the goat October 14, 2013 11:02 AM

Curious: I am not a cryptographer – my knowledge of RSA is limited to writing an implementation in Perl back when crypto export was forbidden and everyone was publishing their implementations in their signatures and on t-shirts (they did the same thing with DeCSS a few years later too).

Looking back to my earlier post, you see how we find φ(n) – in other words φ(n)=(p-1)(q-1). Let’s pretend we had p=3, q=11. Our φ(n)=2·10=20. Now our requirement for e is that gcd(e, φ(n))=1 – in other words, coprime. So we could use 11 or 13 or 19 etc… So we choose one at random that’s below 65,537.

So let’s say we pick 11. We now have the public key: n=33,e=11.

Obviously we would be working with much bigger numbers for p and q. Oh, and a correction to my previous post: the third sentence should read multiply, not sum.
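With the correction applied, the whole toy example fits in a few lines. (Note: e=7 is used here instead of the 11 picked above, since 11 happens to be a degenerate choice for n=33 – 11 ≡ 1 mod lcm(p−1, q−1)=10, so m^11 ≡ m mod 33 and “encryption” would change nothing.)

```python
from math import gcd

p, q = 3, 11                # toy primes from the example above
n = p * q                   # modulus: 33 (multiply, not sum)
phi = (p - 1) * (q - 1)     # φ(n) = 2 * 10 = 20

e = 7                       # public exponent; must be coprime to φ(n)
assert gcd(e, phi) == 1

d = pow(e, -1, phi)         # private exponent (Python 3.8+): 3, as 7*3 ≡ 1 (mod 20)

m = 2                       # toy "message"
c = pow(m, e, n)            # encrypt: 2**7 mod 33 = 29
assert pow(c, d, n) == m    # decrypting recovers the message
```

Real implementations use primes of 1024+ bits each and almost universally fix e = 65537, the value Curious asked about.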

Nick P October 14, 2013 4:25 PM

@ SignedPkg

“Your link about SCM is about protecting developments of an application from evil: upstream protection.
Tor (and Bitcoin) use Gitian for downstream protection, preventing evil from interfering with the users of an application.”

It actually encompasses that. SCM systems are also called repos and build systems. They can take the code in, protect it, build it, test it, prep it for distribution, and provide it to the user, depending on what you want. The old Orange Book requirements for high-security systems, for instance, required the entire system to be generated on site from source, with the tools vetted by an independent reviewer. This was just one part of A1 SCM. A more modern example is DO-178B. Part of it is source-to-object-code mapping: they have to demonstrate that the compilation process happened without altering the software in an unjustified way.

The reason upstream and downstream protection seem to be different things is that the popular SCM tools ignore this. OpenCM and Aegis addressed many security requirements but didn’t get much uptake from the FOSS community. Most high-assurance projects seem to combine custom extensions to existing SCMs with manual procedures. Subversion and the overly complicated Git have much more traction than high-integrity SCM tools. A few even died off because of this, it seems. Very unfortunate.

“Gitian uses libfaketime to ensure that the compilation process of an application does not depend on who compiled it.

Application users are then able to check among themselves that they were able to independently compile the same binary, communicating with Gitian or without it.”

I appreciate the reference to the software they’re using. Looks to me like a great tool; I might use it in the future. If they’ve solved this part of the problem already, perhaps more FOSS projects should just start with a good SCM tool and Gitian. And if Hardened Gentoo used it, that would make that kind of system even more robust in the face of subversion threats. I almost want to bootstrap Gitian on Hardened Gentoo just for the heck of it. 😉

GregW October 14, 2013 5:14 PM

Some of you know about the databases of blood spots taken from heel pricks from pretty much every hospital-born baby in the US. (This happens in other countries and the EU but I am less aware of the details.)

The stated purpose of this is to screen for ~50 known childhood disorders, but the actual blood spots are kept for a number of years that varies by state, often 18 years and sometimes even indefinitely. Potentially rescreening can be done at a later date? I forget the details. Anyway, I always wondered why so few people were aware of or concerned about the DNA-sequencing ramifications of this mass storage of blood spots.

Anyway, now buried at the bottom of the following NYTimes article is an explicit confirmation that the National Institutes of Health has awarded several research grants for DNA sequencing of newborn blood.

You know that once the sequence is stored in a computer, it’s not going to get deleted. A blood spot on a card in a filing system somewhere is “paper paper, never data”, to appropriate Clive’s phrase, but this puts the data into the online, never-deleted realm. The uber-database containing every future citizen’s DNA shouldn’t be far behind, with the older of us perhaps “grandfathered out”, so to speak (until blood-based drug tests used for employment screening get merged in, no doubt).

As with the NSA, I would prefer that the NIH and other agencies not retain this data indefinitely for “research purposes” in hopes that it “might be useful”… the government needs to be able to say how often research was carried out and how useful it was and will be in order to justify acquisition and retention of such personal information.

Amit October 14, 2013 10:09 PM

This is old news here by now…

U.S. spy agency collects millions of email address lists – report

…but this part was sort of interesting:

The U.S. National Security Agency collects hundreds of millions of contact lists from personal email and instant messaging accounts around the world, including many from Americans, The Washington Post reported on Monday.

The data collection takes place outside the United States, but sweeps in the contacts of many Americans, the report said, citing two senior U.S. intelligence officials.

A spokesman for the Office of the Director of National Intelligence, which oversees the NSA, said the agency is focused on discovering and developing intelligence about foreign intelligence targets. “We are not interested in personal information about ordinary Americans,” he told the Post.

Presumably “ordinary Americans” is part of the NSA wordplay, so if a US citizen is suspected of wrongdoing then that citizen is no longer an “ordinary American”.

SignedPkg October 15, 2013 12:44 AM

@Nick P: “I almost want to bootstrap Gitian on Hardened Gentoo just for the heck of it. ;)”

To someone that might start to distribute Gitian (or a variant) on Hardened Gentoo (even in alpha state): please remember to post the URL here.

GregW October 15, 2013 8:24 AM

Followup to my earlier post. 23AndMe is starting to offer genetic sequencing tests for $99.

As the Fast Company headline says, “It can also tell you what might kill you.” Although in this case they’re talking about long-term disease, not short-term fatal security vulnerabilities.

As 23andMe scales, its business model will shift. Right now it gets most of its revenue from the $99 that people like me pay in return for test-tube kits and the results we get back after we send off our spit-filled tubes. “The long game here is not to make money selling kits, although the kits are essential to get the base level data,” says Patrick Chung, a 23andMe board member and partner at the venture-capital firm NEA. “Once you have the data, [the company] does actually become the Google of personalized health care.” Genetic data on a massive scale is likely to be an extremely valuable commodity to pharmaceutical companies, hospitals, and even governments. This is where the real growth potential is.

On September 4, the NIH announced that it had issued a $6 million grant to fund the first-ever randomized trial to “explore the risks and benefits” of whole genome sequencing. The volunteer group? Four hundred eighty Boston newborns. The five-year study, known as the BabySeq Project, “will accelerate the use of genomics in clinical pediatric medicine by creating and safely testing novel methods for integrating sequence into the care of newborns,” says Dr. Robert Green, a medical geneticist and genomics researcher at Harvard Medical School who heads up the study.

What I–and all the parents in the BabySeq Project and all of 23andMe’s customers–also have to wrestle with is whether offering up DNA has compromised our children’s and our own rights to privacy. 23andMe’s privacy statement clearly states that it collects a person’s genetic, registration, web browsing, and self-reported information. The company can share its data with third parties “[after] it has been stripped of Registration Information and combined with data from a number of other users sufficient to minimize the possibility of exposing individual-level information while still providing scientific evidence.” Minimize the possibility does not equal a legal-bound guarantee.

“Why should 23andMe have my health information so they can sell it?” asks genetic counselor Hercher.

“Nothing’s private. It’s your genetic sequence. It’s literally the best identifier that we have!” I ask her if she finds the fact that I gave my daughter’s genetic information to 23andMe unethical. “Does it bother me that you, a loving mother of a 5-year-old kid whom you have no history on did this? No,” she says. “It doesn’t trouble me at all. Does it bother me globally that when we do direct-to-consumer testing via these Internet things, we have privacy issues and confidentiality issues that can’t be controlled? That is a problem.”

Page disagrees. “I view this as a tidal wave of inevitable data and a trend in the marketplace,” he says. “The technology is available; the price point is decreasing.”

ACruz October 15, 2013 9:08 AM

23AndMe is starting to offer genetic sequencing tests for $99.

That could be beneficial but considering their “relationship” to Google I would probably look for some other company.

GregW October 15, 2013 11:13 AM

I think with the above two articles combined with this one, I could make a nice movie plot threat, all based on news articles from the past week!

Combine universal infant blood spots/heel pricks, infant DNA sequencing brought into an online, semi-secure national database “for research purposes” (history suggests that once stored digitally, neither scientists at national labs making nuclear weapons nor the NSA can keep data secure ultimately), and finally, biological virus manufacturing based on DNA sequencing:

Craig Venter reclines in his chair, puts his feet up on his desk and … shares his vision of the household appliance of the future. It is a box attached to a computer that would receive DNA sequences over the internet to synthesise proteins, viruses and even living cells.

It could, for example, fill a prescription for insulin, provide flu vaccine during a pandemic or even produce phage viruses…

“We call it a Digital Biological Converter. And we have the prototype,” says Venter.

“Life is a DNA software system,” says Venter. All living things are solely reducible to DNA and the cellular apparatus it uses to run on. The DNA software both creates and directs the more visible “hardware” of life such as proteins and cells.

With that question settled, says Venter, it’s clear that if you give an organism new software by rewriting its genome, you have rewritten the software and life itself. He dismisses his scientific critics who say there is more to remaking life than creating DNA molecules as guilty of a kind of modern day vitalism, the pre-scientific notion that an intangible something sets life apart from other things made from atoms and molecules.

The current prototype can produce only DNA, not proteins or living cells, but even that could be enough to make the device practical. Some vaccines are made using just DNA molecules, points out Venter. “If there is a pandemic, everyone around you is dying and you cannot go outdoors, you can download the vaccine in a couple of seconds from the internet,” he says. That digital file would allow DBCs in homes, hospitals and companies to “just spit out a loaded syringe”. His researchers believe their current prototype is already capable of producing DNA precisely enough that it could be used as a vaccine.

Venter also sees a DNA-printing version of his device helping with more regular medical care. It could print out the DNA that encodes the hormone insulin so important to diabetics he says. Adding that DNA to a protein synthesis kit, a tool that is commonplace in research labs around the world, would produce the finished treatment for injection.

To Venter, this is cool. To a security-conscious software guy, the software metaphor is not exactly reassuring! It may still be cool in some sense, but not quite with the same unmitigated glory!

GregW October 15, 2013 11:44 AM


I agree with the spirit of your remarks.

I’d quibble that it’s not the connection with Google per se that bothers me, its the common-to-many-firms “big data”/cloud business model, with revenue streams from both the sale to me of the data, and the sale of the data to others.

As was made explicit in that article, “the long game here is not to make money selling kits”… you are the product, and your info is being sold to others.

And the kicker is that the safeguards to prevent abuse to make us feel better about the prospect– redaction/de-identification and computer security– both have remarkably shaky foundations. If there is money to be made unredacting the redacted data, a third party will make a business doing so.
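GregW’s point about de-identification is easy to demonstrate with a toy example. The sketch below is entirely invented (the names, records, and fields are hypothetical, not 23andMe’s actual schema); it shows the classic quasi-identifier linkage attack, where a “de-identified” dataset is joined against a public record, such as a voter roll, on ZIP code, birth year, and sex:

```python
# Toy linkage attack: a "de-identified" dataset still carries
# quasi-identifiers that can be joined against a public record.
# All names and records below are invented for illustration.

deidentified = [  # registration info stripped, genotype retained
    {"zip": "02138", "birth_year": 1978, "sex": "F", "genotype": "rs53576-GG"},
    {"zip": "02139", "birth_year": 1990, "sex": "M", "genotype": "rs1815739-CT"},
]

public_record = [  # e.g. a voter roll, legally public in many states
    {"name": "Alice Example", "zip": "02138", "birth_year": 1978, "sex": "F"},
    {"name": "Bob Example", "zip": "02139", "birth_year": 1990, "sex": "M"},
]

def reidentify(deid_rows, public_rows):
    """Join on quasi-identifiers; a unique match defeats the redaction."""
    hits = []
    for row in deid_rows:
        matches = [p for p in public_rows
                   if (p["zip"], p["birth_year"], p["sex"]) ==
                      (row["zip"], row["birth_year"], row["sex"])]
        if len(matches) == 1:  # unique combination -> re-identified
            hits.append((matches[0]["name"], row["genotype"]))
    return hits

print(reidentify(deidentified, public_record))
```

With these toy rows, both “anonymous” genotypes come back attached to names. The point is not the two records but the join: the more outside datasets exist, the more combinations of innocuous fields become unique.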

name.withheld.for.obvious.reasons October 16, 2013 4:07 AM

11 Sept 2013, the Future of Yahoo! and Facebook

Marissa Mayer, the CEO of Yahoo!, when asked about the NSA requests and systems (aka Prism), said–kind of paraphrasing here…”we think it is better to work within the system…” and “we are taking charge.” That’s kind of like saying “Yeah, I bent over for it–but I didn’t like it. I guess I showed them!” Actually, they did push back–a little. They lost an appellate case–if they wanted to be patriotic, the case should have been brought before the Supreme Court.

This speaks to Bruce’s point about having the major players push back–yeah, right! I guess I can order that princess cup and tiara–I’m with the emperor.

Facebook CEO, suck a face–uh–Zuckerberg, claims he cares about the mission…what…mission impossible? Zuckerberg says they want to “build a knowledge economy”, is that after or during the build-out of the surveillance economy?

name.withheld.for.obvious.reasons October 16, 2013 7:19 AM

The single largest threat to U.S. interests–the government.

The economists and even the GAO/CBO have it wrong…maybe the CRS can get it right…

Okay, I threw out a few TLA’s (not LEA’s): the Government Accountability Office, the Congressional Budget Office, and the Congressional Research Service all produce various reports on the economy; private sector activity and federal/state spending and accounts. My thinking is that the broadest sweep of the economic factors and conditions could be insightful…

The United States of America has the following economic issues…

  1. Trade deficits (three decades with China, Korea, Japan, Saudi Arabia)
  2. Energy imbalance, consumption exceeds supply (currently), rest of world pays premium (oil dollar)
  3. Structural failures; financing (vis-à-vis savings), governmental, and occupational
  4. Government deficits, spending, and management; procurement and acquisition of services exceed private sector by an order of magnitude, and reforming budget processes alone could produce significant savings (budget planning, versus mission, versus capital requirements and costs). Savings levels across every segment are problematic–it makes the cost of capital a concern. Job markets are competitive, but when automation makes leisure time possible, then leisure time is the new job…
  5. Capital markets and flows; finance and banks centralize the management of financial services–loans, finance agreements, capital management, etc.…this homogeneous environment is both brittle and stringent…central planning means central control…meaning the bank’s first obligation is the maintenance of centrality.
  6. Institutional malaise; the long-term disconnect with our democratic republic has consequences–we get the government we deserve…
  7. Education, knowledge, and wisdom are in short supply in the United States of America…so are honesty and integrity. When parties in a dispute lie to each other, their audience is watching; no actor is compelled to bring honesty to the conversation–it’s a modified form of “he said, she said!” crap.
  8. An intelligentsia that is all but absent–“pundits and experts” extol the virtue of their positions–not their ideas.

And I almost forgot–we’ve pissed off just about every other nation in the world. Congratulations U.S of A.

Nick P October 16, 2013 12:08 PM

@ name.withheld

“The United States of America has the following economic issues…”

Pretty good analysis. Seeing the problem is the easy part, though. Solutions in each of these areas that allow change without utterly destroying the country and the lives of the people in it… well, I don’t hear so many of these. That’s where people wanting change need to focus their efforts on. The people’s first question on any issue will be “what’s our alternative?” And we have to have something ready to go to give them.

name.withheld.for.obvious.reasons October 16, 2013 12:37 PM

@ Nick P

Okay, I believe (figuratively) that solutions are obvious. Seems to me that a forest-for-the-trees issue exists in our (meaning U.S.) federal system. I’m sorry I can’t give you a response that is more thoughtful, but until the critters in congress (I am so embarrassed) can pull their cranium out of their arse I don’t see the point. What I am emphasizing is the willingness of our asses to even thoughtfully recognize reality–but I’m afraid the people that created much of the problem are incapable of solving it. I am afraid that wishful thinking will not be useful at this point. The level of abject failure, I personally apologize for our actions, is so spectacular that even the simplistic observation gives cause for concern. Where the fuck are any sane, rational, deliberative individuals that can stand in front of this tidal wave of shit?

Nick P October 16, 2013 1:45 PM

@ name.withheld

“Where the fuck are any sane, rational, deliberative individuals that can stand in front of this tidal wave of shit?”

That’s been about my feeling lately. It’s why my comments seem more defeatist than they used to. Seeing the influence the majority has over the situation & what they’ve done with it, it’s hard to see how a rational minority can get much done.

name.withheld.for.obvious.reasons October 17, 2013 6:57 AM

The Federation of American Scientists posted an article about travel difficulties for scientists coming to the United States.

Adi Shamir, of RSA fame, has been prevented from attending the History of Cryptology conference (he had submitted a paper that was accepted). It appears that scientists worldwide are having extensive problems with visa applications. He appealed to the NSA regarding his predicament and received the following response:

“The trouble you are having is regrettable…Sorry you won’t be able to come to our conference. We have submitted our program and did not include you on it.”

By the way, those of you that were concerned for my friend in Argentina: I have confirmation of contact–I’m relieved. Thank you for your concern; I may not know for a while what happened. Seems our communicating is problematic. During our last phone call my friend mentioned that the computers went bonkers after receiving my email.

name.withheld.for.obvious.reasons October 17, 2013 7:25 AM

An article from Truth Out regarding the defense authorization act of 2014. I hadn’t realized that there was a change in the language regarding surveillance guidance. It doesn’t appear that the DoD is getting it…it adds a new agency to process this crap? Where the hell does the DoD get off–we are spending over one trillion dollars yearly for defense-related programs.

name.withheld.for.obvious.reasons October 18, 2013 4:33 AM

Wanted to start on two political issues; they are close to home for me and give me cause for concern. The first is the death of Aaron Swartz and what he was doing at the time (working on a digital drop for whistle-blowers/journalists) that is used by the NY Times. I believe he was self-motivated, though he probably never suggested what purpose it might serve. The JSTOR issue probably produced a surprise–something was exposed. One question: who was the reporter that covered the story and assisted Aaron with the NY Times project?

Now I can guess at the intersection of Aaron’s death and that of Barnaby Jack, but I don’t want to speculate–need some conclusive information first. Adding to the drama was the demise of Michael Hastings in Los Angeles (all under 35 years of age); in one way or another they have some connection–Wikileaks. Concerning is the cavalier treatment of talented systems technologists of social conscience who were active to some degree–they were advocates in one way or another. And just as the latest Snowden tape proffers, the truth-teller being prosecuted while criminal behavior on the part of the prosecutor goes unaddressed is not an anomaly.

The issue for the masters: they may have their hands on the wheel–but they aren’t mechanics, and if their engine(s) fail they are essentially rendered harmless. Anonymous was engaged in this type of activity but suffered the fate of time and was in need of a “developed” strategic objective. And the masters’ puppets were more than happy to launch a direct attack against not just Anonymous but a whole circle of like-minded and effective individuals. Look to the prisons (nearly 100 internationally) and the cemeteries (handfuls, including journalists) for some proof…but Aaron…

Even if the masters (masters of male cow fecal matter as far as I am concerned) attempt to align in direct opposition to “enlightened self-interest,” they disregard the risk/benefit given the numeracy (100,000 knowledgeable and motivated persons makes for a formidable adversary) of potential responders. And, I can imagine they are capable of employing the Tsu reflector. I am not suggesting that anyone start an Anonymous-like effort; that could prove to be an unhealthy activity. But what is understood is that there are “mechanics” that could be persuaded to fix the master’s vehicle. What the master doesn’t know, and what the master does know, can be useful–I can think of a concept, “Knowfare”, that describes a strategic warfare thesis conducted overtly and supported by knowledge-based architectures. It’s just a thesis?

Buck October 19, 2013 1:12 AM

Very Coherent & On Point Thesis! The fact that it’s prefaced with your past/present (recent) pontifications is most positively a plus!

Please consider the most plausible possibility that such pseudo anonymous posses (possi?) be the products of unintelligence gone down the Pooper (pardon my pun)…

Also curious about convenient drip drip drip leaks consistent with convenience in conviction of ‘convicts’ – just as said “covert” methods of collection come to be accepted as incorrigible all thanks to our technically illiterate courts. (Thanks for that grandfolks! 😉

Can’t blame them though; the generational gap we spoke of earlier continues to ring true. Prolly should’ve mentioned it prior, but it seems to mirror the parent-teen relationship…

“Don’t trust ya, gotta know whatchya upto.”

“F. O. ol’ M, u dun dig us!”

For now you may threaten your parents with the prospect of death (or a home), but know that revolution skips a generation… You have not suffered a global assault. You do not wish to. You would not like to see your parents slaughtered; you would love to see your children thrive!

Please try & sympathize darlin’$!


Sidebar photo of Bruce Schneier by Joe MacInnis.