Comments

Anura October 24, 2014 6:06 PM

I route all my traffic through my home VPN… Hopefully ISPs don’t start doing this, as well, or I’ll have to find another option (probably get a virtual server somewhere). We really, really need end-to-end encryption of the internet. Of course, the goal these days seems to be to lock you out of the devices as well, especially with phones.

Howard October 24, 2014 6:33 PM

Funny story about what happens when TSA screeners come across a solid gold Nobel prize in someone’s luggage.

http://www.loweringthebar.net/2014/10/nobel-prize-flight.html

So they opened it up and they said, ‘What’s it made out of?’

I said, ‘gold.’ [What’s this GOLD MEDAL made out of? Is that what you just asked me?]

And they’re like, ‘Uhhhh. Who gave this to you?’

‘The King of Sweden.’ [See where it says “NOBEL” on it? Does that sound at all familiar?]

‘Why did he give this to you?’

‘Because I helped discover the expansion rate of the universe was accelerating.’

At which point, they were beginning to lose their sense of humor. I explained to them it was a Nobel Prize, and their main question was, ‘Why were you in Fargo?’

sena kavote October 24, 2014 7:22 PM

Cenk Uygur and Sam Harris debate / interview

This 3-hour event touches on topics much discussed in these squid posts. You may want to hear it from start to finish (possibly while doing something else).

At 1:35:40, Cenk Uygur and Sam Harris talk about profiling:

https://www.youtube.com/watch?v=WVl3BJoEoAU

Funny thought is that some aspects of managing airport security checks are a philosophical topic. It’s hands-on philosophy…

A similarly funny thought is that choosing the location of a virtual private server, or even of an update-download FTP mirror for home-use Linux or BSD, can be geopolitics in a personal form.

Jacob October 24, 2014 7:54 PM

To Bruce:

This month is the anniversary of the TrueCrypt auditing initiative. Since you are a member of the technical advisory board, I wonder if you could update us on the status of Phase II?

Mark October 24, 2014 8:51 PM

What happens when the security industry makes anonymity “the product”? Forget Tor, think Facebook.

@Daniel: May I suggest using the word ‘privacy’ in place of ‘security’, then swap facebook and tor?

Vincent October 24, 2014 8:52 PM

Two phone companies that subsidize phone service for poor consumers collect sensitive information, including Social Security numbers and driver’s licenses, from up to 300,000 clients to crack down on fraudulent claims. But instead of deleting this information after using it to check consumers’ eligibility, they accidentally post it online. Here’s the full story.

mathg October 24, 2014 9:57 PM

I’m looking for the solutions to the exercises in the book I bought, Security Engineering.

Thanks

Yetanotherrandomusername October 25, 2014 12:53 AM

Sophos: Do we really need strong passwords?
Good article about a Microsoft Research paper. For a password to survive an online attack, it needs to survive a million guesses. But for an offline attack, it needs to survive 100 trillion guesses!
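Those two thresholds translate directly into password-length requirements. A quick back-of-the-envelope check (the guess counts are the figures quoted above; everything else is simple arithmetic):

```python
import math

# Guess-survival targets quoted above (attributed to the Microsoft Research
# paper; treat the exact figures as approximations).
online_guesses = 10**6     # ~1 million guesses for an online attack
offline_guesses = 10**14   # ~100 trillion guesses for an offline attack

online_bits = math.log2(online_guesses)    # ~19.9 bits of entropy
offline_bits = math.log2(offline_guesses)  # ~46.5 bits of entropy

# Length of a random all-lowercase password needed to reach each target
# (26 symbols is about 4.7 bits per character).
bits_per_char = math.log2(26)
chars_online = math.ceil(online_bits / bits_per_char)    # 5 characters
chars_offline = math.ceil(offline_bits / bits_per_char)  # 10 characters
```

So a random 5-character lowercase password survives the online threshold, but the offline threshold needs roughly twice the length; that gap is the paper’s point.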

@Godel:
Interesting project! I wonder what is really on that USB device. Too bad there isn’t a white paper that spells out exactly what the system is doing. There seems to be too much of a leap of faith to believe the security claims.

Daniel October 25, 2014 1:05 AM

@Mark

The word “anonymous” and the concept of anonymity are fungible; they mean different things to different people. What anonymous means to the person on the street and what it means to security professionals are two different things. Facebook’s choice to use the word “anonymous” to describe their product is telling: they didn’t use the word privacy because it wouldn’t have meant the same thing to their target market. What Facebook is doing is bad because it is difficult for security professionals to promote their vision of anonymity (Tor) when that message is constantly being undercut or muddled by a behemoth like Facebook peddling a very different concept of anonymity. There is real potential for marketplace confusion among the unsophisticated.

Niraj October 25, 2014 5:49 AM

Just like the in-depth audit being carried out for TrueCrypt, it’s time we had a foundation to thoroughly audit important or widely used software, especially after the discovery of Heartbleed and other bugs.

There is also a lot of software that claims to be NSA-proof, such as BTSync; in the interest of security and the general citizenry, the foundation should audit those claims as well.

Clive Robinson October 25, 2014 6:46 AM

OFF Topic :

I don’t know if anyone else has picked up on this,

http://threatpost.com/researcher-finds-tor-exit-node-adding-malware-to-binaries/109008

Basically, a researcher scanning Tor exit nodes found, after a very short time, one in Russia that was putting backdoor and ET code into unprotected executables he downloaded.

Such man-in-the-middle attacks are not unknown; we are aware of the NSA doing similar things on ordinary HTTP transfers. But as far as I’m aware, it’s the first time it’s been done as part of the Tor network itself.

As I’ve noted before on a number of occasions, one of the significant weaknesses of Tor is that the network does not extend as far as the communications endpoints, so “end run” attacks such as this are possible in the resulting gaps.

Clive Robinson October 25, 2014 7:07 AM

OFF Topic :

It would appear Samsung Knox is not Fort Knox grade, more Hampton Caught Maze grade…

http://mobilesecurityares.blogspot.co.uk/2014/10/why-samsung-knox-isnt-really-fort-knox.html

Basically, they have done what most readers here know you shouldn’t: they’ve stored the password for the security apps on the phone in a way that makes it accessible. They use security by obscurity to encrypt it with AES, but generate the key from the device ID string (which any app can get) and a constant string…

So now that it’s known, it will only take moderate programming skills to write an app that reveals the password in plain text…
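The pattern Clive describes can be sketched in a few lines. This is a hypothetical illustration of the general flaw (an AES key derived from values any app can read), not Samsung’s actual code; the names, the KDF, and the constant string are all invented:

```python
import hashlib

DEVICE_ID = "ANDROID-3f2a9c1b"  # readable by any app on the device
CONSTANT = "knox-constant"      # fixed string shipped inside the binary

def derive_key(device_id: str, constant: str) -> bytes:
    # Illustrative key derivation: because both inputs are available to any
    # app on the phone, the resulting AES key is obscurity, not secrecy.
    return hashlib.sha256((device_id + constant).encode()).digest()

# A malicious app derives exactly the same 32-byte key as the real one:
attacker_key = derive_key(DEVICE_ID, CONSTANT)
legit_key = derive_key(DEVICE_ID, CONSTANT)
assert attacker_key == legit_key
```

Anything encrypted under a key built this way falls to whoever can read the device ID, which on a phone is effectively every installed app.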

I’m sure the head of the FBI will be thrilled by this “golden key” solution…

Clive Robinson October 25, 2014 7:22 AM

OFF Topic:

And today’s winner of the “No Sh17 Sherlock Award” goes to Google, for its warning that bit.ly links can be used to infect your machine with malware if you click on them,

http://www.google.com/safebrowsing/diagnostic?site=bit.ly&hl=en

Whilst the readers of this blog should know that all services like bit.ly can be used to do this, it’s nice to see it being documented like that so you can show it to less security aware individuals.

After all, some people (pointy-haired-boss types etc.) will demand evidence before they will even consider changing their behaviour… Which makes you wonder how they have managed to reach the age they have without picking up a Darwin Award or two on the way…

Bob S. October 25, 2014 7:34 AM

@Jacob
Re: Truecrypt

I, too, would like to know the status of the TrueCrypt audit. The initial phase was over quickly and essentially said there were some sloppy coding errors, but nothing that would seriously impact encryption. Then… nothing.

I’ve read a couple short blurbs about derivations coming on board, but nothing to get excited about, so far.

So, anyway, what’s the deal with TrueCrypt?

Anyone?

Sancho_P October 25, 2014 7:36 AM

@ Clive Robinson (Re adding stuff to binaries)

I found the info at
http://www.leviathansecurity.com/blog/the-case-of-the-modified-binaries/
a bit shallow, as I only (my bad) understood
– it adds some stuff (but not what exactly that stuff does)
– all binaries/downloads are affected (but with exactly what – the same? Win/*nix?)
This feature could be a bug, too.

Clearly the message is “It’s not safe because it’s named Tor”.

The principle is dangerous, especially the escalation mechanism hurts.

Thoth October 25, 2014 7:49 AM

@Clive Robinson
The method for sweeping away these security problems you mentioned is simple, but it takes tonnes of effort to execute. It’s either pure laziness or incompetence/ignorance on the part of the users.

Phones and devices are not proven to be secure, so they should be used only for digital objects that are expected to be insecure. Expect contact details and messages to leak, and they will leak. Samsung Knox or not, if they want to do business in a country that has wiretapping laws (most countries do), they are expected to provide some form of golden key, or the government of that land might just ban the sales and regard the device as contraband.

Tor exit nodes poisoning binaries: it’s not the first time people have been warned to access HTTPS sites (not HTTP) over Tor. As for binaries obtained over dubious channels (Tor is an anonymity network, but the data should be considered a “dubious” source because traffic is routed through its relays), sandbox them on a scapegoat machine properly configured to handle malware payloads.

Thoth October 25, 2014 9:27 AM

If you can’t scare people… use some good old cliché tricks… which do not always work:

http://www.theregister.co.uk/2014/10/25/quotw_ending_october_25/

Quote:
We all now know that the beautiful dream of the internet as a totally ungoverned space was just that — a beautiful dream. Like all utopian visions, it was flawed because it failed to account for the persistence of the worst aspects of human nature. Alongside the blessings … there are the plotters, the proliferators, and the paedophiles.

I wonder why all of these spy agency people love to use such words….

Nick P October 25, 2014 10:19 AM

@ Clive Robinson

I figured Samsung Knox would eventually be hit for something at Samsung’s end. I gave them some credit for at least starting with a solid foundation: INTEGRITY RTOS or Multivisor. Not sure which but I’m sure it was engineered well. The rest… who knows. If they’d budget for it, I’d gladly do a medium-to-high assurance engineering effort with them to prevent most of these problems. Samsung makes great stuff, with Apple & Microsoft being the main competition. I’m always up for causing those other two companies some headaches. Plus, I respect a company that can remain vertically integrated and profitable in the chip industry. I wouldn’t even try.

re VISC

That’s an interesting concept. I’m sure they’re going to argue (esp in patents) that it’s totally novel. The writer first thought of Transmeta. I first thought of Schell’s GEMSOS security kernel which had physical CPU’s and virtual CPU’s. Apps were scheduled onto virtual CPU’s, which were scheduled across physical CPU’s. Aside from multiprocessing support, this let the kernel do better isolation and covert channel mitigation by working at the vCPU level. It also aided portability where apps targeted the “smart hardware” of kernel interface + vCPU instead of hardware interface + CPU.
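The vCPU indirection described above can be sketched as a toy round-robin multiplexer. This is only an illustration of the concept, not GEMSOS’s actual scheduler, and all the names here are invented:

```python
from itertools import cycle

class VCpuScheduler:
    """Maps virtual CPUs onto physical CPUs, one assignment per tick."""

    def __init__(self, n_physical: int, n_virtual: int):
        self.physical = list(range(n_physical))
        self._next_vcpu = cycle(range(n_virtual))

    def tick(self) -> dict:
        # Each tick, hand the next runnable vCPUs to the physical CPUs.
        # A kernel working at this layer can also veto co-residency of
        # particular vCPUs to mitigate covert channels between them.
        return {p: next(self._next_vcpu) for p in self.physical}

sched = VCpuScheduler(n_physical=2, n_virtual=4)
print(sched.tick())  # {0: 0, 1: 1}
print(sched.tick())  # {0: 2, 1: 3}
```

Apps only ever see vCPU numbers, so the same binary runs unchanged whether the box has two physical cores or sixteen; that is the portability point made above.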

Far as the claims, I think it’s an interesting new application of vCPU multiplexing and adaptive computing. That it works per cycle is probably the best part of it as it can in theory catch the corner cases, both in potential speedups and hangups, that do in many processors. If I was comparing it, though, I’d be doing a comparison to TRIPS architecture as they’re both trying to make better use of functional units on chip. They might even look into combining them.

Personally, I’d be looking into designs like NISC or processors with onboard FPGA’s + JIT to them. These give us the potential to hit the best static and dynamic performance guarantees. So far, industry users of the FPGA approach do a combination of pre-built HDL blocks for performance/area with HLL-to-HDL synthesis for rapid development of the rest. The potential we aren’t seeing used is how we can automatically inject security into SOC’s or system designs the same way. A bunch of security IP’s, from IOMMU’s to crypto blocks, can be developed along with templates of proper use. Then, the designer just writes the HLL code while using the security I.P. the way regular coders do security/safety checks. The synthesis tools then convert that to a design with security baked into the hardware.

I think there’s potential there. I already have some I.P. on it, but lack the knowledge to fully exploit it. So, a trade secret until I decide on how to go about patenting it.

Grauhut October 25, 2014 10:49 AM

@Godel: “The more you read, the better it gets.”

That’s such an amazing amount of free lunch! And a lot of epic potential! 🙂

Jacob October 25, 2014 10:57 AM

@ Thoth:

“there are the plotters, the proliferators, and the paedophiles.

I wonder why all of these spy agency people love to use such words….”

Because it is a pissing contest among them…

Jake October 25, 2014 11:06 AM

@Godel:

sort of an interesting project, although at least two potential problems with it:

  1. it still does not seem to separate your PC from the internet. So even if the web browser is only on that stick, any other software that you run on your PC can open a port to the internet, and anything on the internet can attempt to exploit ports on your computer.

  2. depending on how the cloud storage is implemented, the stuff you save on it could be visible to the cloud storage administrators.

skeptical October 25, 2014 1:30 PM

HOW is it possible that there are so many squid stories that there’s enough for a new one every Friday??

Grauhut October 25, 2014 1:54 PM

@Skeptical: “HOW is it possible that there are so many squid stories”

Niche publishing has become incredibly cheap on the internet, and there is a market for advertising on niche news pages.

In the case of squids it’s the food industry. As long as there is food made from squids, there will be an ad market for squid news, and this will keep the squid news train running. 🙂

google.com/search?q=food+made+from+squids

CallMeLateForSupper October 25, 2014 2:23 PM

These amused me the most:

“… an online search into TerraCom resulted in a Lifeline application that had been filled out and was posted on a site >>> operated by Call Centers India under contract <<< with TerraCom and YourTel.”

Contractors; scourge of the 21st century; a way for an organization to distance itself from unforeseen events and avoid the consequences of same.

“…a lawyer for the phone companies accused the news organization of violating anti-hacking laws.”

If you can’t argue with facts, dazzle them with fancy footwork. (Or if all defensive rejoinders escape you at that moment, Tommy Smothers’ well-worn argument, “uh!… uh!… oh yeah?!”, might do.)

Adjuvant October 25, 2014 3:37 PM

@Clive, Nick P: Speaking of alternative architectures, Russel Fish of Venray (whom I introduced here, and regarding which Nick P updated here) has put together a series of articles on “the post-PC world” evaluating and comparing Venray’s work, the Automata processor, and another design known as the TOMI Celeste. Enjoy reading.

Adjuvant October 25, 2014 3:44 PM

CORRECTION: The TOMI Celeste is, in fact, Venray’s newer design. I correctly remembered that Fish was comparing three alternatives, but I did not recall that the third alternative was to be ARM-based, and was only passingly mentioned in this article series with a promise of a future article discussing it. So it’s just the TOMI Celeste and the Automata that are actually explored in this series, sorry.

Nick P October 25, 2014 5:04 PM

Appreciate the link as it’s a great read. The Automata Processor I’ve posted on here. I’m particularly excited about it because Clive and I have shown that so many input problems can be reduced to automata with analysis/efficiency benefits. GOLD Parsing and Semantic Designs work shows how useful a general purpose parsing system can be. Putting one in hardware with massive parallelism and cost efficiency was a great idea. The new thing I learned from the article is it’s built on cheap memory tech like Celeste. Micron is brilliant twice in this solution.

The Celeste I’ve already talked on. That it gives you 128 cores in a memory stick for under $40k is nice. That they just reused the existing DDR communications systems was brilliant. My first consideration was combining those with a SGI- or NumaScale-style interconnect to build a modern version of Thinking Machines Corp’s 64,000+ core monster MPP. It wouldn’t be cheap at Celeste’s prices. Yet, seeing what he did in Genetic Algorithms, I’m sure there’s some potential uses for a system like his where the processors themselves have more muscle.

My more… realistic… use was as an accelerator for secure systems. I often use untrusted front or backends in my designs. One of my schemes was to have people represent their algorithms in a type safe language with proof carrying code of their properties. It gets checked, maybe dynamic checks added, and then compiled into fast native code that interfaces with secure interpreter. Similar to a JIT. Seeing Celeste, I’d use a security-enhanced version of Parasail or X10 for the parallel application. Then, my system would compile the data crunching kernel to native code for Celeste. Might put an I/O reference monitor there for basic mediation and tag/capability management.

Thoth October 25, 2014 7:45 PM

Talk about CALEA abuse:

http://arstechnica.com/tech-policy/2014/10/chp-officers-reportedly-stole-cell-phone-photos-from-women-in-custody/

Another reason to simply not provide CALEA at all, even with a robust, easy-to-use, high-assurance mechanism. Give someone a golden key (trusted) or a backdoor (covert), and they will use it beyond the box the law has drawn for them (to quote Keith Alexander of the NSA) to do anything they want.


TAO :: CrowdFund-Zebra

Yeah… that’s a name I give this TAO operation, which would be to control/intercept/coerce crowdfunding sites like Kickstarter to kill off any opposition movements, crowdfunding being so popular these days.

You can control the site admins or people in the crowd funding management team or you can go straight to the members who start a particular crowd funding project. They are all pretty much sitting ducks as they expose their bank accounts and identity online.

Doug October 25, 2014 9:45 PM

@Jacob, can’t fault them for complaining.

It’s inevitable for cheap Tor boxes to go mainstream, given the amount of Snowden hysteria generated by the independent press. If Tor can be commercialized en masse, it’d sell like hot cakes, just as attaching widgets to DOS did. The public is so worked up by the press that even a single “Tor” sticker will sell stuff.

Grauhut October 26, 2014 1:27 AM

@Thoth: IMHO a small open-hardware company like Olimex or Cubietech should do the Tor router job on Kickstarter.

olimex.com/Products/OLinuXino/open-source-hardware
cubietech.com/

They know what they’re doing. Maybe we here should tell them… 🙂

65535 October 26, 2014 1:39 AM

@Clive

“It would appear Samsung Knox, is not Fort Knox grade… I’m sure the head of the FBI will be thrilled by this “golden key” solution…”

Ha! The Director of the FBI bought a pwned device. That could be a problem. 🙂

@Jacob, Bob S. and others

“I too, would like to know the status of the TrueCrypt audit.”

So would I.

Also, what happened with that survey of AV vendors and their answers as to their cooperation with the NSA?

https://www.schneier.com/blog/archives/2013/12/how_antivirus_c.html

@ Sancho_P

“I found the info at
http://www.leviathansecurity.com/blog/the-case-of-the-modified-binaries/
a bit shallow as I only (my bad) understood – it adds some stuff (but not what exactly that stuff does) – all binaries/downloads are affected (but with exactly what – the same? Win/*nix)…”

I would guess it affects all supported Windows products that require updates – but it looks like the hash reveals the payload has been altered.

“The good news is that if an entity is actively patching Windows PE files for Windows Update, the update verification process detects it, and you will receive error code…” – leviathansecurity

Speaking of payload transfers and security does anyone recommend Binfer to securely transfer files?

http://www.binfer.com/

@ Fgask, Thoth and others

“this exact thing was reported to be happening in the NSA as well. So now we basically have everybody between regular cops to NSA employes doing it.”

http://sanfrancisco.cbslocal.com/2014/10/24/east-bay-chp-officer-accused-of-stealing-nude-photos-says-its-game-for-police-california-highway-patrol-sean-harrington/

[and]

“Talk about CALEA abuse:”

http://arstechnica.com/tech-policy/2014/10/chp-officers-reportedly-stole-cell-phone-photos-from-women-in-custody/

Both CALEA and the NSA need to be cleaned up. The NSA is basically setting a bad example at the top and it being propagated down the ladder. This must stop.

Figureitout October 26, 2014 3:44 AM

Godel
–Wow…that better be a joke and they better return all the money to the morons that invest in something like that. That’s not possible.

Clive Robinson RE: VISC
–Wow, sounds neat. And well, whose fault is it if you’re chatting up ideas you want to implement on a blog, lol? If these guys actually made a new architecture, I hope they make some money.

Jacob RE: Truecrypt audit
–We need to find out exactly why the developers just blew up and said Truecrypt is insecure too. How could they remain hidden? Only started developing after hard lessons learned? Made commits via “code mules” and other means of hiding themselves? Why when MS quit supporting XP? What the hell happened? How is this still unknown? Does no one know who wrote Truecrypt (at least the hard parts…)…?

I want those questions answered too, in addition to if Truecrypt is really a risk to still use (that would be a damn shame).

Slight Warning

–Potential bug in Kali Linux, probably nothing serious. Using mostly default settings (i.e. a time limit of, say, 5-10 min of screen-on until screen lock), you can click on the “Places” or “Applications” tabs at the top left, then wait for the screen to go to sleep. On waking it up, you can see the screen again before typing the password into the login screen. You have to click before it goes back to the login screen. So it’s only a minor risk if you left something important up, which would be dumb if you actually cared about it; not a big bug.

Haven’t been able to recreate that bug in Windows though…

Daniel October 26, 2014 10:58 AM

@Francisco

Yes, is the answer to your question. But this is data collection disguised as a security improvement. One’s phone number tells a great deal about one; it’s tied into lots of big databases. That information is valuable to Twitter.

Nick P October 26, 2014 11:22 AM

@ Francisco and Daniel

Both of you seem to be right. Another reason to avoid Twitter or feed it BS.

NobodySpecial October 26, 2014 12:19 PM

Security bug in chromebook ?
I use incognito mode on an ARM chromebook for secure things like internet banking or checking email.

I went to a flight booking site in a fresh incognito window (to see if search history was giving me different prices) and clicked an extra back from the flight page and got back to an earlier internet banking site – although not a logged in session.

It seems that Chrome is remembering browse history and/or caching pages for incognito sessions.

Francisco October 26, 2014 12:30 PM

@Daniel

Never thought of that as data collection… In that case it’s not just a security flaw but a really disingenuous one.

65970 October 26, 2014 1:16 PM

Anybody have opinions on relative TAO-resistance of gruveo.com (in browser not app)? P2P–sometimes.

Nick P October 26, 2014 2:18 PM

@65970

If it…

Ships via Five Eyes carriers.

Is stored or operated on Five Eyes soil.

Uses commodity hardware, firmware, or software.

Has wireless features.

Was not developed with an EAL6+ process with top coders and security engineers.

Isn’t protected from EMSEC or side channel attacks.

…then it’s probably vulnerable to Five Eyes TLA’s using any number of their methods. Almost certainly to TAO.

Sancho_P October 26, 2014 3:32 PM

@ 65535 Re: “Russian Tor exit node modifies binaries”

“I would guess it affects all supported Windows products that require updates …”

Beyond that “guess” [ 😉 ] that would be exactly my question(s):
– Only Windows? All the same “wrapper” around? (seems to be, see malwr.com)
– What if one downloads e.g. http://www.deftlinux.net (Digital Evidence & Forensic Toolkit by Stefano Fratepietro, very interesting), will it be untouched – because of being Tux?
– And what does the patch do, the malwr analysis may have crashed (???).

Mind you, the Windows PE isn’t for the average user, it’s for sysadmins and OEMs.
If the Tor exit node only patched Win PE … ouch.
Again: ”Oh, someone downloading something for Win Update, so we can use it to send them our golden key, let’s patch it now” – this would show bad intention.

“… – but it looks like the hash reveals the payload has been altered.”

Yes, the hash – if one cares to check the hash. The average user won’t.
The download itself will not throw any error of course.
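For the minority who do check, the mechanics are trivial; the hard part is Sancho_P’s point that the published hash must arrive over a channel the exit node cannot also rewrite. A minimal sketch (the file name and expected hash are placeholders):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large binaries need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the publisher's value, obtained out of band, e.g. from a
# signed release file or a connection not routed through the same exit node.
# expected = "..."  # placeholder for the published SHA-256
# if sha256_of("download.exe") != expected:
#     raise SystemExit("binary was modified in transit")
```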

But in case of the Windows update the OS will do it when one then tries to update the system / driver by the “patched” binary.

And that is where the danger really starts:
If the update is installed by a self-declared “admin” (user), s/he will follow the error-code hint from Microsoft and download a “Fixit” from Microsoft … but it’s also “patched” then:

“If an adversary is currently patching binaries as you download them, these ‘Fixit’ executables will also be patched. Since the user, not the automatic update process, is initiating these downloads, these files are not automatically verified before execution as with Windows Update. In addition, these files need administrative privileges to execute, and they will execute the payload that was patched into the binary during download with those elevated privileges.”
[leviathansecurity.com] [emphasis added]

Now s/he may have handed over the computer to the adversary.
Any moderately intelligent and experienced Win user will circumvent any system “Warning”.

BTW, the paper cited in the article is interesting for Tor (or similar) users:
http://dl.packetstormsecurity.net/papers/general/Software.Distribution.Malware.Infection.Vector.pdf

An ISP can (e.g. when TLA-forced) re-route traffic via any node, including an HTTPS proxy (no warning in the browser), thus making hash checking questionable.
We’re back at trust – but whom? (Binfer? Why? I didn’t know them before, thanks anyway)

Grauhut October 26, 2014 6:02 PM

@Nick P:

  • If it’s stored on a computer owned or operated by a US company, wherever on this planet.

  • If the computer uses an OS developed by a US company… No, that’s a tinfoil hat! 🙂

AiD October 26, 2014 7:46 PM

Snowden Interviewed At Harvard Law School

http://yro.slashdot.org/story/14/10/26/0023235/when-snowden-speaks-future-lawyers-and-judges-listen

We are witness to a historic first: an individual charged with espionage and actively sought by the United States government has been (virtually) invited to speak at Harvard Law School, with applause. [Note: all of the following links go to different parts of a long YouTube video.] HLS Professor Lawrence Lessig conducted the hour-long interview last Monday with a list of questions by himself and his students.

Some interesting segments from the interview include: Snowden’s assertion that mass domestic intercept is an “unreasonable seizure” under the 4th Amendment; that it also violates “natural rights” that cannot be voted away even by the majority; a claim that broad surveillance detracts from the ability to monitor specific targets such as the Boston Marathon bombers; him calling out Congress for not holding Clapper accountable for misstatements; and his lament that contractors are exempt from whistleblower protection though they do swear an oath to defend the Constitution from enemies both foreign and domestic.

These points have been brought up before. But what may be most interesting to these students is Snowden’s suggestion that a defendant under the Espionage Act should be permitted to present an argument before a jury that the act was committed “in the public interest.” Could this help ensure a fair trial for whistleblowers whose testimony reveals Constitutional violation?

AiD October 26, 2014 7:56 PM

@Thoth

‘Quote:
We all now know that the beautiful dream of the internet as a totally ungoverned space was just that — a beautiful dream. Like all utopian visions, it was flawed because it failed to account for the persistence of the worst aspects of human nature. Alongside the blessings … there are the plotters, the proliferators, and the paedophiles.’

That is an unusually stinky crock…. he sounds like a Lord Palpatine wannabe addressing congress who, for the first time, read of ‘the world that never was’, and ‘v is for vengeance’.

Especially ironic considering the organized crime that is his group, as perspicuously paltry as it is.

HK121 October 26, 2014 8:16 PM

@65970

Re: Gruveo.com

It strikes me that this website is a solution looking for a problem to solve. Strategically, Nick P is correct insofar as the NSA is concerned, but if the NSA doesn’t fit within one’s threat model it looks OK technically. I couldn’t get it to work with TBB under Windows, but I could get it to work with Orbot and FF under Android. I imagine call quality will vary wildly using Tor, but it is possible. I’m not a fan of the fact that they require Javascript and that they send all one’s metadata to Google, but Tor fixes that problem. I’m also bothered by the fact that they are not more forthcoming about the encryption methods they are using. Ergo, I wouldn’t trust the service with anything too sensitive. But if they are trustworthy with the implementation… a big if… one should be able to maintain at least a certain degree of privacy with the service.

The bigger question, I think, is that if one really cares about anonymity, why would one use voice chat (which leaks biometric data) or, even worse, video chat (which leaks even more biometric data)? Then there is the issue of both parties needing the same code (key), and so the problem of key exchange arises. So in what situations would the service be valuable? I can’t think of a standard case.

65535 October 26, 2014 8:30 PM

@Sancho_P

You have a good point.

“these ‘Fixit’ executables will also be patched.” –leviathansecurity

Now s/he may have handed over the computer to the adversary.
Any moderately intelligent and experienced Win user will circumvent any system “Warning”. –Sancho_P

Yes, most Windows users will probably disable the UAC [or use an Administrator account in XP and below]. And, as far as I can tell Fixit solution downloads don’t automatically check the hash value. That is a real problem.

I interpreted Win PE as all Portable Executable files [EXE, object code, DLL, FON and so on], not the full Preinstallation Environment bundle. Hence, I understood that ALL Windows updates could be manipulated. Now I am not so sure.

I have seen a few critical Windows boxes in production [such as HR and accounting] that give the “Windows Update encountered an unknown error, Code 802000xx…” while installing Windows updates, and I was suspicious of corrupted or manipulated updates.

If the packaging of malware only affects the Preinstallation Environment package, the damage would be lower in scope. But if the malware can be installed into ALL executable files, it is a big problem [NSA implants and hackers].

Your other comments bring up more questions. Good catch.

65535 October 26, 2014 9:47 PM

@Sancho_P

“We’re back at trust – but whom? (Binfer? Why? I didn’t know them before, thanks anyway).”

To answer that part:

Binfer looks simpler and transfers large files [though it is a paid service].

Clients could use it for legal briefs, large HR records, medical records and so on. I could not discern the actual encryption method for Binfer so I was asking the experts on this board.

One other thing: I don’t think leviathansecurity is discussing XPE [XP Embedded, Win CE, or their other small-footprint OS offerings used in very small boxes and ATM machines]. I could be wrong.

Thoth October 26, 2014 10:18 PM

@65535, AiD, et. al.
Let’s put it in a very crude way: Governments now are run by “crooks”. They only care about their own profits and pockets and not their citizens.

Trust in governments worldwide has seen a huge erosion, from the Middle East and Asia to Europe and the Americas.

They use cliche terms to coerce, blackmail and threaten, and if they don’t succeed, they use extra-judicial means to get what they want fulfilled.

The article I posted above is a small example of a larger issue of government misbehaviour, and of why everyone should secure their boundaries against such misuse with high-assurance security.

@Samsungg Knox et. al.
Is there a specific CC EAL and FIPS 140-2 certification for this one ?

@Fransico
They are trying to use phones for authentication and tracking. This only worsens security and privacy. If you can pull a phone database, you can make correlations and bruteforce the passwords. Don’t forget it uses another provider, Fabric, to supply that technology. Who knows where the login details would reside, at Twitter or Fabric, and what security assurance and incident response they provide (plus cooperation with TLAs). Social technology (Twitter/FB/GPlus…etc…) is meant to disrupt privacy at its core due to its initial design.

@all
For those who just aren’t convinced that phone and metadata correlation are very powerful tools for breaching privacy, I would advise downloading and trying the Neo4j graph database tool (http://neo4j.com/) and scripting some quick Java code loaded with fake identifiers and metadata. Deanonymization and identity correlation could not get much easier to automate with tools like Neo4j and other graph databases that support relationship mappings!
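To illustrate the point without Neo4j itself, here is a minimal pure-Python sketch of the correlation idea: any two identities that share enough metadata items (cell towers, phone numbers, contacts) get linked. All identifiers below are fabricated for the example.

```python
# Toy metadata-correlation sketch (not Neo4j): pseudonyms that share
# enough metadata items with a known identity get linked to it.
from collections import defaultdict

# Each record pairs an identity with one observed metadata item
# (a tower, a phone number, a contact seen alongside it). Fabricated data.
records = [
    ("alice@mail", "tower-17"), ("alice@mail", "+555-0100"),
    ("anon-user-42", "tower-17"), ("anon-user-42", "+555-0100"),
    ("bob@mail", "tower-03"),
]

def correlate(records, min_shared=2):
    """Link every pair of identities sharing >= min_shared metadata items."""
    items = defaultdict(set)
    for ident, item in records:
        items[ident].add(item)
    idents = sorted(items)
    links = []
    for i, a in enumerate(idents):
        for b in idents[i + 1:]:
            shared = items[a] & items[b]
            if len(shared) >= min_shared:
                links.append((a, b, shared))
    return links

# The pseudonym "anon-user-42" gets linked to "alice@mail" via two
# shared items -- exactly the kind of join a graph database automates.
print(correlate(records))
```

A real graph database does the same join declaratively and at scale; this only shows why the correlation itself is trivial once the metadata exists.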

AiD October 26, 2014 10:34 PM

Reform:

I think reform discussions are good, but nobody should be very hopeful about reform. Not only have the recent disclosures not made any impact whatsoever; people would be very foolish to believe there were not – long ago – breakaways from the constraints of governmental accounting in secret operations, agencies, and divisions, which can never be touched by any manner of reform.

There is a tie of money from government to a governmental agency. It is still legitimate and on the books. So, how do you maintain that when an agency is making its operating income entirely independently? This is actually a very hard question people do not ask.

The answer gets down to: how can you integrate into agencies and help ensure you control decisions made? One, money. The easier answer. Two, true capacity for integration. If you have a system for roving about agencies, departments, organizations as you will, then you can be anyone’s supervisor and operate as you please with all the authority & power you wish.

Democracy is a farce. I suppose the good news here is that everyone in the semblance of a democratic nation at least has quite a number of entertainers beating the drums and playing the guitar for your benefit?

It is show business.

It is also true the truly bad guys do not run anything, because at their heart, they are simply much too incompetent to do so.

That is… why they are bad in the first place.

Nick P October 27, 2014 1:18 AM

@ Grauhut

“If its stored on an computer owned or operated by a US company, whereever on this planet.”

Great catch! How did I slip on that one? (sighs) The other one, though, isn’t true if the development can be reviewed for subversion. My best processes involve parties that are mutually distrusting reviewing each other. The code, documentation, repos, and procedures must be explicitly configured for it, though.

@ AlanS

It’s actually part of a larger trend that goes back many years: BYOD. The trend is you bring a device to work and there’s some built-in capability to protect work data on it. A subtrend was to use virtualization to split the phone into two or more sections. Simplest is Work and Personal. At least half a dozen companies were pushing this years back, with new ones each year or so. VMWare used the concept for virtual appliances packed into their own executing files. The Samsung Knox product seems to use an embedded version of the latter (from Green Hills) to do that or the former (BYOD).

General Dynamics acquired the main, dominant competition: OK Labs. I often referenced OK Lab’s OKL4 kernel here and the excellent tools they had. They were deployed to tons of phones. Then, they got integrated with mobile security solutions. Then, GD bought them as a mobile addition to other offerings like their TVE Workstation. Now, GD is partnering with Samsung to combine whatever I.P. they got from OK Labs with Knox and especially the awesome Galaxy phones. This should get GD plenty of great contracts on top of whatever their bribery guarantees them, with Samsung getting their cut too.

So, everything about this existed for corporate users before government sales got into the picture. Then, the two were like “hey, let’s toss these really secure phones and get some cool ones with buzzwords instead! They’ll stop enough to at least seem secure.” Yeah, let’s do it!

@ HK121, 65970

re threat model

Thing is, we’ve learned that regular black hats are good at finding vulnerabilities. They and TLAs do similar things; the TLAs just have more resources and potentially more exotic attacks to use. So, let’s look at this particular app’s claims just on the basis of its front page.

There is a web version using HTML5, and some apps. The web browsers that support HTML5 get smashed regularly by run-of-the-mill blackhats. Once they control the machine, it’s game over, unless it’s automated harvesting and the app isn’t popular. If it’s an app, the same goes for the platform. Assuming they can’t hack the browser/platform, we move on to the app’s security.

First, we must trust that they’re telling the truth. There’s been plenty of spyware posing as security products. Second, they use peer-to-peer technology “whenever possible.” Can they deviate from that arbitrarily? Third, we must trust that they implement the crypto protocol engine and other components in a way that doesn’t introduce vulnerabilities. Fourth, we must trust that the HTML5 portions themselves don’t let them or others cause us problems. Their use of it brings risks that don’t otherwise occur. And finally: are they skilled at secure coding, etc.? If not, they probably made plenty of mistakes.

EDIT: Just before posting, I looked at their press. I found they use Adobe Flash for the camera and its communications protocol. Another weakness that could pay off for regular blackhats. That’s three risky technologies in one product.

Andrew_K October 27, 2014 6:25 AM

@ Sancho_P
Regarding the mystery vote

I really like the idea of that Republican candidate just trolling, simply by claiming that the voting machine counted wrong.

If they can prove him wrong, they have just shown that his vote was not anonymous. A very slippery slope, that is.
If they can’t, they can’t stop him from accusing the machine.

@ Thoth
Regarding trust to gvmt

Losing trust probably starts with lying and not only getting caught but also getting away with it. That’s not a completely new phenomenon. Governments have probably been lying for as long as they have existed. But today we the people have a much greater chance of catching them lying. Live and in color!
But none of the political caste honestly cares about being caught. And that’s when people turn away from ballots and start ignoring their rights.

Andrew_K October 27, 2014 8:08 AM

@ Daniel
what happens when the security industry makes anonymity “the product”? Forget Tor, think Facebook.

http://www.cnn.com/2014/10/23/tech/mobile/facebook-rooms-app/index.html

I think this is much more dangerous than just commercializing anonymity.

People do not know the difference between what Facebook calls “anonymous” and real anonymity.
Many people behave very differently when they assume they are incognito. They are perhaps more honest regarding personal details, sexuality, drug consumption, and political opinions — just to name a few areas suitable for blackmailing.

Facebook (and anyone attached to it) tries to extend its profiles with even more personal details. This is classic snake oil, aimed at the naive.

65970 October 27, 2014 9:24 AM

HK121, Nick P, thanks much. Calomel says gruveo has a domain-validated certificate, perfect forward secrecy (key exchange ECDHE_PFS), TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256. Works well with Tor.

Good point about voice-printing/facial recognition. On the other hand, voice with PFS can go away, unlike text. Flash: bad, yes. Balkan origin: trust more! No, no apps, only the browser, ever, perhaps reinforced with other protections – chroot the browser, a virtual machine, counter XSS, what else?

So carefully? We thank you.

Larry October 27, 2014 10:29 AM

The law enforcement agents who took that quote seriously are presumably working very hard, which is a good thing, to say the least. There are all forms of evil; the existence of one does not disqualify the other. For anyone who has looked evil straight in the eyes, as one will likely experience through a few short glimpses at the vast uncut internet streams they presumably have at their disposal, what he said will make good sense. Evil exists, as the common man would see if he ever took a few strides down Hollywood Blvd at dusk.

AlanS October 27, 2014 12:58 PM

New Neil Richards privacy paper posted to SSRN:

Richards, Neil M., and Jonathan H. King. Big Data and the Future for Privacy. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, October 19, 2014.

Abstract: In our inevitable big data future, critics and skeptics argue that privacy will have no place.  We disagree.  When properly understood, privacy rules will be an essential and valuable part of our digital future, especially if we wish to retain the human values on which our political, social, and economic institutions have been built.  In this paper, we make three simple points.  First, we need to think differently about “privacy.”  Privacy is not merely about keeping secrets, but about the rules we use to regulate information, which is and always has been in intermediate states between totally secret and known to all.  Privacy rules are information rules, and in an information society, information rules are inevitable.  Second, human values rather than privacy for privacy’s sake should animate our information rules.  These must include protections for identity, equality, security, and trust.  Third, we argue that privacy in our big data future can and must be secured in a variety of ways.  Formal legal regulation will be necessary, but so too will “soft” regulation by entities like the Federal Trade Commission, and by the development of richer notions of big data ethics.

Markus Ottela October 27, 2014 1:20 PM

TFC now has a sister project, TFC-AES, which attempts to use AES in GCM mode via Dwayne Litzenberger’s PyCrypto library. Key size is set to 256 bits, and the key is preferably generated by sampling from the TFC HWRNG: after two-pass von Neumann whitening, the key is XOR’d with /dev/random by default. The nonce space is 64 bits, and by default a blacklist is enabled: nonces are thus never repeated, even across different contacts.
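A minimal sketch of the nonce-blacklist idea described above (this is my own illustration, not TFC’s actual code): draw random 64-bit nonces and refuse to ever hand out the same value twice, regardless of which contact it is for.

```python
# Sketch of a never-repeating 64-bit nonce generator with a blacklist.
# In a real implementation the blacklist would be persisted to disk so
# that repeats are also prevented across restarts.
import os

class NonceBlacklist:
    def __init__(self):
        self.used = set()  # every nonce ever issued, for any contact

    def next_nonce(self) -> bytes:
        while True:
            nonce = os.urandom(8)  # 8 bytes = 64-bit nonce space
            if nonce not in self.used:
                self.used.add(nonce)
                return nonce

bl = NonceBlacklist()
n1, n2 = bl.next_nonce(), bl.next_nonce()
assert n1 != n2 and len(n1) == 8
```

With a 64-bit space and random draws, collisions only become likely after billions of messages, but the blacklist removes even that residual risk, which matters for GCM, where nonce reuse under one key is catastrophic.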

The implementation is of course only initial. I have yet to look in detail at Salsa20, Threefish, and a Keccak-based stream cipher. I’ve additionally considered tuning AES: replacing key expansion with 14 HWRNG-generated round keys would give some of the security through obesity that is lost when the OTP is discarded, and possibly defeat related-key attacks. The number of rounds could also probably be increased to strengthen the cipher; the cost in computing time could be reduced with pre-computed round functions. Even though the intuition is that no harm could be done with these modifications, it will require a lot of research – after all, “don’t implement your own crypto” is rule no. 1.

Source code of the sister project is available at
https://github.com/maqp/tfc/tree/aes

Also, the main project had a security issue. Namely, the receiver modules were not authenticating the order/number of long message packets, and additionally, keys of dropped packets were not being overwritten. The bugs have been fixed, and a tool for RxM that overwrites keyfile data from the start to the current offset is available and should be run by all users.

Gerard van Vooren October 27, 2014 1:27 PM

@ AlanS

“Abstract: In our inevitable big data future, critics and skeptics argue that privacy will have no place.”

Call me a critic and skeptic (in this specific case).

The problem is not technical, not even legal. It consists of two parts. One is very greedy companies who will keep pushing the envelope, and two is the word “handy”. Most people just don’t care what happens with their data/metadata as long as they don’t know about it or aren’t bothered by it, and as long as the communication device they are using is “handy”.

In some sense it is really selling your soul to the devil.

Terms and conditions may apply.

Nick P October 27, 2014 6:01 PM

@ Markus

Good work on putting in a standard cipher. If you’re using Python & want extra assurance, check out the TripleSec link:

https://keybase.io/triplesec/triplesec_now_in_python.html

I haven’t vetted its quality. I just liked that they combined Salsa20, AES, and Twofish. I’ve cascaded them before in my polymorphic cryptosystems. I’d replace either AES or Twofish with a block cipher that’s internally different; a Keccak-based construction comes to mind. I was using IDEA because it’s lasted so long. The scrypt use is a plus.

Do not replace the key expansion. Don’t do anything that might negate the security evaluation. Instead, diversify on safe points: what ciphers you apply in what order, the initial counter values, how you preprocess the key with different salt/algorithms, etc. Plenty of ways to obfuscate without risking weakening assurance. And they can be encoded in the pre-shared key or OTP, derived using a CRNG.
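One way to picture this "diversify on safe points" idea (my own sketch, not Nick P’s actual scheme): derive the cipher order and per-cipher initial counters deterministically from the pre-shared key with HMAC-SHA256, so both peers compute identical parameters without transmitting anything. The cipher names are placeholders for whatever cascade is in use.

```python
# Sketch: derive cascade order and initial counters from a pre-shared key.
# Both endpoints run this and get the same parameters; an observer without
# the key cannot predict either. Cipher names are illustrative placeholders.
import hashlib
import hmac

CIPHERS = ["salsa20", "aes", "twofish"]  # hypothetical cascade members

def derive_params(pre_shared_key: bytes):
    """Return (cipher order, per-cipher 64-bit initial counters)."""
    order = CIPHERS[:]
    seed = hmac.new(pre_shared_key, b"cascade-order", hashlib.sha256).digest()
    # Fisher-Yates shuffle driven by bytes of the derived seed
    for i in range(len(order) - 1, 0, -1):
        j = seed[i] % (i + 1)
        order[i], order[j] = order[j], order[i]
    counters = {
        name: int.from_bytes(
            hmac.new(pre_shared_key, b"ctr-" + name.encode(),
                     hashlib.sha256).digest()[:8], "big")
        for name in order
    }
    return order, counters

order, counters = derive_params(b"example pre-shared key")
```

Because every parameter is a public function of the secret key, none of the diversification touches the ciphers’ internals, which is exactly the constraint Nick P describes: obfuscate the arrangement, never the evaluated primitives.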

Overall, it’s good that you’re constantly tweaking and fixing things. Keep up the good work.

Benni October 27, 2014 6:33 PM

Got an apple?

So, upgrade its operating system and then your location gets uploaded into the icloud:

http://www.washingtonpost.com/blogs/the-switch/wp/2014/10/20/apples-mac-computers-can-automatically-collect-your-location-information/

But no, not only that. Your new Apple operating system silently uploads EVERY unsaved file into the icloud:
https://datavibe.net/~sneak/20141023/wtf-icloud/

So, you write a text, and before you could even save it, apple sends it to the icloud.

Apple is an NSA PRISM partner. The fact that with the new Apple system the NSA is getting all this information made the Chinese authorities a bit jealous. And that is why they are now doing man-in-the-middle attacks against the icloud:

https://en.greatfire.org/blog/2014/oct/china-collecting-apple-icloud-data-attack-coincides-launch-new-iphone

So they are not only getting the contact list and messages of some Chinese iphone users. No, they are getting every unsaved file of every Chinese apple user….

god there must be many nude photos in that database…..

When apple is uploading every unsaved file, the icloud probably has become one of the largest nude photo collections on the planet…..

Too sad that neither the NSA nor the Chinese services share their intercepted nude pics with us….

Benni October 27, 2014 6:36 PM

And no, declining to synchronize a contact’s email address with the icloud account does not help:

“It would appear that iCloud is synchronizing all of the email addresses of people you correspond with, even for non-iCloud accounts, to their recent addresses service. This means that names and email addresss that are not in iCloud contacts, not synchronized to your device, and only available in an IMAP-accessed inbox are now being sent to Apple, silently.”

Nick P October 27, 2014 6:43 PM

@ Benni

“god there must be many nude photos in that database…..
When apple is uploading every unsaved file, the icloud probably has become one of the largest nude photo collections on the planet…..”

Lol that’s probably true. And yet you certainly don’t see celebs doing DMCA notices on the iCloud. You know the infamous Apple Gestapo team probably has a direct line into that:

“We’re just making sure no Apple trade secrets were moved into these accounts. Just for internal securi… holy s*** she’s fine! (cough) I mean, her account is fine. No sensitive information there.”

AlanS October 27, 2014 9:03 PM

@Nick P

More on Knox. So this is what a government-certified “front door” looks like? Funny.

AlanS October 27, 2014 9:23 PM

@Gerard van Vooren

I read the argument differently. They write: “Privacy is not merely about keeping secrets, but about the rules we use to regulate information, which is and always has been in intermediate states between totally secret and known to all. Privacy rules are information rules, and in an information society, information rules are inevitable.”

I think if you get away from “privacy as secrecy” to “privacy as control of information” about oneself, which is a definition along the lines of Alan Westin’s from the 1960s, the idea that privacy can have no place is nonsensical. As they write, “Information rules are inevitable”. The question then becomes what are the rules? And under the rules who has control of what and why? And are the rules consistent with our values and the sort of society we want to live in?

I have no patience for either the technological determinist argument or the people don’t care argument. Both arguments are nonsense. This is what the people who reap the benefits from controlling knowledge want you to believe as it subverts serious discussion about what they are doing.

Daniel October 28, 2014 1:53 AM

@Alan S

Correct. Because the “privacy as anonymity” argument brought up in past threads is identical to the “privacy as secrecy” argument. The only reason one wants anonymity is to be secret. Anonymity is “the secret who”, which is equal and complementary to “the secret what”.

My major point is that when we begin to talk about privacy as control of information that this conversation must begin with data persistence (retention) regulations because data which does not exist is absolutely beyond anyone’s control. This is to say that privacy interests begin when data is retained and cease to exist when the data is no longer retained.

sena kavote October 28, 2014 3:36 AM

Dividing software to many processes

Security is just one possible advantage of dividing a program’s subsystems into many processes that communicate via interprocess communication. Most processes should be sandboxed so that they can interact only by communicating with one or a few other processes.

Let’s take Mozilla Firefox as an example of what it could mean to break a piece of software into many smaller parts. This is just one way to do it. Imagine a network graph with this description.

Decompressing images is done in separate processes. JPG, PNG etc. have different executables, and there could be more than one JPG decompressor process at a time. The raw pixel data is piped to a process that has a window and handles scrolling and zooming.

Compressed data is piped to the decompressors either directly from the only internet-facing process, or from the internet process to decryption and from decryption to decompression. Every cipher type could have its own executable.

Usually one process handles file operations.

One central process named firefox leads this herd of processes. It asks the internet-facing process to connect with sites, tells the window process to scroll the web page, and asks the file operations process to save received data, among other things.

The window process takes user input from keyboard and mouse and pipes it to the central firefox process. (If the operating system allows, 2 keyboards on the same computer could enable simultaneous typing in 2 different places on a web page. This 2-keyboard setup would be more useful for 2-player games than for typing.)

The JavaScript process can pipe raw pixel or text data to a window process, and take data from firefox.

Fonts are drawn in a window process.

The processes have some distrust of each other and check what they are asked to do. The browser can cope even if one or more processes are taken over by an attacker.

These are rough outlines of a draft of how Firefox could or should be divided. Much other software could also benefit from this kind of separation.
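A toy sketch of the pattern in Python’s multiprocessing (Firefox itself is C++; this only illustrates the architecture, not its implementation): a "decompressor" worker whose sole interface to the rest of the program is a pipe, with the central process validating what comes back instead of trusting it.

```python
# Toy version of the split described above: an isolated worker process
# receives compressed data over a pipe and returns decompressed data;
# the central process validates the result rather than trusting it.
from multiprocessing import Pipe, Process
import zlib

def decompressor(conn):
    # Worker loop. Real sandboxing (seccomp, jails, etc.) is out of
    # scope here; the pipe is its only channel to the program.
    while True:
        blob = conn.recv_bytes()
        if blob == b"QUIT":
            break
        try:
            conn.send_bytes(zlib.decompress(blob))
        except zlib.error:
            conn.send_bytes(b"")  # report failure instead of crashing the UI

if __name__ == "__main__":
    parent, child = Pipe()
    worker = Process(target=decompressor, args=(child,))
    worker.start()
    parent.send_bytes(zlib.compress(b"pixel data"))
    result = parent.recv_bytes()
    # Central process checks the reply before using it.
    assert result == b"pixel data"
    parent.send_bytes(b"QUIT")
    worker.join()
```

If the worker is compromised, the damage is bounded by what the pipe protocol allows, which is the whole point of the split.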

Other advantages besides security against attacks directly on the software include: stability; reduced download size for upgrades if only some binaries change; increased trust in upgrades if they change only non-critical parts; the ability to track subsystem resource use in KSysGuard, htop, GNOME System Monitor or another system monitor; the ability to swap some executables for 3rd-party versions; and a performance increase when using many cores.

Different versions of parts could also enable obscurity that changes every session.
If at a given moment a piece of software has 20 processes and every one of those has 2 different versions, the software could take pow(2, 20) = 2^20 = 1,048,576 different forms.

Putting processes arbitrarily to 2 or more VMs or computers

If 2 processes are meant to communicate via interprocess communication, they could be put on different virtual machines or different computers that communicate, completely arbitrarily and without altering the binaries. The OS would need special support for this.

In every boot and at every launch of a piece of software, half of the processes could start on one machine and the other half on a second machine, when those machines are connected by 10-gigabit Ethernet. The separation could be only partially arbitrary/random, so that the 2 sets of processes are chosen such that the combined bandwidth of interprocess communication never exceeds the Ethernet capacity and does not need super-low ping.

This (half-)arbitrary division would provide some obscurity that changes automatically at every boot, launch and possibly even during running of software.

Thoth October 28, 2014 3:47 AM

@Markus Ottela

Suggested Improvement for Display of TFC Messages

Problem Category:
– User Interaction (UX)

Problem:
– Keyboard input is blindly keyed in and there is no way of displaying input.
– Output is shown separately from keyboard input.
– User has no way of correlating incoming messages from keyboard input. Makes conversation very hard to track and correlate.

Solution:
– A third data diode is placed between the incoming message and keyboard input data diodes.
– The third data diode only has the capability of receiving but not sending messages.
– The third data diode is then hooked to an EMSEC-protected monitor screen.
– Incoming properly decrypted + verified messages are sent to the third data diode together with user keyboard inputs (and probably even mouse inputs if necessary) and formed into a single complete conversation thread. This makes conversation more comfortable.
– The third data diode should not hold any form of keyfiles or keys of any sort to enable its ability to display messages.
– Data security is thus achieved by the third data diode hooked to the screen that only allows message input and not output.

Completion Solution:
– To bring the TFC solution into a single hardware package for portability.
– Triple segmented compartments with EMSEC resistance in each segmented compartment.
– Each compartment holds its own modules and data diodes.
– The outer casing has four ports: screen monitor, input peripheral, incoming connection cable and outgoing connection cable.
– The monitor can be embedded, with the removal of the external screen port, to make it more secure.

Thoth October 28, 2014 3:49 AM

@Markus Ottela and Nick P
Regarding TripleSec, I think the use of AES might trigger off alarms and bells to those who are sensitive against the NIST/NSA algorithm. I would and have suggested the use of Serpent cipher in the replacement of NIST/NSA/AES.

Clive Robinson October 28, 2014 5:08 AM

@ Sena Kavote,

With regards “Dividing software to many processes”, it’s a subject that has been discussed in some depth on this blog before, by Nick P, RobertT, myself, Wael and several others.

I recommended using the equivalent of a scripting language for the applications, with the scripting elements written by those who had good security skills. The running script would run on hundreds if not thousands of lightweight CPUs, with their access to main memory mandated by an MMU controlled not by the CPU but by a security hypervisor. This had the advantage of not just putting the script element into a “jail” but also giving it only sufficient resources to carry out the functions of the element, thus leaving no room for malware to get in. The hypervisor could also randomly halt a CPU and examine its registers and memory to check for security and other errors. The hypervisor would also strongly monitor the IPC and the process signature, halt a process when it went out of bounds, and raise an exception to a higher-level security system. There were quite a few other things, such as “probabilistic security”, that might well be of interest to you. For various reasons it got dubbed “Castles-v-Prisons” or “C-v-P” on this blog; go look it up, as you might find it interesting.

Clive Robinson October 28, 2014 6:13 AM

@ Thoth,

I preferred Serpent long before the finalist was announced, as it is a much more conservative design, and thus I was quite surprised when it did not win.

From my point of view the eventual winner was not one I would have risked any money on, as it was too new and untried, and actually not that good performance-wise when viewed against what the marketplace needed then and for the foreseeable future. Which might account for why RC4 and Triple DES are still with us.

Further, before the ink was dry on the NIST signature, questions were being asked about its implementation issues. And as I’ve been known to say in the past – long before Snowden or the elliptic curve random generator issues – I have a deep suspicion that the NSA rigged the contest on the implementation issue such that side channels would be virtually guaranteed in nearly all resulting implementations.

TIM October 28, 2014 7:57 AM

@ Clive Robinson

I’m a little late for a reply, but hope you read it anyway.

I read your post about adding code to downloaded binaries over TOR and an old idea I had in the past came up again.

If I would be NSA and want to deanonymize TOR users, I would work with hidden signatures in the original files to identify them.
I mean, if they would access the download-server for a special tool, mostly used of the targetted group of TOR users, and they would change e.g. a character in the helptext within the tools binary on a daily basis, then they could search in the clients for this file and identify this user as one who downloaded the file on a concrete day. Then the range of related IP-Adresses should be much smaller.
Or they change the tool by adding a new functionality e.g. try to access a server that doesn’t exist oder doesn’t respont to get the DNS metadata at the provider to get the original IP of the TOR user. Maybe they can reroute the access to specific servers over their own servers to identify, if the access comes from a TOR exit node and then manipulate the downloaded data (binary or script in website to post the real IP or specific information (e.g. serial numbers) over TOR to the before accessed server, because this must be allowed from the TOR user even if he is paranoid).

I think there are many ways to implement a signature in downloaded information and files to make the client system more individual, or identifiable even over TOR.
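A sketch of how such a per-day signature could work in principle (all names and data below are fabricated for illustration): the server varies one embedded string each day, so the hash of a copy later found on a client reveals which day it was downloaded.

```python
# Sketch of the per-day download watermark described above. The "binary"
# and dates are fabricated; the point is only that a one-byte daily change
# makes every day's download uniquely identifiable by hash.
import hashlib

BASE_BINARY = b"\x7fELF<tool code>Usage: tool [options]"  # stand-in bytes

def watermarked(day: str) -> bytes:
    # e.g. vary one character of the embedded help text per day
    return BASE_BINARY.replace(b"[options]", b"[options " + day.encode() + b"]")

# Server side: record which hash was served on which day.
served = {day: hashlib.sha256(watermarked(day)).hexdigest()
          for day in ("2014-10-26", "2014-10-27", "2014-10-28")}

# Later: hashing a copy recovered from a client identifies the download day.
found = hashlib.sha256(watermarked("2014-10-27")).hexdigest()
day = next(d for d, h in served.items() if h == found)
assert day == "2014-10-27"
```

Combined with logs of who connected on that day, this shrinks the candidate set dramatically, which is exactly the concern raised above.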

What do you think?

Maybe this is already reality and I just haven’t heard of it.

Markus Ottela October 28, 2014 8:02 AM

@ Nick P

Thanks. I’m going to take your word for it regarding key expansion.
Regarding the order within cascaded encryption, I wonder whether a cipher suspected to be weaker than the others should be placed innermost or outermost. I did a quick search of the IEEE library but didn’t find papers on the subject.

I’ll have to see whether cascaded encryption has a notable performance cost when TFC is run on an RPi. I’m not sure how to go about auditing TripleSec, though. It’s going to need test vectors for each cipher, and they have to be evaluated at the points where ciphers are changed.

@Thoth

“Keyboard input is blindly keyed in and there is no way of displaying input.”

Figure 5 on page 9 of the whitepaper is slightly incomplete, as it doesn’t display all peripherals. If you look at figures 19 and 20 on pages 18 and 19 of the manual, you will see the TxM is used with a display unit. Additionally, starting from page 28 of the manual, the functionality of both TxM and RxM is displayed as screenshots of terminals, titled according to device.

“Output is shown separately from keyboard input.”

You see the message typing box on the TxM screen, and the entire conversation on the RxM screen. Messages that you send to a contact are mirrored to your RxM and decrypted there using a local copy of the recipient’s decryption key. This decoupling is the only way to prevent network-based post-install exploitation.

“User has no way of correlating incoming messages from keyboard input. Makes conversation very hard to track and correlate.”

TFC only has one flow for all conversations on each device. Both devices show whom a message was sent to, and RxM also shows which contact a message was received from. In the future, different contacts could have separate “tabs” the user could switch between. An encrypted command would be relayed from TxM to RxM to select which conversation should be displayed. The UI definitely needs more effort, but it’s not the top priority. Once I have time to start playing with nCurses, I might try to create something more interactive. Keeping the program code minimal is one of the main goals; something users who wish to audit the software themselves before use will appreciate.

Regarding the proposed solution
A data diode is only a dumb repeater in the middle of an RS-232 link that enforces unidirectionality of data transmission with an optical gap. If I understood your goal correctly, having two TCB computers connect to the same monitor using data diodes isn’t a possibility. The TxM could of course have a direct, optionally encrypted, data diode connection from TxM to RxM for real-time management: the UI could then work mainly on the RxM screen.

Single hardware package for portability
Bringing the size down for portability is somewhat useful, but after all, users need to create their own implementations. Variance in end point devices broadens the required hardware attack surface: instead of shipping interdiction of compact, ready-made products, all global manufacturers would now have to be compromised. IMO the best way to go about implementing this is to provide free-as-in-RMS instructions on how to shrink the design. I encourage you to make your own and share it.

EMSEC
I feel the EMSEC part is something I should not take part in. First of all, it’s hard. Secondly, what I try to argue in the paper is that the leaks have incorrectly defined end point exploitation as targeted surveillance. A naive, surface-scratching dissection of the problem is available here: http://pastebin.com/55sGBPBt . EMSEC, however, should be considered targeted surveillance. The TFC project is harder to frame as a tool for the four horsemen when old-fashioned ‘flower-store van’ TEMPEST attacks remain an option. And as Soghoian said at some conference (held in Texas?), the idea is not to blind the intelligence establishment. Until INGSOC fills the sky with drones that illuminate mandatory retro-reflectors on any device in a groundopticon, I don’t feel the issue needs attention. I hope you disagree with me and fork the project, maybe even releasing 3D-printing schematics for a metal case that a compact hardware design fits into.

About AES
If Rijndael were NSA-designed, I’d be more worried. The algorithm was chosen through a public competition. So far we’ve only seen a backdoor in Dual_EC_DRBG, and it came with many warning signs. It’s possible in theory that the 9/11 crisis was not wasted and FIPS 197 was somehow affected by it (“We need to get this cipher through, otherwise the terrorists win. You want that? You want the responsibility?”). I personally don’t think this is the case with AES. As Snowden said, there are more effective ways to get past encryption, and it’s unlikely there’s a huge conspiracy in which the NSA knows better how to implement the cipher and covertly implements it correctly only on computers that store state secrets. But as said earlier, I don’t think there’s a need to take risks: a cascading cipher is an unarguably more secure choice.
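The cascade construction under discussion can be sketched in a few lines. This is purely a toy illustration of the principle: the “ciphers” below are throwaway SHA-256 counter-mode keystreams standing in for real primitives such as AES, Serpent or Salsa20, and all names are made up for the sketch, so nobody should use this for real traffic.

```python
# Toy sketch of cascading encryption: each layer uses an independent key,
# so the plaintext is recoverable only if every layer's key is known.
# The keystream is SHA-256 in counter mode, a placeholder, NOT a real cipher.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def cascade_encrypt(keys, plaintext: bytes) -> bytes:
    ct = plaintext
    for k in keys:              # apply each layer in order
        ct = keystream_xor(k, ct)
    return ct

def cascade_decrypt(keys, ciphertext: bytes) -> bytes:
    pt = ciphertext
    for k in reversed(keys):    # undo layers in reverse order
        pt = keystream_xor(k, pt)
    return pt

keys = [b"layer1-key", b"layer2-key", b"layer3-key"]
msg = b"attack at dawn"
ct = cascade_encrypt(keys, msg)
assert cascade_decrypt(keys, ct) == msg
```

Breaking one layer alone exposes nothing; an attacker needs all three keys, which is the whole point of the cascade.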

Thoth October 28, 2014 9:47 AM

@Markus Ottela

“An encrypted command would be relayed from TxM to RxM to select which conversation should be displayed.”

I think the TxM and RxM should still be kept fully separate, with neither allowed to talk to the other. Instead, a third data diode could feed the keyboard input from RxM and the display from TxM, with a single inward flow into a monitor screen (probably with a conversation-merging processor added just after the third data diode, before the monitor). That would be a cleaner way to prevent any outward flow of potential data leaks or cross-module interference.

Regarding EMSEC, I think it’s best to recommend that users, or those who want to dabble with TFC, do their own EMSEC and figure out a way to install their EMSEC measures themselves.

In regards to waiting for mass targeted surveillance to become the norm: by the time it happens, it will be too late. In my country at the beginning of this year, a blogger criticized the Government of Singapore for opaque budgets that were left unexplained. The “first shot” was fired when, within something like 30 minutes, alarms supposedly went off within the internal departments of the Singapore Government (according to some reports), and heavy-handed actions to silence the blogger via lawsuits (and possibly harsher treatment) followed. The good thing is he was not taken into the Internal Security Department, or that would have been the end of him.

The USA has the Constitution’s protection to a certain degree. That does not always prove useful, but at least there’s something written down. The UK (as Clive Robinson should be able to explain) has the RIPA act, which severely hinders free speech.

More countries are adopting aggressive, even war-like, approaches to handling citizens/civilians. I have been heavily advocating and attempting to raise awareness in the field of high-assurance security (not just crypto here and crypto there).

I would highly recommend that designs for high-assurance computing leave as few gaps uncovered as possible.

Regarding AES, the design is somewhat less robust than the Serpent cipher: the number of rounds, the circuit design of the cipher, the key scheduling … Serpent is a good cipher that could have won the AES competition thanks to its more conservative design, but somehow Rijndael won. The techniques used by Serpent are more stable (conservative, old methods that have been well studied), whereas Rijndael brought in newer techniques that have not yet withstood the test of time. The biclique attack manages to defeat all three key lengths of full Rijndael (128, 192, 256), but a biclique is close to a brute force over the whole keyspace and thus impractical for most people who seek to break full Rijndael. That does not mean the NSA might not have figured out something to make bicliques more effective. The Serpent cipher has not yet been broken publicly, although it is hard to guess whether the NSA might have figured something out there too. Due to its conservative measures, Serpent would probably be harder to crack.

Nick P October 28, 2014 7:15 PM

@ Thoth

The AES algorithm is fine. Clive beat me to the real problem: implementations with exploitable vulnerabilities or leaks via side channels. The NSA was counting on this being the case. Serpent is a good choice, though. One reason I promote an AES layer is because there’s plenty of hardware acceleration opportunities for it, even open cores.

@ Markus

“I’m not sure how to got about auditing TripleSec though. It’s going to need test vectors of each cipher and they’ve to be evaluated at points where ciphers are changed.”

You don’t have to use them. I was just pointing out that cascading encryption is best thing to do next to the OTP. Use whatever good implementations you can fine with Salsa20 and/or AES candidates.

re EMSEC

“Secondly, what I try to argue in the paper is, the leaks have incorrectly defined end point exploitation as targeted surveillance.”

That’s what EMSEC will be so if your goal is making it targeted then you’ve succeeded without EMSEC. Good tradeoff.

“Like Snowden said, there are more effective ways to get past encryption and it’s unlikely there’s a huge conspiracy of NSA knowing better how to implement the cipher and that they’re covertly implementing it only on computers that store state secrets.”

It’s not only likely: it’s official government policy. The government pushes one set of security standards and development processes for the public. Most of what they push is EAL4 or less (certified insecure). Then, their own COMSEC devices for real secrets go through a Type 1 certification process that focuses on design/configuration choices, RNG’s, implementation assurance from algorithm to TCB, side channel analysis, and EMSEC analysis. Only defense contractors can buy those and even then NSA controls the key via their EKMS system.

That’s they’ve consistently suppressed EMSEC & side channel information while exploiting those same attacks against enemies shows they’re doing exactly what Thoth worries about. That their crypto certifications seems to ignore these weaknesses for implementations, unless its their products, indicates they’re leaving them in on purpose. That they promote low assurance security tech for government and commercial further assures both TAO’s and enemies’ success.

In short: they’re really that bad if it involves subversion opportunities.

Note: IPsec vs HAIPE is a perfect example of them pushing garbage on us while keeping the good stuff secret. Compare those to see what I mean.

AlanS October 28, 2014 8:36 PM

Henry Farrell savages national-security liberals critical of Snowden and Greenwald: Big Brother’s Liberal Friends.

National-security liberals have enormous intellectual difficulties understanding the new politics of surveillance, because these politics are undermining the foundations of their worldview….The last thirteen years, then, have seen a quiet internationalization of the surveillance state….This vast expansion in international surveillance terrifies Snowden and Greenwald. Both acknowledge the inevitability (and, in Snowden’s case, the desirability) of some spying, especially on hostile states. Both, however, fear the implications of increased spying for civil liberties within democracies, as these democracies’ governments spy on their own citizens and on each other….LIBERALISM, IF it is to stay genuine and relevant, has no choice but to engage with Snowden and Greenwald. The problems they identify have sweeping implications for the balance between security and liberty….National-security liberals, in contrast, start from the belief that we owe it to the world to remake it in more liberal ways and that America is uniquely willing to further this project and capable of doing so by projecting state power. Snowden and Greenwald suggest that this project is not only doomed but also corrupt. The burgeoning of the surveillance state in the United States and its allies is leading not to the international spread of liberalism, but rather to its hollowing out in the core Western democracies. Accountability is escaping into a realm of secret decisions and shadowy forms of cross-national cooperation and connivance.

Bismark October 28, 2014 10:38 PM

@Sancho_P
@65535
Binfer uses AES 128-bit encryption (as per their website). It is cloudless. I actually monitored traffic with Wireshark and: 1. nothing is cleartext, and 2. it appears to do what it says, i.e. direct transfers of files and messages. You can tell from the IP address message exchanges, etc.
I have been using it for some time and I would recommend it for anyone who wishes for secure data exchange. It is a new tech, but hey, everything has to be born sometime. Caution: 1. for some reason instant messages are AES-encrypted by default, but for file transfers you have to turn encryption on in the settings. 2. people still fall for this, but download from their website directly and not from those claiming cracks, full versions, etc.

GoodLuckWithThat October 29, 2014 10:30 AM

Verizon Launches Tech News Site That Bans Stories On US Spying

http://yro.slashdot.org/story/14/10/29/1215203/verizon-launches-tech-news-site-that-bans-stories-on-us-spying

Apparently, this is in response to people talking about their work, i.e.:

http://webpolicy.org/2014/10/24/how-verizons-advertising-header-works/

While “silence your critics” is often the objective in minimizing bad press (or news about your bad actions), sponsoring a content-restricted news site doesn’t make it a news site; it is simply a public-relations channel or propaganda site. It should be an entertaining place to find stuff to debunk.

Benni October 29, 2014 10:31 AM

Russians hack the white house:
http://www.washingtonpost.com/world/national-security/hackers-breach-some-white-house-computers/2014/10/28/2ddf2fa0-5ef7-11e4-91f7-5d89b5e8c251_story.html

But no, the NSA was unable to detect that. They got a tip from a friendly foreign service:

“U.S. officials were alerted to the breach by an ally, sources said.”

Which service could that be? Well, the Brits are unlikely, since they do not monitor the Americans. But there is a friendly foreign service which has had an excellent spy network in Russia since the Cold War, and which also monitors the US government:

http://www.spiegel.de/politik/deutschland/bnd-soll-us-aussenministerin-clinton-abgehoert-haben-a-986412.html

I wonder if they read Obama’s email…

Thoth October 29, 2014 11:12 AM

@Nick P
RE: Spritz: It’s hilarious that you said…
I wouldn’t be surprised NSA and GCHQ poisoned the security market and that’s where everyone thinks CC EAL 4+ and above is “High Assurance”. In fact, I was browsing through a brochure on a datalink encryptor with EAL less than 5 and it claims to be a Government grade encryptor.

If that’s not enough, 0xhere’s something you might not be surprised but it may raise your eyebrows. Certain HSMs have a Secure Execution Engine (SEE) where users may load SEE codes into the secure environment of a HSM to execute code securely in the HSM’s secure boundaries. It all sounds good until the application of the license to activate the SEE feature. There is the SEE (EU+10 – EU countries, US, UK, Japan and a few toehr special countries) which basically means you are allowed full access to the SEE engine in the HSM for secure code execution and there is the SEE (Restricted) for Banking which means a restricted subset of the SEE functionalities can be used. Note that I mentioned the word Banking because according to some sources, application of license for the SEE functions for Foreign Governments are not allowed (weapons control) as it is regarded as a controlled item/feature to enable SEE activation. The rationality is if a foreign Government entity were to buy the HSM and uses the SEE for secure execution of codes, it would be disastrous or more troublesome (as reported by certain sources). In fact, sales of HSMs to Foreign Government entities are highly monitored affairs especially those with SEE features and also highly restrict6Five to obtain permissions from the origin country which makes/delivers the HSM.

I have met companies who tried to pitch sales to me, claiming their FIPS 140-2 Level 1 Software Security Module (and they have no concept of CC EAL) is secure….

And … some of those with CC EAL knowledge tell me that the EAL levels are very subjective and that a low EAL may not mean a product is insecure ….

I have seen security products with bloated codebases that glitch every minute and every second (exaggeration, but you know what I mean), and they are marketed as secure ….

Let’s put it this way, NSA/GCHQ have managedBF to poison the entire Security Market and I would propose a hard reset as the only option to recover from this fatal poison. One way is to open up as much open source hardware and software designs built from the base as a EAL 6+/7+ high assurance C1odebase design that does not glitch every minute or second.

Secure protocols created by non-crypto/non-security people who don’t know anything about side channels; hardware and software bloated with too much nonsense that glitches frequently; lots of hidden features … and worst of all, nasty marketing tactics (war/scare mongering).

The first step to remember is to redo security as a whole. The basic security modules that most people rely on are message encryptors, secure credential storage and secure key management; making these fully open source (hardware and software) for everyone to use without any legal issues is one of the first few steps to reset the security industry.

Clive Robinson October 29, 2014 11:32 AM

@ Benni,

Which service could that be? Well, the Brits are unlikely, since they do not monitor the Americans.

Err, that is not true: the UK has monitored traffic for the US for many, many years, in fact ever since the beginning of WWII if not before. They also monitor all traffic going through the UK to places like the Irish Republic, supposedly to catch the likes of overseas operators of the IRA or any other organisation regarded as a threat to UK national security.

Back in the Cold War and later, Britain spied on US citizens for the US and the US spied on UK citizens for the UK. This was so that politicos could stand up in their respective political houses and say, not just deniably but honestly, “We do not spy on our citizens”. As long-term blog readers will know, I’ve said this for many years, long predating any of the current whistleblowers. Various journalists have likewise known this for many years and published such information. One of the biggest leaks about this 5-Eyes behaviour was in New Zealand, where a journalist tracked down people and named them. He got the data by cross-compiling car registration plates and “posting notices”. He also discovered that the director of their “signals agency” had a resident NSA person in an office immediately adjacent, and this person went to most of the senior-level meetings…

What 9/11 did was “take the gloves off” the pretence, and all the 5-Eyes signals agencies started spying on their own citizens with the consent and full knowledge of the politicos. The worst offender in this respect appears to be not the US but Australia, with the UK running a close second.

Clive Robinson October 29, 2014 12:53 PM

OFF Topic :

More Fraud in EMV Chip-and-PIN Services

It appears that because EMV is so complicated to set up correctly, banks are getting it wrong and accepting modified replay attacks even when no valid EMV chip card is in existence:

http://krebsonsecurity.com/2014/10/replay-attacks-spoof-chip-card-charges/

I fully expect to see more failures around EMV-based chip cards, and you can bet your bottom dollar that either the banks or EMV will try to externalise the risk onto the customer if they cannot stick it on the merchant, and will try all sorts of tricks to stop customers claiming back what has been defrauded from them through bank or EMV negligence.

So check your statements carefully and make sure you know how to claim “correctly” before you have to.

Sancho_P October 29, 2014 7:14 PM

@ Bismark, 65535

Thanks, Binfer is an interesting idea and worth a second thought.
For the encryption, personally I’d prefer a program that simply transfers what I tell it to transfer. The reason is that “one for all” solutions tend to be monocultures, and monoculture is the biggest danger the universe has to offer.

Don’t know why Windows comes to my mind here, though.

Nick P October 29, 2014 11:12 PM

@ Clive Robinson

That’s hilarious. It reminds me of the trend where every spearfishing attack followed by a rootkit was an “Advanced Persistent Threat.” Needed special APT antivirus, network protections, etc. Lots of fear mongering. It was security companies before. Now, it’s security researchers that would’ve called them out on it in the past. (sigh)

Nick P October 29, 2014 11:31 PM

@ Thoth

Your post was great up till this point:

“The first step to remember is to redo security as a whole. The basic security modules that most people rely on are message encryptors, secure credential storage and secure key management; making these fully open source (hardware and software) for everyone to use without any legal issues is one of the first few steps to reset the security industry.”

The first requirement for anything you mentioned to work is system integrity. That’s the most basic requirement for a secure system: if integrity is easy to compromise, the other stuff is bypassed. That’s why the first requirement is an endpoint with assured security. I’ve got a solution now that stops code injection without a performance hit or source code. It requires a certain amount of money upfront. I’m working on getting it without losing ownership rights. When I do, desktop protection without software subversion will be easy, with certain choices made as far as software is concerned.

Wish me luck as I’ll need it.

Thoth October 29, 2014 11:55 PM

@Nick P
My point about security modules and all that was an abstract one, and you brought it to a more detailed form. The most basic thing for any computing system is trustworthy (integrity-preserving) computation, and I agree without a doubt that trustworthy computation in a high-assurance environment must be targeted first for the reset (that is what I meant, in my lazy manner). I would like to add one more item to change besides strong integrity computation modules: the human element, i.e. human attitude, which includes developing proper education and mindset. I feel very strongly about the need for proper education to allow one generation of security-minded high-assurance people to pass their knowledge and mindset on to successors, so that it persists for generations to come. Education is a huge thing that has been taken for granted these days.

If you need collaboration and ideas on high assurance items, you can post them here and we can all chip in.

name.withheld.for.obvious.reasons October 30, 2014 6:58 AM

An industry-sponsored cyber-security meeting included two panelists: Chambliss and Feinstein were empaneled at one of the sessions. Chambliss essentially stressed the immediate passing of the CISA (was CISPA) bill, using all the theatrics and hair-on-fire rhetoric. When concerns about reforms to the FAA (sections 702 and 215) were raised, Chambliss yawned, dismissed any importance and suggested that the sunsetting of these provisions will fix whatever ails the current illegal law(s).

1.) Chambliss has no concern for privacy and the U.S. Constitution and sees no urgency.
2.) Chambliss does bark for his crony MIC/IC masters, and any chance to continue to legislate stupidity or crypto-fascism cannot be passed up.

Markus Ottela October 30, 2014 7:58 AM

@ Thoth

TxM already has to talk to NH, and NH has to talk to RxM. There’s no harm in allowing unidirectional communication from ‘top to bottom’. I’m having a hard time drawing a schematic based on your description. Would you be so kind as to make a quick draft of what the ideal design would look like?

“In regards to waiting for mass targeted surveillance to come into the norm, by that time it happens, it would be too late.”
I too fear that. People tend to give in when the issue is technically too complicated to handle. I suppose the blogger wasn’t being targeted over private communication, but rather over publishing: something out of the project’s scope.

On AES256
Over 14 years, the entire academic world has reduced the complexity by 1.6 bits. The NSA would have had to reduce it by 166 bits to make exhaustive search even remotely feasible.
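For a rough sanity check of those numbers (assuming the published biclique figure of about 2^254.4 for full AES-256, and a deliberately over-generous budget of 2^60 key trials per second):

```python
import math

# Best public attack (biclique) takes AES-256 from 2^256 to about 2^254.4:
reduction_bits = 256 - 254.4
print(f"public reduction: {reduction_bits:.1f} bits")   # 1.6 bits

# Even at an absurd 2^60 key trials per second, a 2^90 search (roughly what
# a 166-bit reduction would leave) is only borderline feasible:
trials_per_year = 2**60 * 60 * 60 * 24 * 365
print(f"2^90 keys:    ~{2**90 / trials_per_year:.0f} years")
print(f"2^254.4 keys: ~10^{math.log10(2**254.4 / trials_per_year):.0f} years")
```

At that rate the 2^90 search takes decades, while the post-biclique keyspace still needs on the order of 10^51 years, which is the security margin being argued for.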

The question is what types of attack are possible against the implementation in a semantic-security game where the adversary, in order to create an existential forgery, gets to submit n ciphertext-tag pairs to a challenger who never replies (the case of RxM), or where the attacker doesn’t get to ask any questions and has to deal with outgoing data (TxM). AFAIK with TFC it’s a ciphertext-only attack until the attacker mounts a close-proximity side-channel attack, and at that point the cipher isn’t going to protect the data anyway.

@ Nick P

Salsa20 did well in eSTREAM and appears to be robust. I’m thinking AES-Threefish-Salsa20, but again I’ll have to look for libraries first.

I wanted to hear your thoughts on an idea I had. Symmetric ciphers do well until the end-point device is compromised; at that point all past intercepted ciphertexts can be decrypted. By using manual typing or OCR, the derived secret value of a DHE could be transferred from RxM to TxM. That way, the cascading encryption could use an ephemeral session key that provides perfect forward secrecy. The security of the implementation wouldn’t rely solely on the discrete logarithm problem, as there are still two independent PSKs for the other two ciphers.

By pre-sharing an RSA public signing key, a minimum amount of work is required to maintain perfect forward secrecy.

By leaving out the signing key, an active MITM attack against both the online conversation and the hash verification is required every time, even after a physical compromise has taken place. This is slightly more inconvenient and only works if the participants don’t have to obfuscate the fact that they’re communicating, but want to keep the conversation private.

So cascading encryption adds security at the cipher level, and allows combining other features into the implementation as well.

I pushed the initial PoC of the idea to GitHub: https://github.com/maqp/DHddOCR

“It’s not only likely: it’s official government policy –”
That was very interesting and illustrative, thank you! On top of that, I learned the NSA has its own classified Suite A algorithms. They’re betting that in the long run the input of the entire academic community improves security less than obfuscation of the design does. Whether violating Kerckhoffs’s principle is there to hide design features that would point out attack vectors against Suite B ciphers, or whether the differences are merely at the implementation level, is hard to say.

Thoth October 30, 2014 8:47 AM

@Markus Ottela
Here is the abstract drawing:
http://imgur.com/gkYJEW0

Regarding the blogger, it’s not really within the scope of the project, but it just shows how modern surveillance catches things in real time.

Theoretically AES is still fine, but we never know. As Bruce always says, attacks get better, and we know very well that the NSA and GCHQ have huge funding; even the people in charge have no idea what’s going on behind closed doors. We also know that the NSA attempted to slip backdoored stuff into the market and attempted to coerce companies into compromising security. You are right that in practice it is still hard to break AES, but it is just a cautious and conservative move to have something in place in case it breaks (that’s why Nick P suggested chained ciphers being used together).

A better idea is to have a selector between a fast cipher suite (AES and other fast ciphers) and a paranoid cipher suite (for those who don’t trust NIST/NSA/GCHQ). Anything more complex than this simple cipher-suite selector might complicate matters, though.

Bismark October 30, 2014 9:37 AM

@Sancho_P
“For the encryption,……mind here, though.”
You surely sound very intelligent. I am of average intelligence and, sorry, I did not follow what you were trying to say in the context of the info I was trying to contribute.
Hey, I like Windows, OS X, Android and Linux. There are great minds behind them all, greater than mine for sure. I try to use the best app based on the information available to me, the one that makes sense for the job, so that I can finish the job and go out to smell the roses. I do not philosophize much, especially about software. Life is too short for hypothetical thinking. Cheers.

Nick P October 30, 2014 12:29 PM

@ name.withheld

Unsurprising there. Especially easy when they have so much privacy in their discussions. I wonder what Feinstein said and if it matches her public statements.

Nick P October 30, 2014 12:55 PM

@ Markus Ottela

re cascade

That construction will probably be fine. I used to run Salsa20 first to eliminate any patterns in the data before it hit the block ciphers, padding, etc. Cryptographers couldn’t agree on whether that benefited security. (shrugs)

re perfect forward secrecy

You can do that with symmetric ciphers that start with a preshared key, IIRC. The two parties exchange a nonce of maybe 256 bits. You cryptographically combine the shared key with the exchanged secret, then feed it into a CRNG (e.g. ISAAC). The CRNG produces the new shared secret and any session keys you need. The old shared secret is discarded. Rinse and repeat.
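A rough sketch of that ratchet, with HMAC-SHA256 standing in for the CRNG (ISAAC is named above, but any keyed PRF illustrates the construction) and all function names my own invention, not from any existing library:

```python
# Symmetric forward-secrecy ratchet sketch: combine the current shared secret
# with a freshly exchanged nonce, derive a new shared secret plus session
# keys, and discard the old secret. Illustrative only, not vetted crypto.
import hmac, hashlib, os

def ratchet_step(shared_secret: bytes, nonce: bytes, n_session_keys: int = 2):
    # Cryptographically combine the shared key with the exchanged nonce.
    seed = hmac.new(shared_secret, nonce, hashlib.sha256).digest()
    # Use the seed as a keyed PRF in counter mode to draw fresh material.
    def draw(i: int) -> bytes:
        return hmac.new(seed, bytes([i]), hashlib.sha256).digest()
    next_secret = draw(0)                       # replaces the old shared secret
    session_keys = [draw(1 + i) for i in range(n_session_keys)]
    return next_secret, session_keys

secret = os.urandom(32)        # stand-in for the preshared key
nonce = os.urandom(32)         # exchanged in the clear by both parties
secret, session_keys = ratchet_step(secret, nonce)
# The old secret is now discarded: compromising the current state does not
# reveal keys derived in earlier steps.
```

Both parties run the same step on the same nonce, so they stay in sync; each step makes the previous state unrecoverable, which is where the forward secrecy comes from.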

You can diversify or cascade the various algorithms for any of these steps to throw the attacker off. Just make sure the bitsizes fit for input and output. If not, have a real cryptographer tell you how to clip or expand them securely. Things like least vs most significant bits come to mind. The basic construction, though, doesn’t have this complexity.

re your new project

You’re starting to get into risky territory here. If it’s handtyped, it’s still high assurance as one can visually tell they’re typing a key. If it’s OCR, you’ve just introduced an extra machine channel and complex library. There’s been a few vulnerabilities in OCR. So, your design goes to medium assurance at that point as part is very strong and part is an unknown with typical COTS/FOSS risk. I’d have to know more about how you do the transfer of data itself, both physically and how that data gets to your code.

That you really want limited two-way communication means it might be time for you to learn about guards. A number of us here use them. All seem to be homebrew, so maybe we have trust issues with defense contractors. 😉 The simplest guard is a simple machine with a serial port on each end; the dumber the serial port, the better. The guard can read or write either. It moves, checks, or creates data based on its policy/rules. The better designs use a pipeline model, with each logical part of the system in an isolated address space to keep to the POLA concept.

Your guard will be relatively easy to build. It can use serial ports on each side, with a data diode optionally on the TxM side (I’m leaving that option out). The guard keeps checking for incoming data (status, key, etc.). The data comes in a very simple format that’s easy to parse without errors. I used to use Lisp notation: (newkey hfu4f34if43fuh4iuhugg). If it’s of varying length, put the length right after the label: (newkey 2048 hfu4f34if43fuh4iuhugg). This helps with checks to prevent memory/buffer attacks. Let’s assume we’re using OKL4, OC.L4, Genode, MINIX 3, etc. for the guard, to keep the TCB minimal and ease IPC. Here’s how your guard works:

  1. Dedicated process (and/or driver) aka IncomingApp is notified by RxM that information is incoming. Clears the buffer of fixed size, records the time, and tells RxM to start transfer.
  2. As it’s PIO not DMA, the guard’s processor controls the data movement. It can bring the data in or ignore it. IncomingApp pulls data from serial port into the buffer. Each time, it first checks to see if it has reached a time or memory limit. It continues to do this until the transfer is complete or a limit was reached. Either way, at the end it goes into an “Off” state and notifies the next component the buffer is ready. It remains in “Off” until pipeline is complete so RxM has no influence.
  3. Next component, InputValidator, has read access to that buffer and write access to another. InputValidator very carefully parses the message and converts it to an internal state. It might do a number of checks or safety protections at implementation level to prevent an attack.

3a. If it looks like a key, it re-encodes it with the same simple format. It writes it to the buffer as (newkey hef89fw8hf9ewh8fe9f) or whatever.

3b. If it received garbage, it writes (error “Received bad input”) to that buffer and might write the garbage to a different spot in memory where bad input is stored. The user or admin can look at that in a text editor later.

  4. OutgoingApp, another isolated process, moves data to the TxM serial port from the buffer InputValidator wrote to. It might receive an IPC telling it the data is ready, or there could be a memory location with 1/0 it constantly checks. If the latter, it resets that location after its own operation is complete.
  5. A TxM process receives this data and passes it on to your TFC app somehow.

Interestingly, the amount of code necessary to build this guard is smaller than your TxM TFC client, and it has the added safety of plaintext transmission. 😉

This basic design strategy is how guards work with the vast majority of protocols. The thing to remember is to ensure the data is in a format that’s easy to parse and catch problems with before it hits the guard. The guard can produce data of arbitrary complexity, as it’s trusted. It might even be a key part of the protocol (eg mail guards that handle GPG processing). The interface to the guard also must prevent either side attacking the guard or causing a loss of availability. Here’s a good example of an IPsec VPN built with guard-style architecture, although riskier hardware.
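As a toy illustration of the InputValidator step, here is a minimal Python sketch (made-up limits, and `parse_guard_message` is a hypothetical helper name) of parsing the length-prefixed Lisp-style format so that garbage is rejected before it can influence anything downstream:

```python
import re

MAX_KEY_LEN = 4096  # illustrative policy limit; guard rejects anything larger

def parse_guard_message(raw: str):
    # Parse a message like "(newkey 21 hfu4f34if43fuh4iuhugg)".
    # Returns (label, payload) or raises ValueError so the garbage
    # can be routed to the bad-input store instead.
    m = re.fullmatch(r"\((\w+) (\d+) ([A-Za-z0-9+/=]+)\)", raw.strip())
    if m is None:
        raise ValueError("Received bad input")
    label, length, payload = m.group(1), int(m.group(2)), m.group(3)
    if length > MAX_KEY_LEN:
        raise ValueError("Declared length exceeds policy limit")
    if len(payload) != length:
        raise ValueError("Declared length does not match payload")
    return label, payload
```

A rejected message would then be written out as (error “Received bad input”), per step 3b above.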

re NSA

If you’re interested in how they do it, they have occasionally released some code or information. They released the SCIP protocol specs their communications devices use, minus any Type 1 constructions. If parts of it seem weird or complex, it’s probably because it’s designed for every medium (incl radio or satellite). They also have said their FIREFLY Type 1 key exchange protocol is based on Photuris. So, cryptographers might want to look into assessing or improving Photuris, as NSA internally trusts it more than others for some reason.

@ Thoth, Wael, Clive, Mike

While looking for the Photuris link, I accidentally stumbled onto this guy’s page. I recognize the last name from a few papers in my collection. I had never seen his whole list. The guy’s done work as diverse as mine (if not more), and his stuff looks pretty awesome going from the titles. Figured you guys might find something worth reading in there.

Wael October 30, 2014 1:26 PM

@Nick P,

While looking for Photuris link, I accidentally stumbled onto this guy’s page.

Thanks for sharing. Quite a bit to go through, and judging by the titles, I believe I’ll find some interesting topics…

Sancho_P October 30, 2014 7:05 PM

@ Bismark

We are from different cultures; that may cause the misunderstanding.
IMO intelligence has to do with tackling new challenges (finding solutions).
Actually I’m not good at that, and I have problems following others, too 🙁
But my point has to do with the contrary, with learning from the past; call it experience or “common sense”.

Monoculture:
If all people in the world were Spanish – horrible! (I love the colorful world)
If all people in the world used Windows, and only Windows – OMG!
If all people in the world used the very same password – a big fail!

Now with Binfer:
If we all used Binfer, and Binfer automatically encrypted with one and the same algorithm,
we all could use the very same password as well, because the crooks would have to address only Binfer and its encryption algorithm / implementation.

I’d love to see encryption separated from transmission, at least optional.

So you love the roses and I love plumeria, that’s stimulating multiculti, a dire bread to the spooks 😉

Nick P October 30, 2014 7:28 PM

@ Sancho_P

There were some interesting ideas back when “agent-oriented computing” was the rage. I recall one project tried to make it where both sides had a bytecode interpreter, exchanged the protocol engine during session agreement, and then ran the protocol through that interpreter. The idea was two fold: a session’s protocol could be adapted to the needs of its clients in a very adaptable way; the devices using this strategy wouldn’t ever be stuck with an obsolete, hardcoded protocol. Looking at your post, I think a polymorphic crypto extension of this concept would be interesting.

Benni October 30, 2014 8:18 PM

New interview with Snowden:

Has some interesting points, for example:
http://www.thenation.com/article/186129/snowden-exile-exclusive-interview#

“Anyway, it’s not true that the authorities cannot access the content of the phone even if there is no back door. When I was at the NSA, we did this every single day, even on Sundays.”

So, how does NSA crack encrypted phones even if there is no backdoor? Snowden unfortunately does not say this…. I guess they have a larger stash of 0days than I imagined…..

Snowden also says on NSA sitting on google fibers:

“Companies did not know it. They said, “Well, we gave the NSA the front door; we gave you the PRISM program. You could get anything you wanted from our companies anyway—all you had to do was ask us and we’re gonna give it to you.” So the companies couldn’t have imagined that the intelligence communities would break in the back door, too—but they did, because they didn’t have to deal with the same legal process as when they went through the front door. When this was published by Barton Gellman in The Washington Post and the companies were exposed, Gellman printed a great anecdote: he showed two Google engineers a slide that showed how the NSA was doing this, and the engineers “exploded in profanity.””

Wael October 31, 2014 12:07 AM

@Benni,

So the companies couldn’t have imagined that the intelligence communities would break in the back door, too—but they did, because they didn’t have to deal with the same legal process as when they went through the front door

One of the non-technical implications of the subtle difference between a backdoor and a front door.

Thoth October 31, 2014 12:13 AM

@Nick P & secure hardware et. al.
I would still think the only trusted input is human input.

The best way to do this is to create an oblivious and simplistic key handling mechanism without serial, network, LEDs or any compromise-able channels possible.

Pair this with an open source hardware RNG feeding an internal CSPRNG (one might consider Bruce’s Fortuna PRNG with modifications by Adi Shamir’s team).

To securely generate a key, run the keygen command inside the isolated key handling machine and accept a password to wrap the generated key via strong cryptographic algorithms (includes polyciphering / chained ciphering techniques).

A trusted display will show the wrapped secret key bytes (black keys) and you copy them on a paper or a few pieces of paper.

For an asymmetric key, it will wrap the private key and let the public key stay plain, since the public key is generally not secret. Copy the wrapped private key onto a sheet of paper or a bunch of papers.

For a shared secret key (Secret Sharing Scheme over a Quorum – K/N or M/N scheme), each share of the secret key or private key would be wrapped with the same password or a different one, which participants in a key handling ceremony can key in for themselves, copying the wrapped key bytes onto their own papers.

All the keys would have a BLAKE2 crypto hash set to an 8-byte or 10-byte output for key integrity checking. The low number of output bytes for the hashed checksum is to make it easy for participants in a key ceremony to copy it down onto their paper. For the paranoid, the checksum could be made longer.
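Python’s hashlib exposes BLAKE2’s native short-digest support, so a truncated checksum like the one described above is nearly a one-liner (placeholder key bytes; `key_checksum` is a made-up name):

```python
import hashlib

def key_checksum(key_bytes: bytes, size: int = 8) -> str:
    # BLAKE2b supports short digests natively; 8 to 10 bytes is enough
    # for a copy-by-hand integrity check of a wrapped (black) key.
    return hashlib.blake2b(key_bytes, digest_size=size).hexdigest()

wrapped_key = b"\xaa" * 32              # placeholder wrapped key bytes
checksum = key_checksum(wrapped_key)    # 16 hex characters to copy onto paper
```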

In the instance of a keyloading ceremony for an organisation, or just a simple keyloading procedure for an individual, the user would input the wrapped key bytes from the papers they have and input the password protecting the key.

The above technique is a more manual variant of the keyloading the Thales HSM uses for its payment module for handling credit cards.

The Thales Keyloading Device (KD) uses specially formatted smartcards (can only be used in the payment module secure environment boundaries) because the formatted smartcards contain data blobs with ACLs that permits certain executing environments.

The smartcard is first brought to the originating HSM, which writes key transfer details to it in its own format. The smartcard is then inserted into the KD, where the operator acknowledges the loaded settings, so that the actual keymats can be wrapped with an operator secret key and loaded onto the smartcards. The wrapped key and CRC checksum on the smartcards are then read back via the KD, copied out by hand, and the copied bytes are transported by couriers. To import, the receiving end loads an empty formatted smartcard, keys the bytes from the courier-secured papers into its KD with the smartcard inserted, then slots the smartcard into the receiving HSM, where the operator key unwraps the loaded key bytes.

My above concept is a simpler format with the use of password protected keys without all the additional complexity. In the event keys need to be securely loaded into a crypto device like the TFC module, it should have a keyfill slot that only accepts a serial port (SafeNet Luna HSM uses a serial port with a keyloading device) connected to my above simplified form of KD to unwrap keys into a secure environment for keyloading activities.

Thoth October 31, 2014 2:11 AM

@Anura
And the way it’s used now is like an all-purpose shoot-everyone-down tool.

Fascinating…

Good intentions mostly turned evil I guess…

Bismark October 31, 2014 10:15 AM

@Sancho_P
Maybe you are right, something to do with cultures. Perhaps if we all spoke the same way, it would be easier to communicate clearly ;-). The internet works because of monoculture, right? Every connected device uses TCP, IP, HTTP, etc. AES is almost the de facto encryption standard. You and I are having this conversation because of a monoculture of technology.

  1. Any new tech has the potential to become a monoculture (so we should not use it out of that fear?). Whether it’s good or bad is very individual/cohort specific, and only time (usually decades) will tell.
  2. You are assuming that they use the same password. I found this on their site: http://www.binfer.com/solutions/tasks/secure-file-sharing “keys are stored on your computer and not ours. The keys are unique to each user and are not shared.” which kind of makes sense, is expected, and is quite possible with a desktop app (not so with cloud apps).

Again, for anyone concerned with security, I say don’t believe but prove it for yourself. Run Wireshark and watch the network traffic. Run from two different computers, send the same text message and file, and verify whether the encrypted bytes are the same. If they are, then they use the same password; if not, then they use different ones. It’s very easy to verify. Why speculate? I do this for all apps I use to ensure they are not sending out stuff without my knowledge :-)

“I’d love to see encryption separated from transmission, at least optional.” You are perhaps in that 2% of tech-savvy people who would understand what AES is, what software to use to encrypt/decrypt files, etc. What about the remaining 98% of lawyers, architects, photographers, etc. who simply want to send stuff in a safe way without much trouble? Besides, what is preventing you from encrypting files with tools of your choice anyway before sending them (I used PGP before this came along)? I personally think being disconnected from the cloud is a big first step. Things evolve.

Wael October 31, 2014 10:49 AM

@Bismark,

if they are then they use same password…

Not necessarily true. Better designs add non-static components such as salts, time stamps, counters, device specific parameters, and/or nonces in addition to a non-static, non-predictable session key, to prevent this sort of analysis.
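The effect Wael describes can be shown with a stdlib-only sketch (PBKDF2 with random salts as a stand-in for a real cipher’s salt/nonce handling): the same secret yields different bytes on the wire each run, so comparing captures from two machines proves nothing.

```python
import os
import hashlib

# Same password on both machines, but each run draws a fresh random salt.
password = b"same password on both machines"
salt_a, salt_b = os.urandom(16), os.urandom(16)

out_a = hashlib.pbkdf2_hmac("sha256", password, salt_a, 100_000)
out_b = hashlib.pbkdf2_hmac("sha256", password, salt_b, 100_000)

assert out_a != out_b  # identical secret, different bytes observed on the wire
```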

Nick P October 31, 2014 11:33 AM

@ Thoth

I broke your post down piece by piece. It seems you’re making an imitation of Thales’ approach that uses simpler components that are user-verifiable (to some degree). It looks like the Thales approach is a knockoff of NSA’s approach, so you might benefit from looking at NSA’s scheme directly.

My gripe with your version is it’s quite manual & you still must trust the chip anyway. If it’s too much work, they won’t use it. We saw this with PGP/GPG. TFC is interesting because most of the work is done in the setup. In actual use, it’s not all that difficult as the tech does most of the work. If we imitate NSA & Thales, we should try to do the same. Even for protecting the keys.

So, let’s start with my old banking appliance. You have a secure device that’s physically similar to an old electronic organizer with small form factor, LCD screen, full keyboard, and external connector*. The device can have an application for key management. The SOC has that stuff onboard. The user turns it on, it does trusted boot, and takes user’s password. Key management is done with a dedicated app. Backups can be made onto untrustworthy devices via encryption and signature. (Or daughtercards in a docking station.) Or you can do it onto paper. I’d say all secrets should be derived from a master secret & user password in a way that exporting one secret onto paper can recover anything else in event of device failure.

  • Clive suggested IR ports for lower ability for electrical attacks, lower EM leaks of transmission, low cost, ease of use, and elimination of connector wear & tear. Just plug an IR device into each thing you use. Seemed like a good idea but not sure how users would react.

In my old version, this device was intended to do digital signatures of financial transactions. The owner could look at the screen to verify what was being signed and authorize it, and the signature was transmitted to the bank via an untrusted Internet-connected machine. That’s one app. I also proposed building a secure file transfer tool similar to NSA key fill appliances, where (a) connected machines couldn’t corrupt the transfer device and (b) what was transferred could be validated. That would be another app on my secure coprocessor device. The device might also be used for time stamping, secure remote administration, signed software releases, key exchange protocols, and so on.

My original implementation would use a PowerPC SOC with onboard crypto/TRNG, a SKPP separation kernel, high speed IO with IOMMU, and an assured pipelines model where isolated processes pass highly mediated messages. More recently, I’d probably build something similar but with the SAFE or CHERI processors. One draft research proposal of mine essentially ports JX operating system to SAFE processor or EROS operating system to CHERI processor, while giving them an I/O coprocessor, onboard crypto, a ROM for trusted firmware, & flash for untrusted firmware/OS/apps. A tagged or capability secure SOC is ideal for running sensitive apps with POLA. Might need to modify them to make sure they overwrite any memory that contained keys.

In any case, its hardware, OS, crypto, and interfaces need to be secure. It needs to be easy to set up and use. It must be recoverable. It can also, like HSM’s and smartcards, be general purpose enough to increase ROI by supporting more applications. It can use untrusted hardware for storage, transmission, etc. so long as the security-critical part runs on it. Hardware implementation tradeoffs abound and ideally would be quite flexible. People worried about subversion can use emulation by decomposition onto diverse COTS parts (eg microcontrollers, TTL’s), others can do FPGA’s (esp antifuse), others can prototype on a cheap process node (eg micron-level via MOSIS), others S-ASIC’s (eg eASIC), and the grand finale is an ASIC SOC on a high end process node allowing ultra cheap & low power units (eg prepaid cellphone SOC’s).

Sancho_P October 31, 2014 6:38 PM

@ Nick P

“… I think a polymorphic crypto extension of this concept would be interesting.”

[Cough] As I wrote above, I often have trouble following, especially when “complexity” is increasing (oh, here’s that interesting word again!). At Thoth’s post, for instance, I have to give up.
So I’m not sure if I understood correctly, but if it’s similar to a rolling code, it would perfectly fit my favorite concept of cutting a message into chunks and sending them via different channels. Each chunk (with modified chunk sequence) differently coded, that would be fun!
Though the basic problems remain, e.g.:
How do you know that you aren’t talking to the adversary in the first place (or that he isn’t sitting on the recipient’s side)?

Sancho_P October 31, 2014 6:52 PM

@ Bismark

Indeed the Internet is a dangerous monoculture.

Sorry for “the same password”, that was kinda joke, a picture / exaggeration.
Binfer say encryption is 128 bit AES, but who knows what and how they really do it?
The crooks (TLA) may have been in already before Binfer was born.

I did not assume Binfer use the same pwd but it wouldn’t matter in case the crooks have a golden key or know about a “feature” (say how to copy / send your stuff together with your pwd also to their “collector”). When the spooks know it today then the world will know and use it tomorrow (not all are honorable like Ed Snowden).

“Again, for anyone concerned with security, I say don’t believe but prove it for yourself. Run Wireshark and watch the network traffic.”

Don’t waste your time watching Wireshark; you’d probably be the last to find out what’s going on.
It’s much easier and more promising to “speculate”, believe me.

I’m not against an easy solution for the 98%, including me.
However I oppose the sentiment of “trust our closed and unaudited solution” because it will probably delude the 98% into buying a bridge.
Your point about encrypting externally is clever, and I’d be less suspicious if they had mentioned that trick in the first place 😉

However, I second their resentment of cloud-based sharing, as I’ve lost quite some accounts (and the associated data) in the past.

Thoth October 31, 2014 10:02 PM

@Nick P
In regard to:

“I’d say all secrets should be derived from a master secret & user password in a way that exporting one secret onto paper can recover anything else in event of device failure.”

I will say that is dangerous. If I understand your meaning correctly, you want subsequent keys (raw keys) to be non-randomly generated from a master secret key (call it a module key), and if any of the raw keys is accidentally exposed somehow, the pattern for generating the other raw keys, and the possibility of partially or fully discovering the module key, may be worrying, although not proven yet.

I would say all the keys should be split into shares and protected. These can be loaded into MicroSD cards and later destroyed for ease (if too lazy to copy by hand). A quorum of MicroSD cards to reload the entire set of keys are required. If MicroSD is a concern, then a specially made IR or some form of light or optics based manner to transmit the data in a closed environment is required.

My model is based on the concept of a trusted and forgetful Key Handling Device (I don’t call it a Keyloading Device) because its purpose is to load, generate and provision keys. It is more like a personal, forgetful and portable HSM that makes the keys for you and helps you load your keys into any other secure modules requiring you to provision keys. It can be in a form factor as small as a USB device for portability, with additional features like light sensor communications.

The TFC has a transmission and a receiver module, and that means keyloading must be done twice. Over a Key Handling Device of sorts (regardless of whether it’s your version or mine), the idea is to make it “load once, run all”. If the TFC modules (RxM and TxM) were to unwrap keys and use them directly, they need a simple and easy way to do so, though.

Bismark October 31, 2014 10:50 PM

@Sancho_P
At this point I will depart from this conversation. It has reminded me of talks I had when I was 15 years old. No offence, but it seems to be taking a direction where you would soon be proposing that we wear aluminium foil hats, as the “crooks” may have already invented mind reading/control gadgets. And I get very nervous when someone tells me to “believe them” when they have no creds or authority. I believe in facts. I don’t live in paranoia. I do not speculate. Have a nice life.

Nick P November 1, 2014 12:09 AM

@ Thoth

“Use the new Squid Thread for continued discussions.”

There’s supposedly 250,000+ following the blog, with a percentage reading the comments for material like this. Best to keep the posts in one thread to help readers follow along in the conversation. You can use the “Last 100 comments,” a bookmark, or a live bookmark to keep track of updates. I typically just hit last 100 at least once a day with a Find for my name. Shows people responding to me or that they haven’t.

re design

“I will say that is dangerous. If I understand your meaning correctly, you want subsequent keys (raw keys) to be non-randomly generated from a master secret key (called a module key) and if any of the raw keys are accidentally exposed somehow, the pattern for generating the other raw keys and the possibility of partially or fully discovering the module key maybe worrying although not proven yet.”

It’s a standard practice in schemes that generate keys from a password. The schemes are good enough that attackers try to guess the password instead of break the crypto. That this is a full cryptographic key rather than a tiny password would make that a brute force attack on the key or breaking the cryptosystem. And the cryptosystem’s algorithms can be swapped for obfuscation. All in all, I don’t see them figuring out the master key by getting any of the keys it produces. That’s the whole point of hash functions and this time they might not know which are even used.

The design choice for a shared secret generating others was a practical one. Markus’s original design called for OTP’s. That meant he needed key material for every message along with impractical amounts of face-to-face meetings & key protection. My first improvement was for him to use AES or a polycipher with the key exchange done via OTP encryption of the key. That way, only 128-1,024 bits are used per session. This greatly extends the key. The next efficiency improvement was to allow people to exchange one thing (master secret), with efficient & quantum-proof crypto doing the rest. There are also recoverability concerns. So, that led to the recent construction.
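One conventional way to realize the derive-everything-from-one-secret idea is sketched below (not the TFC scheme itself; labels, secrets, and the PBKDF2/HMAC split are illustrative assumptions): PBKDF2 binds in the user password, and HMAC with a purpose label separates independent sub-keys.

```python
import hashlib
import hmac

def derive_subkey(master_secret: bytes, password: bytes, label: str) -> bytes:
    # Bind the user password into a working key first (PBKDF2), then
    # derive one independent sub-key per purpose label (HMAC).
    working = hashlib.pbkdf2_hmac("sha256", password, master_secret, 200_000)
    return hmac.new(working, label.encode(), hashlib.sha256).digest()

master = b"\x01" * 32                   # placeholder master secret
pw = b"user password"                   # placeholder password
signing_key = derive_subkey(master, pw, "signing")
storage_key = derive_subkey(master, pw, "storage")
assert signing_key != storage_key       # sub-keys are independent per label
```

Because derivation is deterministic, exporting the one master secret onto paper is enough to regenerate every sub-key after a device failure.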

If people want more trips or to risk public key, then they can use as few derived keys as they want. Tradeoff is theirs to make.

“I would say all the keys should be split into shares and protected. These can be loaded into MicroSD cards and later destroyed for ease (if too lazy to copy by hand). A quorum of MicroSD cards to reload the entire set of keys are required.”

Parts of your design confuse me, maybe your use case. I thought this was mostly two way communication with each side having key material of sorts. They could produce and backup their own keys onto paper. If a secret sharing scheme, they can get their shares. Unless you mean secret sharing, why would we split a key into several microSD cards if the security comes down to the single individual possessing them or device they’re put into? Might as well just put it on one as an attacker would be expecting several anyway or move straight to torturing the individual.

” It is more like a personal forgetful and portable HSM that makes the keys for you and help you load your keys into any other secure modules requiring you to provision keys. It can be a small form factor as small as a USB device for portability and additional features like light sensor communications.”

Your model solves some of the key management, RNG quality issues, and PFS requirements. I agree a dedicated device with easy interface would be a plus here. I’m going in a bit different direction because I think it’s got better tradeoffs. Yours would be quicker and cheaper to build, though.

Thoth November 1, 2014 3:35 AM

@Nick P
My concepts are more generalized, encapsulating TFC as well as other possible needs.

Regarding the Key Handling Device, it is an extension of a personal key handling issue most cryptosystems face. It is made abstract in purpose so it can be used in multiple scenarios for handling keys.

The reason for me to mention secret sharing and splitting keys is to allow the use of the scheme not just for individual communications but for a group of people on each side of the channel. This is just yet another extension of TFC to allow two or more groups of people to communicate safely. Another reason is that if one of the shares is lost, it is not as harmful as losing all in one shot, as long as the attackers do not know of the other shares. Of course the person may be tortured or killed, but still, if the shares are not known, the security is not broken. One of the uses is to hide shares of an unknown amount while crossing into dangerous terrain. Again, this is in anticipation of putting it into the context of keyloading and possession of keys via the mentioned personal Key Handling Device. Just yet another extension and idea on Key Management for TFC.

Most of the TFC documentation does not talk much about the critical part of any cryptosystem, which is Key Management, and that is what I am trying to supplement. If the user simply does the default stuff (generates a new key at program setup and exchanges it) and does not move the keys outside the TxM and RxM, it’s fair to say that would suffice in most scenarios.

The TFC design is a good direction, but it leaves much to be desired. Its tamper “proofing” (which in a more accurate industry sense is called tamper evidence) is insufficient and can be worked on in future iterations, if Markus et al. are willing to continue the development and consider the advice given by us and others. A tamper-proof device would be something 100% impossible to tamper with, and it does not exist (the same as an ideal cipher, which does not exist).

To put it briefly, my point is to introduce to TFC a generalized method of Key Management and a portability concept via a tamper-resistant Key Handling Device that can extend across multiple applications and allows the use of paper-based techniques to record and transfer black keys (encrypted keys) across hostile channels (including the use of secret sharing schemes, as shown in the previous post).

To further extend your work on password-based crypto when the Key Handling Device is not in use, the S/KEY algorithm (http://en.wikipedia.org/wiki/S/KEY) would be a good place to start for a password-based key permutation function to generate message cryptographic keys. To extend the S/KEY algorithm, I would suggest using DH KEX to generate a random S/KEY session nonce per session, or the initial session secret.
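The core of S/KEY is an iterated hash chain consumed in reverse; a minimal sketch (SHA-256 standing in for the MD4/MD5 of the original spec, toy seed material):

```python
import hashlib

def skey_chain(seed: bytes, n: int) -> list:
    # Hash the seed n times. One-time values are later revealed in
    # REVERSE order, so each revealed value hashes to the stored one.
    out = seed
    chain = []
    for _ in range(n):
        out = hashlib.sha256(out).digest()
        chain.append(out)
    return chain

chain = skey_chain(b"session nonce || password", 100)
# Verifier stores chain[-1]; the prover reveals chain[-2] next, and the
# verifier checks H(chain[-2]) == stored value, then stores chain[-2].
assert hashlib.sha256(chain[-2]).digest() == chain[-1]
```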

If the user is talking to multiple people, the single master secret with user password that you suggest can get more complicated, as you need a deterministic way of deriving the secrets for the other users.

On top of that, Markus’s work may actually be a modification of the original OTR implementation the Cypherpunks created, which Pidgin uses; switching to a password-and-master-secret scheme for multiple users would require a rework of the code libraries (not a negative thing), if it can be done so that the security of the OTR protocol is not in some way compromised.

To put it simply: Nick P’s is simple key management without an additional device, except knowing a master secret and a password; my version is via a Key Handling Device. Both are interesting approaches to Key Management, and others may chip in other ideas too.

Markus Ottela November 1, 2014 3:49 AM

@Thoth:

I don’t see why there should be an option for a fast cipher suite: whatever the final choice is, it has to be as secure as possible but fast enough not to cause a notable performance drop on the RPi. Beyond that, standard computers shouldn’t pose problems.

“The TFC has a transmission and a receiver module, and that means keyloading must be done twice. Over a Key Handling Device of sorts (regardless of whether it’s your version or mine), the idea is to make it ‘load once, run all’. If the TFC modules (RxM and TxM) were to unwrap keys and use them directly, they need a simple and easy way to do so, though.”

This is true in the case of OTP: you need to manually copy the key from TxM to RxM. For the upcoming version that uses cascading ciphers, an additional data diode takes less than 0.25 seconds to transmit all keys for a single contact. The system cannot have a key handling device because, again, you need a bidirectional channel to it, and using one device opens a path from low to high in the cascade (waterfall allegory).

Regarding OTP: generally, there’s nothing preventing users from writing a tool that, after key entropy has been evaluated, automatically copies the key to the external drives, and nothing prevents RxM from automatically scanning the directory of the memory device and acquiring keys from there, so it can be made much more painless. But I’d rather the user created such scripts, as each user has their own system configuration.

Markus Ottela November 1, 2014 4:23 AM

@ Nick P:
I’ve been giving a lot of thought to the implementation. Assuming the computing doesn’t become too slow, I’m thinking the cascading encryption would have four ciphers.

Cipher cascading:

Threefish’s design was said to be similar to Salsa20; Twofish, on the other hand, uses the classic Feistel network. Rijndael and Serpent both use a substitution-permutation network, but as Clive and Thoth discussed, and as Schneier et al. argued in https://www.schneier.com/paper-twofish-final.pdf , Serpent has a higher safety factor than Rijndael. Keccak has yet another completely different structure. So it’s back to the drawing board from the choice of AES: Salsa20, Twofish, Keccak, and Serpent appear to be a more robust cascade.

Perfect forward secrecy:

Suppose PFS is implemented by feeding the current key and a nonce to a CRNG. The TxM of Alice never learns whether the nonce and message made it to the RxM of Bob, so RxM can’t decrypt further messages. But I like the idea, so what I propose is: Tx.py keeps re-hashing the same key and utilizes a counter, the value of which is appended to transmitted messages. Rx.py looks at the difference in the value and hashes the key until its internal counter matches that of the message. If the message was authentic, the new key replaces the old one. (Rx.py could also ask the user if it has indeed missed 100 messages and should iterate the hash function 400 times.)
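The counter-based catch-up can be sketched as a simple hash ratchet (toy key material; SHA-256 standing in for the CRNG step, which is an assumption on my part):

```python
import hashlib

def ratchet(key: bytes, steps: int) -> bytes:
    # Advance the symmetric key by repeated hashing; old keys are
    # unrecoverable, which is what provides the forward secrecy.
    for _ in range(steps):
        key = hashlib.sha256(key).digest()
    return key

# Tx has advanced to counter 5; Rx last saw counter 2, and the incoming
# message carries counter 5, so Rx hashes forward by the difference.
tx_key = ratchet(b"initial shared key", 5)
rx_key = ratchet(b"initial shared key", 2)
rx_key = ratchet(rx_key, 5 - 2)
assert rx_key == tx_key  # both ends now hold the same session key
```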

This enables PFS, but it makes the keys deterministic. This means a single physical end-point exploitation leads to passive compromise of all future messages. To mitigate the risk, DHE should be used to create a key for the outermost layer, Serpent-GCM. Basically it’s the equivalent of generating a new private key for OTR.

About DHE:

For both parties, Tx.py generates the private key, which the receiving Rx.py then obtains. Both then need to transmit the private key from TxM to RxM and the public key from RxM to TxM.

TxM > RxM:
Since the private key is generated in real time, it should not be transmitted via NH to RxM: otherwise an encryption keypair would also need to be created prior to transmission, which is pointless. The fastest and most secure option is to use another data diode, directly from TxM to RxM.

RxM > TxM:
During transmission over the internet, the public key of DHE doesn’t have to be encrypted: while the Serpent key wouldn’t depend on the discrete logarithm problem, the attacker is already in control of the previous deterministic keys that would protect the key exchange.

I didn’t realize OCR had had vulnerabilities in it. I also noticed some implementation errors in the system: the OCR should transmit the DH public key, not the shared secret key; the SSK can be derived afterwards on TxM. This protects the system from being subverted by a compromised camera that has a hidden memory/transmitter.

The OCR would never be an automatic loopback: the preferable procedure would be to first print the DH public key, then scan it using a web camera connected (applying what you taught me) to a guard device, after quickly observing that the font looks normal. If the key reads fine, it would be forwarded to TxM. Huge thanks for taking the time to describe the guard technology in detail. I’m thinking this project isn’t the place to take risks after all.

The solution is, the user will have to manually type the DH public key into TxM, either using the base64 charset or as a PGP word list, along with a truncated hash of the DH public key to ensure no typos slipped in. As the string is long, it can be entered in multiple, separately verified segments, so the user doesn’t have to retype the entire value after an error.

Even I admit this is inconvenient. So it would only be used in the case user suspects end-point device compromise and wants to ensure the messages from that point on will require a targeted, active, human-driven, MITM against both network and hash verification channel.

Session key derivation:

The DHE shared secret (key) and the public DH value (salt) are fed to PBKDF2, which, using a high number of iterations, produces a new pair-wise key for GCM-mode Serpent. From this point onwards the key is iterated through the CRNG / hash function.
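A sketch of that derivation using Python’s standard library (the function name, iteration count, and SHA-256 as the PRF are my own illustrative assumptions, not TFC’s actual parameters):

```python
import hashlib

def derive_session_key(shared_secret, public_dh_value, iterations=100_000):
    # DHE shared secret as the password, public DH value as the salt;
    # output is a 256-bit pair-wise key for the outer cipher.
    return hashlib.pbkdf2_hmac('sha256', shared_secret,
                               public_dh_value, iterations, dklen=32)
```

Both endpoints compute the same shared secret and see the same public value, so they derive the same key without transmitting it.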

MITM detection:
There should also be another value that the TxM (Alice) and RxM (Bob) device pair calculates.
Initially, the hash of the Serpent key is used as the seed of sequence S. After pair-wise key k has been generated, the value of the sequence is updated using a hash function: S_i+1 = H(k || S_i). The new S is then sent to the recipient as an encrypted message. If a single MITM attack against DHE has occurred in the past, TxM and RxM will have different values, and the RxM of both participants will alert at the same time that they’ve been under attack. This is not bullet-proof, but at least it requires an additional physical compromise before the next session that isn’t MITM-attacked takes place.
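A sketch of the sequence update, assuming SHA-256 as H (names are illustrative). A MITM’d key exchange gives the two endpoints different pair-wise keys, so their sequence values diverge and stay diverged:

```python
import hashlib

def update_sequence(pairwise_key, s_i):
    # S_{i+1} = H(k || S_i): both devices update after each key generation.
    return hashlib.sha256(pairwise_key + s_i).digest()

seed  = hashlib.sha256(b'serpent key').digest()   # initial S
alice = update_sequence(b'honest key', seed)
bob   = update_sequence(b'honest key', seed)      # same key: values match
mitm  = update_sequence(b'mitm key', seed)        # different key: alarm
```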

Why not add DHE to exchange all keys
I don’t want to create a weak link out of the discrete logarithm problem: users are generally safer when the implementation doesn’t appear to conveniently solve the entire problem. When users feel they are operating at a lower security level, they are more prone to arrange a meeting and exchange new PSKs for all ciphers. DHE is an extra level of assurance that needs extra work from users, and it mainly serves as a backup for the standard high-assurance physical key exchange.

DHE value authentication
The hash verification places trust in the user’s voice not being tampered with. This poses some problems, but a slight improvement to security would be to additionally salt the hash of the DH public key with secret information, similar to the socialist millionaires’ protocol. By comparing H(‘shared secret’ || DHE_public_key), Alice and Bob can ensure that the attacker needs a sophisticated AI or a human to attack the ‘shared secret’ aspect.
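A sketch of the salted fingerprint, assuming SHA-256 and a truncation length chosen only for readability aloud (both are my assumptions):

```python
import hashlib

def salted_fingerprint(shared_secret, dhe_public_key):
    # H(shared_secret || DHE_public_key), truncated for verbal comparison.
    # A MITM who doesn't know the shared secret can't compute a matching
    # fingerprint for a substituted public key.
    return hashlib.sha256(shared_secret + dhe_public_key).hexdigest()[:16]
```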

phew

“That meant he needed key material for every message along with impractical amounts of face-to-face meetings & key protection.”

I have to disagree with the impracticality: in 6 hours you’re able to generate a 170 MB keyfile that gives you 400 000 messages. That’s roughly every IM message I’ve sent over the period of a decade.
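As a quick sanity check of the arithmetic (figures taken from the comment above; the per-message key size is only implied, not stated):

```python
keyfile_bytes = 170 * 10**6      # 170 MB keyfile from the comment
messages      = 400_000          # message count from the comment
per_message   = keyfile_bytes // messages
print(per_message)               # 425 bytes of key material per message
```

About 425 bytes per message is plausible for fixed-length padded IM packets plus per-message MAC keys.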

“key exchange done via OTP encryption of the key”
I don’t see how this benefits the encryption protocol: why not just use the pre-shared OTP encryption key as the pre-master secret? Using the XOR of a public value and an attacker-compromised OTP key as the pre-master instead doesn’t add security. I feel like I’m missing something here.

“That way, only 128-1,024 bits are used per session.”
I’d rather swap the key on a per-message basis than a per-session basis, but maybe the two can be combined: a sessionID that can exhaust and tells Rx.py which key to load, plus a separate msgID that tells how many times the key has been iterated through the hash function.

Markus Ottela November 1, 2014 4:40 AM

@Thoth:
“Most of TFC documentations do not talk much about the critical part of any cryptosystem which is Key Management”

The user should store the entire TFC application, along with all configuration files, on a full-disk-encrypted drive, use live CDs to operate TxM and RxM, and preferably use Tails on the NH. For key transmission I would argue a steganographic partition on a cascading, hidden TrueCrypt volume would be the most secure option. Additionally, the user should not use keys if there is a chance the content of the memory device was dumped, for example by authorities at an airport. An IronKey might or might not provide security; a microSD card might or might not be possible to ‘smuggle’ through.

“[TFC]’s tamper ‘proofing’ which in a more accurate sense in the industry is called tamper evident is insufficient.”

You’re absolutely correct, it should say ‘tamper evident’.

I’ll have to read the paper through, see if there are things that might create an illusion of security where there is none, and add appropriate warnings. Please keep in mind physical security has been out of reach for the most part: when mass surveillance steps over your doorstep, it’s probably time for other actions.

Thoth November 1, 2014 5:37 AM

@Markus Ottela
In regards to OTP with XOR, I would like to caution that if the attacker knows the plaintext (like a header or some static or predictable data), he can XOR the ciphertext with the plaintext to derive parts of the key or even the entire key. Use the OTP key over a proper stream cipher for added security.
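A toy illustration of Thoth’s point (the key and header bytes are made up; note that with a true OTP the leaked key stretch is never reused, so this matters chiefly when keys repeat or when the attacker wants to forge, hence the later discussion of authentication):

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key        = bytes.fromhex('1337c0ffee429901abcd')
plaintext  = b'MSG:hello!'                 # 'MSG:' header is predictable
ciphertext = xor(plaintext, key)

# XORing ciphertext with the known header leaks that stretch of key:
leaked = xor(ciphertext[:4], b'MSG:')
```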

A good lecture on stream ciphers and OTP mechanics: https://www.youtube.com/watch?v=sRUsl0aqDDY

You might want to study the protocol of TextSecure which uses per-message DHE keys and also a cryptanalysis and improvement suggestion for TextSecure (http://eprint.iacr.org/2014/904.pdf).

There are many ways to derive OTP keys.

1.) One way is to derive a huge chunk of them and try to sneak them across to each other through hostile environments in encrypted form as you mentioned.

2.) Another way is to simply exchange and verify signing public keys face-to-face and then use a randomized DHE based per-message protocol like TextSecure to derive OTP keys on the fly.

3.) Another way is to communicate with DHE-OTP generated on the fly and then over that channel dump a load of like 1000 OTP keys and then default back to that dump of 1000 OTP keys and once both sides are close to 100 keys remaining, start generating and dumping more keys over the network.

This way the chance of being intercepted during the face-to-face meeting and being coerced into decrypting is reduced. If you are simply just comparing DSA signing keys then there is nothing much to lose since it’s public keys that can be shown in the open.

Markus Ottela November 1, 2014 6:43 AM

@Thoth:

In TFC an OTP key is used exactly once. It goes so far with this that it even features a blacklist that stores hashes of keys and verifies that a key is never, ever reused.

If you’re referring to “–I don’t see how this benefits the encryption protocol:–”

My point was: suppose you have pre-shared OTP key k1 with someone. Now, it is pointless to generate session key k2 and send it to the recipient as c1 (c1 = k1 XOR k2). You might as well use k1 instead of c1 as the session key.

The only way to generate an OTP key is with a HWRNG that uses an unpredictable process, has negligible bias, and has no memory effect. Mr. Vazzana’s device comes pretty close to that but needs improving. Until you have QKD, you need to sneak the OTP keys to your contact.

Effectively, what you have is a stream cipher with a deterministic, PRNG-derived keystream. Even though any stretch of that key is only used once, the stream is not unpredictable, so I don’t think it should be called OTP.

If you’re using DHE to exchange a session key, then what you have is two “weak links”: a computationally breakable key exchange protecting a computationally breakable pseudo-random keystream for a stream cipher. Breaking DHE might be infeasible at the moment, but again, cryptanalysis only gets better.

Even if the payload were a truly random OTP keyfile, it would be pointless to share it under any encryption:
An OTP keyfile has exactly 8 bits / byte of entropy, so you can’t compress it, and vice versa: you can’t grow an OTP keyfile from a shorter key. This means you can’t retain perfect secrecy with DHE.

When DHE breaks, it doesn’t matter whether the attacker gets access to the OTP key or the plaintext. To the attacker, they’re effectively the same thing: you get one by XORing the other with the public ciphertext.

The principle behind TFC is that keys are overwritten immediately after they’re used. There is no way the user could decrypt messages later, no matter what. Keys that are exchanged during a meeting have yet to decrypt any data, so all the adversary can do is attempt to get a copy. This is a serious risk, so users should agree on a code word that lets the contact know OPSEC has failed and a new keyfile has to be exchanged.

Markus Ottela November 1, 2014 6:55 AM

To correct myself
It’s pointless to generate session key k2 and encrypt it with k1 to produce c1 (c1 = k1 XOR k2). When the recipient decrypts c1 with k1, he gets k2 (k2 = c1 XOR k1), which is, at best, an equal-length string with the same amount of entropy as k1. So you might as well use k1 instead of k2 as the session key.
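A worked example of the correction above, using random 16-byte keys (the sizes are arbitrary): the recovered k2 is just another uniformly random string of the same length as k1, so the extra exchange buys nothing.

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

k1 = secrets.token_bytes(16)   # pre-shared OTP key
k2 = secrets.token_bytes(16)   # proposed "session key"
c1 = xor(k1, k2)               # what would be sent over the wire

k2_recovered = xor(c1, k1)     # recipient's view: same entropy as k1 itself
```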

Thoth November 1, 2014 7:18 AM

@Markus Ottela
OTP has always been a stumbling block in cryptography, and most cryptographers would not go near it. They would rather use stream ciphers than OTP, because even if a stream cipher is somewhat more predictable, it would still be more secure in practice, as OTP can easily go wrong (not just the random generation, but also because the security rests on a simple bitwise XOR).

The only advice I can give for the implementation of OTP is to make your message format unpredictable to the adversary. This will give you a much larger security margin if OTP continues to be the main part of TFC. One example is random bytes sitting at random intervals.

Markus Ottela November 1, 2014 10:18 AM

@Thoth:
A stream cipher has less “security through obesity”: its small key can be leaked faster. It’s also computationally harder to calculate, and infinitely faster to break (OTP can’t be broken at all).

OTP is exactly as secure as the HWRNG that is used. Similarly, all a Vernam stream cipher does is XOR the keystream with the plaintext.

What a stream cipher can do is offer a smaller key that can be used multiple times. Storage space has grown so much that it is no longer an issue to transport keys for years to come.

Both kinds of keys are too complex to memorize and need to be stored on a drive in encrypted form. Similar protections can be applied to both, with equal risk of the adversary copying the ciphertext and attacking the symmetric cipher protecting the data at rest.

“OTP can easily go wrong.”
An incorrect implementation in source code would be devastating for any cipher. I’d rather you audit the code (1 LoC for OTP) and tell me how easy that was, than worry. Then turn on debugging in the configuration section and see for yourself how the program displays a hexdump of the overwritten offset, etc. Let genKey run statistical analysis on the generated keys with Dieharder, etc.

What usually goes wrong with OTP is the lack of an authentication mechanism, as the ciphertext is malleable under known-plaintext attacks. One recent project to initially fail at that was NSAAway. TFC does a one-time MAC and doesn’t have that problem.

With secure keys, making the message format unpredictable doesn’t add one bit to security. The attacker needs the key to be able to tell what the message says; otherwise the probability of a ciphertext decrypting to some message is the same as the a priori probability of that message. Adding single bytes here and there would not matter, since written language has so much redundancy: “‘atta£ck at d¤awn’ || padding” doesn’t appear more secure.

Nick P November 1, 2014 12:39 PM

(Not a full reply here as my day is busy. Just quick comments on something that’s generating a lot of debate to help you two get past it.)

@ Thoth

Markus is correct about OTP vs stream cipher. The OTP is provably secure so long as it’s random and not reused.

Its main drawback was operational use. Bruce often said something like it moved burden from message exchange to key material exchange. Impractical. Plus, people often reused keys when they shouldn’t. Today, one can store a ridiculous amount of key material on a small device. Exchanging a small device in one face-to-face meeting can cover years of tiny exchanges, months of larger ones, and probably a lot less multimedia. TFC also takes steps to reduce odds of key reuse. And he took my suggestions on extra protection of keys in storage. Altogether, the first OTP chat I’ve ever recommended as designed well.

I just said he might need a more convenient option for increased adoption even if lower-than-OTP assurance on crypto. Also, I began to look at expanding his TFC to transfer more than text messages.

@ Markus

You’re right that per-message or per-session encryption is probably OK for plaintext IM. (Although American teens are trying to push the 400,000-message-a-year limit…) I previously (IIRC) pointed out people will want an IM client to be able to exchange files. This starts eating through key material. The use of a stream cipher (or polycipher) solves that problem while maintaining strong security. The use of OTP to start the session or encrypt the symmetric key was a compromise between us. I then refined that scheme with a non-OTP implementation with a master key that can be expanded and convoluted for future messages with PFS potential. The benefit of this is that the pre-shared secret can be put on a tiny piece of paper and is easily typed by hand. This scheme also works with arbitrarily large data sets with anywhere from one physical meeting (master key exchanged) to several (one or more OTP exchanges).

Until we got to loops & guards, I was also trying to stick with your implied requirement of using simple, local hardware that someone can scavenge to avoid subversion. Each of my schemes, minus advanced stuff I discussed with Thoth, can be built on whatever devices are available locally. Even microcontrollers or pre-1995 PC’s with minimal symmetric constructions.

So, that’s where I was coming from on that. Hope that alleviates the confusion.
