Friday Squid Blogging: Dried Squid Sold in Korean Baseball Stadiums

I’m not sure why this is news, except that it makes for a startling headline. (Is the New York Times now into clickbait?) It’s not as if people are throwing squid onto the field, as Detroit hockey fans do with octopus.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on November 7, 2014 at 4:11 PM · 143 Comments

Comments

Sabrina November 7, 2014 4:27 PM

The French DGSI (France's equivalent of the NSA) used vulnerabilities in Firefox 28 and earlier to hunt “anonymous hacktivists” on French-language Linux forums. They used the registration captcha to inject JavaScript code that sent all Firefox bookmarks back to them, which allowed them to follow forum members who had Anonymous-related bookmarks.

JK November 7, 2014 4:46 PM

White House moves to ‘kill off the password’
http://thehill.com/policy/cybersecurity/222057-white-house-official-we-simply-have-to-kill-off-the-password

Daniel did not give specifics on exactly which of the pilot programs — ranging from using a mobile device for identification to using a wearable ring or bracelet — will be rolled out. But they will be “widely available” once they are ready, he said.

Maybe this Mooltipass device is a better alternative?
https://www.indiegogo.com/projects/mooltipass-open-source-offline-password-keeper

Daaniel November 7, 2014 5:00 PM

@JK “We simply have to kill off the password,” he said. “It’s a terrible form of security.”

Of course it is a terrible form of security. It’s something I alone know and it is something that under the Constitution of the United States an American citizen can’t be forced to reveal. That makes it terrible to the government.

In the future, Americans won’t have a national ID card–that lacks ambition. They will have a national ID card/personal tracking device that they will wear around their wrist or attached to their belt.

What gets measured gets managed, and if the government can’t measure you, they can’t manage you. The password interferes with their ability to manage you, ergo it must die.

Sabrina November 7, 2014 5:05 PM

The French DGSI uses psychoanalysts to decode all the information it analyzes (mainly discourse). France is one of the rare countries that still uses psychoanalytic theory. They also use psychoanalysis for proactive attacks.

If this is of interest, I do have more info.

unhappyApples November 7, 2014 5:12 PM

@Daniel

See also

https://www.europol.europa.eu/content/global-action-against-dark-markets-tor-network

says “And we are not ‘just’ removing these services from the open Internet; this time we have also hit services on the Darknet using Tor where, for a long time, criminals have considered themselves beyond reach. We can now show that they are neither invisible nor untouchable.”

Note also the footnote says “Tor is used by a variety of people for both illicit and licit purposes, a fact that has also been acknowledged in the complaint against Ross William Ulbricht, accused of being the main administrator of the original Silk Road.”

Plus

Engadget has this “Details of how the service was pierced have not been revealed (we have an idea), but The Wall Street Journal quotes Eurojust spokesman Ulf Bergstrom saying “You’re not anonymous anymore when you’re using Tor.””

However, the WSJ article they reference at http://online.wsj.com/articles/illegal-websites-seized-by-eu-u-s-authorities-1415368411 is paywalled.

The Engadget link is http://www.engadget.com/2014/11/07/operation-onymous-tor-pierced/#continued

Casual Friday November 7, 2014 5:34 PM

@Pete

I think it would be possible, but it would have to be peer-to-peer, and there would have to be passive nodes that would not appear to be participating until other nodes go offline. The .onion address might also have to change, and an offline or out-of-band means of communicating the new info would have to be available. I don’t really think, though, that it is in the best interest of society to have services that can’t be taken down through the appropriate application of due process. Having botnets and sites dedicated primarily to illegal activity isn’t really a good thing, and it hurts the cause of giving people free and anonymous ways to communicate.

unhappyApples November 7, 2014 5:41 PM

@ Sabrina November 7, 2014 5:05 PM

Use of psychoanalysis implies targeting, yes? Otherwise the volume of intercepted communications would keep psychoanalysts busy beyond the lifetime of the known universe. How do the authorities decide what to discard?

Godel November 7, 2014 6:28 PM

@ st37

It’s been suggested that any Silk Road 3.0 is likely to be a government honeypot from the start. I think it would take a particularly brave and foolhardy soul to trust an SR 3.0 at the moment.

Anura November 7, 2014 6:50 PM

I wonder if we will see a move to i2p for these kinds of services.

Also, I’m really curious to see how they claim to have found them, because 400 independent investigations seems extremely unlikely. Either they exploited a flaw in Tor, or this was the result of widespread surveillance that didn’t target any specific site in particular. If they say they found them using 400 different methods, it probably means you can guarantee parallel construction.

Bob S. November 7, 2014 7:19 PM

Is TOR Kaput?

Reckon so.

Closing down one operator by simply entering gibberish into a captcha, although ridiculous, is still plausible to a judge who can’t turn his Windows ME PC on without tech support.

Closing down 400 sites and arresting dozens of folks who use TOR for high-publicity crimes… well, the captcha scam simply won’t explain that, will it?

There must be some fundamental flaw in TOR that’s been found but kept secret. I suppose if folks simply want to watch porno on TOR or even download bootleg movies that will be OK. However, it is now known the NSA has authority to use its military-strength cyber powers for the war on crime, and apparently it has found a way to win big time.

Posse Comitatus? Is that some old western movie or something?

Anura November 7, 2014 7:28 PM

I don’t know that Tor in general has a problem, but their hidden services might.

WARNING: The following is all conjecture on my part:

I’d imagine that even if you can’t tell who is who, you can tell who is running a hidden service just by analyzing the traffic. Machines that act only as nodes should have roughly the same amount of traffic going in as going out, users should have a higher ratio of downloads to uploads, and services should have a higher ratio of uploads to downloads. So if you want to discover a hidden service, you just need to analyze what kind of traffic a machine gets. It won’t tell you which service it is, but it will tell you it is a hidden service. From there, you can use other means to get into the system and figure out exactly what it’s running.
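Something like this toy classifier is what I have in mind; the byte counts, thresholds and node names below are made up purely for illustration, and the counters themselves would have to come from whatever monitoring the observer actually has:

    # Toy classifier for the up/down ratio heuristic described above.
    # Assumes you somehow already have total bytes in/out per node.

    def classify(bytes_in, bytes_out, tolerance=0.2):
        ratio = bytes_out / float(bytes_in or 1)
        if abs(ratio - 1.0) <= tolerance:
            return "relay-like"      # traffic in roughly equals traffic out
        if ratio < 1.0:
            return "client-like"     # downloads dominate
        return "service-like"        # uploads dominate: candidate hidden service

    observed = {                     # invented example counters (bytes in, bytes out)
        "node-A": (9800000, 10100000),
        "node-B": (52000000, 3000000),
        "node-C": (2500000, 48000000),
    }
    for name, (b_in, b_out) in observed.items():
        print(name, classify(b_in, b_out))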

Thoth November 7, 2014 8:15 PM

@JK, Mooltipass
From the description of the Mooltipass implementation, this is a low-assurance security device in my opinion. And before we even get into whether the Mooltipass would be secure enough for widespread use, it still has to raise US$60,000 with only 27 days left. I am not sure it will even survive the funding phase.

Mooltipass wants to be a hardware password-manager security device, but here is exactly where it would fail:
– Mooltipass mentions using a knife or other sharp blade to cut two slots into the casing to gain access to the Arduino pin connectors. After gaining access to the Arduino pin connectors, you can use them to program the Mooltipass. This step alone dooms the Mooltipass beyond salvaging. Imagine someone finds your Mooltipass, cuts two holes with a knife blade, plugs connector pins in, and compromises or reprograms the platform.

  • Mooltipass does not seem to have a wipe function on tamper detection, and it even allows interfacing with other devices. That means the secrets kept within the Mooltipass are already insecure.
  • Mooltipass is a multi-user environment. There is nothing wrong with being a multi-user environment in itself, but the above two ways of defeating the Mooltipass make multi-user environments seriously dangerous. How are they going to ensure that the device has not been tampered with, and that users requesting decryption of their own accounts do not somehow compromise others, with a compromised Mooltipass listening in on other users’ decryptions and sending their secrets (probably over the USB cable) somewhere else?
  • Unknown security of kernel.
  • A note that I have not actually looked at the source code.

If the Mooltipass did not have the above-mentioned vulnerabilities and implemented a secure kernel for its chip, it would probably be a good password security device allowing multiple user accounts (although I am doubtful you are going to share your own Mooltipass with friends and family).

So is it even a secure device? Highly unlikely. It would be pretty easy for HSAs to own the device with ease, considering how many security weaknesses it has.

If the Mooltipass wants to take the dangerous uphill road of being re-programmable or flexible enough to execute additional code, it will have to follow the Thales nCipher HSM style of separating the workspace into a native secure device workspace and a userspace. The userspace has its own execution environment, memory and RAM, and has to go through the same checks and bounds to access the secure device workspace as if accessing from another computer (via secure API calls), with proper authentication and authorization; this is called a hardware Secure Execution Engine (SEE). And just to reveal yet another obstacle: if the Mooltipass wants a SEE in its own system, it will likely be considered a controlled secure device, facing waves of politics and bureaucracy and having to comply with weapons and cryptographic export controls before shipping out or importing in, as most secure hardware containing a SEE does.

Justin November 7, 2014 8:35 PM

@Sabrina

As far as I can tell from Google, Ingrid Desjours is a psychologist who used to work with sex offenders in Belgium but now writes novels in French, and Eva Talineau practices clinical psychology in the Paris area.

I am curious as to why you mention these two individuals. It doesn’t seem out of the ordinary to me that a police or intelligence agency would employ psychologists.

The FBI and the CIA hire psychologists in the U.S., too.

A-fly-on-the-wall November 7, 2014 8:57 PM

“…and services should have a higher ratio of uploads to downloads” D’oh! Sometimes security holes are soooo simple!

If you have the server monitoring to do that (not so hard if you can monitor ISPs or wire-tap “home users”), then you can just send a series of requests to the hidden services targeted for take-down, and see which servers they arrive at. Much easier than the general traffic correlation problem, since you (the TLA) control your own endpoint already.
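Roughly this, in conjectural stub form; every name below is a stand-in, and poll_bytes_out() and send_burst() represent whatever wiretap-level counter and Tor-side fetcher the attacker actually has:

    # Conjectural sketch of the active confirmation idea above: hit the target
    # hidden service in timed bursts and see which monitored server's outbound
    # traffic jumps in the same windows. Everything here is invented for illustration.
    import time, random

    SUSPECT_SERVERS = ["host-a", "host-b", "host-c"]   # hypothetical candidate servers

    def poll_bytes_out(server):
        # Stub: in reality this would read an ISP/wiretap byte counter.
        return random.randint(0, 1000)

    def send_burst():
        # Stub: in reality this would fetch the target .onion URL many times via Tor.
        time.sleep(1)

    scores = {s: 0 for s in SUSPECT_SERVERS}
    for _ in range(10):
        time.sleep(random.uniform(1, 5))               # randomize burst timing
        before = {s: poll_bytes_out(s) for s in SUSPECT_SERVERS}
        send_burst()
        after = {s: poll_bytes_out(s) for s in SUSPECT_SERVERS}
        for s in SUSPECT_SERVERS:
            if after[s] - before[s] > 500:             # arbitrary spike threshold
                scores[s] += 1

    print(scores)  # a server that spikes on every burst is a strong candidate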

Last week a regular here enigmatically referenced a security hole in Tor hidden services large enough to drive a truck through – or some such. Wonder if this was it?

Nick P November 7, 2014 9:12 PM

@ Thoth

The easy route might be cheap connectors from smart cards to phones, computers, etc. Then, use software on the smart cards for protecting the secrets. They have a lot more tamper resistance and side channel protection than most devices. They’re cheap too.

Buck November 7, 2014 9:26 PM

@Daniel

Nothin’ like a great feel-good story of yet another victory in the ‘war’ on drugs; especially when it also serves as a distraction from some simultaneous failures in the same…
FBI agent in misconduct case may have tampered with drugs, guns, documents say (November 5, 2014)

Federal prosecutors said Wednesday they will dismiss indictments against 28 defendants in District drug cases amid an investigation of an FBI agent accused of tampering with evidence, including narcotics and guns, according to newly unsealed court documents.

Fourteen of those defendants have already pleaded guilty and were serving sentences — one was a year into a 10-year term — and prosecutors said they can withdraw their guilty pleas and the charges would be dropped. A hearing is scheduled Friday in U.S. District Court for many of the defendants.

http://www.washingtonpost.com/local/crime/fbi-agent-in-misconduct-case-may-have-tampered-with-drugs-guns-documents-say/2014/11/05/b77fd50e-6440-11e4-bb14-4cfea1e742d5_story.html

While I’m certainly convinced there’s plenty of overlap, I sure am quite curious as to any potential correlations of violent crimes relative to ‘online’ (mail-order) vs. ‘in-person’ (assault-possible) drug deals…

Thoth November 7, 2014 9:27 PM

@Nick P
Use a Yubikey. (https://www.yubico.com)

That should handle the 2FA part.

I did suggest that Yubico allow the YubiKey to store a small secure cache of critical passwords and act as a hardware password security device, besides being a tamper-resistant OTP device, but they do not seem to be interested (due to high costs, they claim).

For storing secrets on smartcards and so on, use the smartcard as an HSM to protect the black key sitting on the device. The smartcard decrypts the black key (encrypted secrets that have been secured with the smartcard’s key) into the red key, and it should be fine. The only question is how we can trust the smartcards and smartcard-based HSMs out there 🙂 . Of course, if the bottom line is not really HSA resistance but resistance to script kiddies and hacker wannabes, those should be fine. Once it hits HSA resistance (CC EAL 7+), then the story changes into a high-assurance design.
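A rough illustration of the black/red key idea, with Fernet from the Python cryptography package standing in for whatever wrap algorithm a real card uses, and the “card” reduced to a function holding the wrapping key; a sketch only, not how any particular smartcard works:

    # Sketch of black/red key handling: the password vault sits on the host only in
    # "black" (wrapped) form; the smartcard's job is played here by unwrap(),
    # which holds the wrapping key and hands back the "red" (usable) material.
    from cryptography.fernet import Fernet

    card_key = Fernet.generate_key()        # in reality this never leaves the card

    def wrap(red_material: bytes) -> bytes:
        return Fernet(card_key).encrypt(red_material)    # red -> black

    def unwrap(black_blob: bytes) -> bytes:
        return Fernet(card_key).decrypt(black_blob)      # black -> red, done "on card"

    black_vault = wrap(b"site1:correct horse battery staple")
    # Only the black blob is stored on the host; the red form exists briefly in RAM.
    print(unwrap(black_vault))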

sena kavote November 7, 2014 9:29 PM

Better acronyms re: TLA / HSA

I am ready to use HSA (high strength attacker) as a replacement acronym for TLA, “top black hat” and other terms. (TLA = three letter agency.)

But a little searching reveals that HSA is already taken. We should try to avoid name conflicts even with completely different contexts / “namespaces”.

We should try to follow a rule of Four Letter Minimum for all New Acronyms (FLMNA)

High strength level attacker (HSLA) might be better than HSA?

To give some inaccurate estimates, one extra letter in an acronym should reduce name conflicts to roughly 1/20.

3 letters can have 20 × 20 × 20 = 8,000 acronyms

4 letters: 20 × 20 × 20 × 20 = 160,000

roughly

Changing from TLA to HSLA or HSA is not the same as changing Alice and Bob to some Indian names. Firstly, Alice and Bob are not standard terms; they are just common names used in some crypto examples to indicate that we are now talking about crypto in theoretical terms, with no need to be confused about who they are. It makes things a bit faster to understand.

HSLA is better than changing from “chairman” to “chairperson”.

Getting HSA or HSLA to wide use would need at the very least Bruce or someone as prominent writing a blog post about it.

Until then better use one of these forms:

TLA/HSLA
TLA/ HSLA
TLA / HSLA

TLA/HSA
TLA/ HSA
TLA / HSA

TLA/HSLA / HSA
TLA/ HSLA / HSA
TLA / HSLA / HSA

Dud November 7, 2014 9:32 PM

Perhaps squid should be taken off the menu in South Korea, Philadelphia & Monterey Bay. There seems to be a (quote) “tsunami of thyroid cancer” in South Korea.

Quote: “Nowhere in the world is the rate of any cancer growing faster.”
http://enenews.com/nytimes-doctors-call-banning-thyroid-cancer-screening-tsunami-thyroid-cancer-stop-diagnosis-decrease-screening-need-actively-discourage-early-detection
It cannot be easy beating Japan in anything, at least on paper.

Quote: “A South Korean court for the first time has ruled in favor of a plaintiff claiming… thyroid cancer was caused by radiation from six nuclear power plants located [5 miles] from her house”

Heads up, Philadelphia. I would quote poetry, yet you likely hate limericks and the like now.

We seem to have manifold threats to our own respective security, liberty and health manifesting right now that are on a physical layer.

Nick P November 7, 2014 11:21 PM

@ Thoth

YubiKey does make a nice gadget. I left it off because it was only evaluated to FIPS 140-2 Level 2. Too many products on low FIPS levels get owned easily. The smartcard companies mostly start at EAL5+ with at least one at EAL6. If not a smartcard, then a YubiKey might be a nice fallback option.

A-fly-on-the-wall November 8, 2014 12:08 AM

Mitigations for detecting Tor hidden services by observing ratio of overall in/out byte counts:

  1. Run “dummy clients” through the same Tor client, generating random web browsing (see the sketch after this list). Enough to overwhelm the statistics. If you do enough to keep the link busy, it has the added advantage of making traffic correlation harder.

  2. Run the hidden service client through a legitimate high-volume Tor relay. (i.e. volunteer your server). Then, general Tor traffic becomes the cover traffic drowning out what belongs to the hidden service.

  3. Run a “legitimate” hidden service, and get it to be popular, as a cover for the clandestine one. Uh, what’s Facebookcorewwwi up to, anyway?
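A bare-bones sketch of mitigation 1, assuming a local Tor SOCKS proxy on 127.0.0.1:9050 and the Python requests library with SOCKS support installed; the site list and timing are placeholders:

    # Minimal dummy-client loop: fetch random pages through the local Tor SOCKS
    # proxy at random intervals to pad the link's traffic statistics.
    import random, time
    import requests

    PROXIES = {"http": "socks5h://127.0.0.1:9050",
               "https": "socks5h://127.0.0.1:9050"}
    COVER_SITES = [                      # placeholder list of innocuous sites
        "https://www.wikipedia.org/",
        "https://www.debian.org/",
        "https://www.gutenberg.org/",
    ]

    while True:
        url = random.choice(COVER_SITES)
        try:
            requests.get(url, proxies=PROXIES, timeout=60)
        except requests.RequestException:
            pass                          # it's cover traffic, so failures don't matter
        time.sleep(random.expovariate(1 / 30.0))   # ~30 s mean gap between fetches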

Thoth November 8, 2014 3:50 AM

@Nick P
YubiKeys are supposed to emit configured static passphrases to replace human typing, so that design requirement is the reason they cannot reach FIPS 140-2 Level 3 and above.

Yubico pushes out the U2F device, which is simply a one-touch smartcard asymmetric signing key. Not sure if that one can reach FIPS 140-2 Level 3 or even 4, since its role is now a signing device.

It is quite unsurprising that the ITSec market is so weak that HSAs have a fun time breaking these “security devices”. We have discussed the reasons why this is happening in other threads.

Hopefully someone will come up with a robust secret-management device that encompasses storage of secret keys and passwords plus secure transmission, as too many secrets are being stored in insecure forms and formats.

Thoth November 8, 2014 4:37 AM

@Nick P
Most of the ideas for custom built hardware I give here are based on the easy to access Raspberry Pi board which runs an ARM processor. It seems like seL4 may have a chance of booting onto a RPi and there are people trying to boot seL4 onto RPi. So far I have not heard any official success yet but who knows if someone actually got something running already.

seL4 has a generic ARM page: http://sel4.systems/Hardware/General/

I have a RPi myself but all I need is to figure out how to interpret the above link into actual steps and procedures to load the seL4 onto an empty SD card to boot the RPi into seL4 CC EAL7+ kernel.

If that succeeds, we might have open hardware (RPi) with a CC EAL 7+ kernel (seL4), and we could use a soldering iron to remove unnecessary hardware parts that might allow covert channel attacks (I need to figure out what to detach from the board). Generally the board is pretty much fine, because you have to make an actual effort to attach the Ethernet cable, audio jacks and so forth to it. If you seal the board into a properly made external casing and use the GPIO pins to implement a wiping function, with pressure switches to detect tampering with the casing, it should clear quite a few CC EAL levels. Adding a Faraday-cage style mesh for EMSEC protection might even put it at FIPS 140-2 Level 4. Also mounting some form of temperature, movement and humidity detection further increases its chances of reaching a very high assurance rating and FIPS Level 4.
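A rough sketch of the GPIO wipe idea, assuming the standard RPi.GPIO library, a normally-closed case switch on an arbitrarily chosen pin, and a hypothetical key file location. (Note that overwriting is not reliable on wear-levelled SD cards, so a real design would keep keys in RAM or a tamper-reactive part.)

    # Tamper-wipe sketch: a normally-closed pressure switch holds GPIO 17 low while
    # the case is shut; opening the case lets the pull-up drive the pin high and
    # triggers a wipe. Pin number, key file path and the poweroff step are all
    # illustrative choices.
    import os, time
    import RPi.GPIO as GPIO

    TAMPER_PIN = 17
    KEY_FILE = "/secure/master.key"

    def wipe(channel):
        try:
            size = os.path.getsize(KEY_FILE)
            with open(KEY_FILE, "r+b") as f:      # overwrite before unlinking
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
            os.remove(KEY_FILE)
        finally:
            os.system("poweroff")                  # assumes the script runs as root

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TAMPER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(TAMPER_PIN, GPIO.RISING, callback=wipe, bouncetime=200)

    while True:
        time.sleep(1)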

It would be nice to see a general purpose board turn into a high assurance and high FIPS machine 😀 .

Any ideas ?

Scott "SFITCS" Ferguson November 8, 2014 5:01 AM

@Pete

Is it possible, in theory, to set up a website or a peer-to-peer system that cannot be taken down?

I believe so (very difficult to take down) – provided appropriate OpSec is employed, and that you are willing to deal with a different sort of “website” and a much smaller audience.

Freenet

NOTE: what journalists call “Darknet” is not particularly “dark”. More “gray” than Dark. Freenet is several shades darker, the invite only areas are closer to black.
I suspect it’s a simple spelling error – tor is more of “Dorknet” ;p

Kind regards

65535 November 8, 2014 6:43 AM

@ Anura

“Also, I’m really curious to see how they claim to have found them, because 400 independent investigations seems extremely unlikely. Either they exploited a flaw in Tor, or this was the result of widespread surveillance that didn’t target any specific…” -Anura

You are right. The “authorities” arrested 17 people but took down over 400 DNM sites. That would be about 23 sites per person, which is very odd. I would guess that some of the other 400 dark markets were honeypots run by the NSA/GCHQ. The other sites could have been genuine drug markets, or scam sites.

1] Given that all mail and presumably all packages delivered by USPS, UPS and FedEx are scanned and recorded at the US Post Office, the buyers on Silk Road 2.0 would be observable, and probably the vendors too [to some extent]. So the game is rigged by the Feds.

2] Given that the NSA/GCHQ and the other Five Eyes tap the backbone and pass the data around the world in raw form to circumvent US laws, it seems highly probable that the game is rigged in their favor.

Worse, it appears that GCHQ spies on lawyers’ conversations [and probably on lawyers in the USA].

“British spies have been granted the authority to secretly eavesdrop on legally privileged attorney-client communication”-The Intercept

https://firstlook.org/theintercept/2014/11/06/uk-surveillance-of-lawyers-journalists-gchq/

3] Other observations: Some people have suggested that Benthall was just a patsy for the Feds because he was so easily fingered and identified, but that is speculation. [Court documents indicate Benthall was ID’d by a beta version of Chrome 35.0.1910.3 on an outdated Apple OS X 10.9.0, a crude fingerprinting method, which linked Benthall to his Gmail, Twitter (with pictures), LinkedIn, and Bitcoin exchanger accounts.]

I note that Google, the Bitcoin exchanger(s) and others gave information about Benthall to the Feds, legally or illegally. Then it appears that Benthall was under physical surveillance at various locations, such as a hotel and his parents’ house, which linked him to his Apple computer and his outdated OS with the Chrome beta.

“…During the investigation, the [Agent ?] has had access to the customer support interface for Silk Road 2.0, where administrators may log on… Through the access to the support interface, the has been able to observe the operating system and the web browser used by any administrator when accessing the support interface. On or about April 6, 2014, the observed that Defcon was logged into the support interface, and observed Defcon, to be using the Google Chrome web browser, version 35.0.1910 3 and a computer running the Apple OS operating system, version 10.9.0, at the time. Defcon is the only administrator whom the has observed log into the support interface with that browser and operating system combination….Physical surveillance of BLAKE BENTHALL, a/k/a Defcon, the defendant, conducted in conjunction with online surveillance of Defcon on Silk Road 2.0 by the HSI-UC, further demonstrates that they are one and the same. Specifically, on September 10 and September 11, 2014, while BENTHALL was visiting relatives at their residence in Houston, Texas FBI agents conducted physical surveillance [ BENTHALL]…IP logs obtained from Google, Inc. (Google), the service provider for Benthall Email Account1, indicate that, on or about May 30, 2014, the user of Benthall Email Account1 logged into that account from IP Address1 approximately 146 times [during the Fed’s imaging of SlkRd.2 server] noted above, IP Address1 was used on the same date to send support requests to the Provider concerning the Silk Road 2.0 Server, further demonstrating that the user of Benthall Email Account1 controlled and administered the Silk Road 2.0 Server…”

See complaint 70% down document:
https://s3.amazonaws.com/s3.documentcloud.org/documents/1354771/benthall-blake-complaint.txt

How the Feds got the exact location and IP address and imaged the Silk Road 2.0 server is murky, and it leads one to think that some state actor(s) were helping in the investigation. The complaint indicates that once the Silk Road 2.0 server was imaged, the Feds got the private key and unlocked various files, leading to more incriminating data.

@ Anura

“I’d imagine even if you can’t tell who is who, you can tell who is running a hidden service just by analyzing the traffic…”

I agree.

There is a discussion of the actual probability that malicious entry guard nodes could be used to de-anonymize users:

“We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

“Could you please clarify that:

“To de-anonymize a user the malicious source must get an entry guard as well as an another one as “exit node” to the hidden service. It that correct? Then: how is the probability to get entry AND ” exit” node (a relay “middle node” instead of entry or “exit” wouldn’t help) from this malicious source?

“On July 31st, 2014 Anonymous said:

“So if I got this right, 6.4% of nodes were rogue? So that means for each conection to TOR there was a 6.4% chance you’d connect to one of the rogues, and then if you were accessing a HS, there was also a 6.4% chance the HSDir you queried was also rogue. So there’s roughly a 0.4% chance that connection is affected. BUT, if you did this 100 times over the affected period, there would be roughly a 1 in 3 chance it occured. Anyone care to chek my math?

“On August 3rd, 2014 Anonymous said:

Yes, math ok. If first calculation comes from 6.4% times 6.4%, the result is 0.4096% or roughly 0.41% (Less than 1%). And your other number is ok wich comes from 99% secure raised to 100 times (meaning the probability of going clean all 100 times), which gives 36.6%, so yeah, roughly 1 out of three guys using the service 100 times will come out completely clean i.e., undetected… “

[It’s unclear whether the odds above cover the traffic confirmation attack, the Sybil attack, or both; what are the actual odds?]

https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack
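For what it’s worth, here is the first quoted commenter’s arithmetic spelled out, taking the 6.4% figure at face value and assuming the entry-guard and HSDir events are independent:

    # Reproduces the arithmetic in the quoted Tor blog comments (their assumptions,
    # not a claim about what actually happened): 6.4% rogue entry guards, 6.4% rogue
    # HSDirs, treated as independent, over 100 hidden-service connections.
    p_rogue = 0.064
    p_bad_connection = p_rogue * p_rogue            # ~0.0041, i.e. ~0.41% per connection
    p_all_clean = (1 - p_bad_connection) ** 100     # ~0.66: never hit a rogue pair in 100 uses
    p_at_least_one = 1 - p_all_clean                # ~0.34: roughly a 1-in-3 chance over 100 uses
    print(p_bad_connection, p_all_clean, p_at_least_one)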

[next]

Other people note that the timing of the Jan. 30 to July 4 attack on Tor by two “university researchers” tracks well with the timing of the Silk Road 2.0 investigation [and note that these two researchers were most likely funded by the federal government; who knows?].

[from comments at Krebsonsecurity]

“…Beginning on January 30th, and ending on July 4th, an unknown group was conducting what appears to be a large scale deanonymization attack on Tor. This was probably the attack by Alexander Volynkin and Michael McCord of CERT, which was the subject of their cancelled-by-CERT lawyers blackhat talk.”- Nicholas Weaver

http://krebsonsecurity.com/2014/11/feds-arrest-alleged-silk-road-2-admin-seize-servers/comment-page-1/#comment-322893

CallMeLateForSupper November 8, 2014 7:59 AM

@EvilKiru “Hasn’t the NYT always been into click-bait?”

Good question, and I have no idea because NYT certainly is into paywall.

Occasionally my vigilance against NYT links flags, and I click. The resulting beg screen is both a wake-up call and a “DOH!” moment.

C64 November 8, 2014 9:53 AM

@Anura, 65535

“Also, I’m really curious to see how they claim to have found them, because 400 independent investigations seems extremely unlikely. Either they exploited a flaw in Tor, or this was the result of widespread surveillance that didn’t target any specific…”

Considering that…
(ok admittedly I may be wrong with these points)
1. the servers accessible through Onion are physically connected to the regular standard internet
2. the last stretch between the server and the Onion goes through the regular standard internet (meaning that at some point those servers listen for incoming requests on the same regular standard internet as any other server)

…would it not be possible to map all darknet servers simply by mapping those that respond to a request on the [domain name].onion TLD?

Of course they would need to know the [domain name] part to do the check in the first place. But perhaps the bots of large information aggregators like Google already do stuff like that (since Google is into mapping the info in cyberspace anyways).

AlanS November 8, 2014 12:45 PM

@Dud

There is no epidemic of thyroid cancer (see Welch’s An Epidemic of Thyroid Cancer?). The page you link to selectively quotes articles that actually state the opposite. As with other cancers, there is an epidemic of fear-mongering, misinformation, hucksterism, and over-diagnosis. Surprise, surprise, screening for disease is beset by many of the same problems as screening for terrorists. Just say no to Pinkwashing, Movember and all the other crap that gets pushed down our throats about medical screening.

For reasoned thinking about the risks and trade-offs involved in medical screening see the work of Gilbert Welch, Steven Woloshin and Lisa Schwartz at the Dartmouth Institute for Health Policy and Clinical Practice. Here’s a recent NYT article by Woloshin and Schwartz: Endless Screenings Don’t Bring Everlasting Health. Also see their book: Overdiagnosed: Making People Sick in the Pursuit of Health.

Rick November 8, 2014 1:16 PM

And now for something completely different:

“For $25 a year, Google will keep a copy of any genome in the cloud.”

http://www.technologyreview.com/news/532266/google-wants-to-store-your-genome/

No mention of the word ‘security’ or ‘privacy’ in the article. InfoSec is like insurance: everyone wants it after the fact, no one wants to buy it, and when you need it, the consumer has forgotten where the paperwork is.

API info and pricing: https://cloud.google.com/genomics/

Also, no mention of how the NSA will partner with Google to monetize your DNA. I guess it’s assumed that you’ve already waived any claim once the data finds its way to the cloud. In addition to Orwell, Huxley deserves some praise on this forum for coining the terms ‘Alpha/Beta’ and ‘Gamma/Delta/Epsilon’. We’ll be seeing more of this.

On the positive side, there are some great potential medical benefits to researching such a pool of data. I fear the power being concentrated, though; I don’t trust a cabal to “do the right thing” with that power.

Nick P November 8, 2014 1:22 PM

@ Thoth

“we might have an open hardware (RPi) with a CC EAL 7+ kernel (seL4)”

Remember that Common Criteria EAL’s cover the whole lifecycle and evaluation. There are many components. The seL4 kernel isn’t EAL7+ as it lacks most of them. The EAL7 components it has are formal specification of high level design, formal specification of security policy, formal proof design embeds security policy, and a mapping from high level design to implementation. Where it exceeds EAL7 (EAL7+) is that it also has a formal proof that the source code itself is equivalent to the high level design. I believe NICTA has projects underway to produce some of the other components such as covert channel analysis.

You could say that seL4 has achieved and exceeded the most difficult part of the EAL7 assurance process. It just doesn’t meet the other requirements yet.

“It seems like seL4 may have a chance of booting onto a RPi and there are people trying to boot seL4 onto RPi. So far I have not heard any official success yet but who knows if someone actually got something running already.”

Seems like it would work. Given their instructions, I doubt it will be easy. The tools are primitive enough that it takes quite a bit of skill to get it in there. If I was toying with it, I’d just buy a BeagleBoard.

re RaspPi

The board can be used to run high assurance systems. The board itself can’t be qualified to a high EAL because it wasn’t designed to. The SOC and peripherals are all the cheap end of COTS. The firmware is probably untrustworthy. I haven’t looked to see if there is an IOMMU but I doubt it. The overall system would be EAL4 at best. With hardware, I’d call it EAL2-3.

What’s needed is a SOC designed for security applications at a low cost. I know Freescale and others have chips like this. It really just needs a CPU, MMU, IOMMU, PCI, and either a ROM or trusted boot. The extended version would have a tagging/capability engine, onboard crypto, onboard TRNG, optionally memory encryption/integrity, and optionally secure JTAG. Two could be put side by side with one an I/O processor. Otherwise, they could run together for fault tolerance.

There’s cheap I.P. cores one can license to build these. Along with what’s on the opencores web site. I’d actually recommend against ARM because it can be up to $15 mil to license the ISA. MIPS is usually under $1mil, SPARC is $99 if you build it yourself, and there are affordable SPARC IP’s (eg Gaisler). POWER is being licensed now although I don’t know at what cost. These architectures collectively have more good compiler, OS, and security research targeted to them than anything else in embedded scene. On top of that, MIPS and SPARC already have prototype tagging/capability engines from academia that might be licensed as the foundation of the SOC.

So, that’s where it should start. The upfront cost will be huge, though.

Meanwhile, I’m looking into refreshing my old physical separation scheme in the context of microcontrollers. A board full of cheap microcontrollers doing different logical functions, esp in terms of I/O. One or more master CPU’s is the compute node. A dedicated chip with I/O MMU connects the microcontrollers to main CPU & main memory. So, should provide acceleration opportunities along with isolation of devices from main CPU. I/O chip might even be on a PCI card that could be plugged into arbitrary computers. At $1-10 per chip, this might be a cheap solution if they can handle throughput with security checking enabled.

Inspiration from this project. Mine will be 32 bit naturally.

Nick P November 8, 2014 1:27 PM

@ Rick

That’s actually misleading. I was about to comment that Google was going to charge $25 to do what Dropbox would do for free (with optional encryption). Yet, looking at their page, they’re offering a whole suite of genomic processing functions. This looks less like pure genome storage and more like a free bioinformatics stack. Might be a good deal to people researching that stuff.

I agree that they’ll try to scheme out money or other selfish benefits from it, though. I wouldn’t use it personally.

Clive Robinson November 8, 2014 1:28 PM

@ Anura,

WARNING: The following is all conjecture on my part:

Not really conjecture; ToR is susceptible to various forms of Traffic Analysis (TA). I’ve been banging on about this for so long now it feels like forever.

The military have been aware of this issue since TA was invented at Bletchley Park back in WWII, and have found various ways of dealing with it since then (mainly under the “broadcast” model).

However the ToR people have chosen to ignore the warnings, and have not implemented even the basic steps to prevent TA attacks. The reason appears to be one of denial, rather than logic, even after the Ed Snowden revelations back in 2013 that made it abundantly clear that the US and other 5Eyes were eavesdropping on the Internet in a way that would make ToR very susceptible to all forms of TA…

A number of people over at the UK’s Cambridge labs were looking into more network-related TA back when “decloaking” hidden services via their TCP/IP timestamp deltas due to CPU clock skew was news, and what they found was not encouraging for the privacy-minded.

So for those thinking of using ToR to hide their activities from politically driven US agencies, my advice is as it has been for a very long time: don’t consider ToR to be effective.

Thus don’t do anything over ToR that would affect you detrimentally or otherwise if done using open unencrypted communications. Which means do not use any kind of PII, especially anything that would tie you down to a place and time (like your home / office / local post office / etc.) where your vehicle registration or face might get recorded on CCTV, or your fingerprints or DNA can be found on the inside of a mail box, etc.

Rick November 8, 2014 2:12 PM

@ Clive R.

Re: TOR usage: “Thus don’t do anything over ToR that would affect you detrimentally or otherwise if done using open unencrypted communications.”

And that’s the unfortunate consequence of today’s battle that is currently being lost by privacy advocates. There is hardly a domain that has not been breached. What constitutes high-assurance privacy, now? Face to face in a room without any tech whatsoever? I thought about that for a minute, and then speculated…

…if we keep on the current trend, I would guess that even face to face communications will eventually be monitored by satellites (or blimps or drones?) equipped with FLIR-like technology (through any weather and even other barriers), in real time, DSP preprocessed/denoised by AI, and then flagged for human analysis if matched to a current list of “naughty behaviors” as determined by whatever political winds are blowing. We’re on a collision course with a world that is illustrated in the movie, “The Matrix”. Except WE created it.

I ask myself, how can this be reversed? Democracies and republics thrive on educated populaces. More than a year after Snowden, there is little forward movement. There is more general awareness, perhaps some public angst but not enough to create a tidal wave of change. I’m disappointed. And impatient, too.

BoppingAround November 8, 2014 5:36 PM

Rick,

Face to face in a room without any tech whatsoever?

And, probably, several miles underground.

MrC November 8, 2014 8:22 PM

Does the demise of Silk Road 2 imply a massive assault on TOR? Probably not. The criminal complaint (http://www.scribd.com/doc/245742360/Benthall-Blake-Complaint) indicates that an undercover Homeland Security agent infiltrated Silk Road 2 before it even opened for business. In fact, it sounds a lot like the undercover agent was the second DPR. So this looks more like a case of “feds selling drugs” than a case of “OMG massive TOR breach!” Could it be parallel construction? Probably not. An undercover agent is too difficult to fake, especially when there are so many other, easier lies they could tell. I’d suggest keeping an eye on the trial to see if there are any shenanigans about trying to keep the undercover agent off the stand. (Aside 1: Mr. Benthall would do the world a great favor by insisting on his constitutional rights to a trial and to confront the witnesses against him. If this undercover agent suddenly evaporates, then it’s time to panic…) (Aside 2: While this case doesn’t look like an instance of parallel construction, the case against the first DPR smells very fishy…)

@CallMeLateForSupper:
NYT’s paywall is script-and-cookie-based. As I discovered quite accidentally, noscript and self-destructing cookies will go right past it without any special configuration — just good default security/privacy settings.

Thoth November 8, 2014 8:32 PM

@Rick
What Clive Robinson meant was that TOR, despite knowing it is vulnerable to TA, chose to bury the issue. That is one of the most detrimental attitudes for anyone running open source security software; it shows they do not care too much, in a sense. Nothing personal against the TOR guys, since I do not know any of them, but the attitude of pushing aside or burying problems in security software can lead to more problems.

What is high assurance ? All round, well worked out life cycle and implementation. A guarantee of the security implementations (practical) and formalization of definitions (academic) where the security proofs are highly robust in face of highly adverse conditions. This includes TA, MitM, tampering with internals and many such.

Nick P, Clive Robinson and I have posted a whole slew of stuff on assurance mechanisms, of which Nick P and Clive Robinson posted the most and probably the best material around here.

You can say that TOR is useful for up to medium strength adversary (MSA) or even low strength adversary (LSA) entities since it does not consider much of TA and covert channels of sorts. The machine it runs on is not guaranteed at all thus endpoint security is terrifically weak (quote Ed Snowden).

I agree with the view on education, as I am also struggling very hard to get people inside the ITSec industry in my country to look into high assurance security instead of the pretty useless homebrew crypto that has very little effect against even an LSA. You could run a replay attack or swap their certificates out, and I can show you how their supposed security falls apart. They think crypto has magical properties, but they do not consider other impacts. They are also extremely complacent about their personal security and leak their PII on a frequent basis despite working in the ITSec industry, and most of those working in the ITSec industry have very weak backgrounds. All they know are the magical properties of mechanisms, not how they work, and when details are given and I show how to break their concepts, they whine loudly, become irritable and claim that I am too paranoid for my own good 🙂 .

So what is left if TOR is considered vulnerable? We need to think along the lines of a layered approach, just like how TOR itself was designed. Like an onion. We decide what level of protection we want for different communications and assign them to different mechanisms. The most you can expect of TOR is protection up to the level of an MSA. If you need to resist a high strength adversary (HSA), then you might want to start looking into Tinfoil Chat (TFC), which you can find at https://github.com/maqp/tfc, though I would warn you that it is not a finished product and you use it at your own risk. Proper OPSEC will also help you greatly along the way.

Put simply, figure out how much protection you need and apply mechanisms accordingly, with an understanding of their internals. For now, it’s quite implausible to run around with a high assurance solution all the time, as it can be rather inconvenient (a result, in part, of government agencies trying to subvert secure comms since the beginning of time).

@Nick P, Markus Ottela
Does TFC have its own TA prevention methods? That is the part I am not very sure about, since TFC simply sends out strings of encrypted communication, but there is no mention of TA prevention. Would TA prevention be useful for TFC?

Thoth November 8, 2014 8:40 PM

@MrC
After the breach of Silk Road, those who buy and sell stuff there should be very cautious about not making the same mistake twice, and should be careful of clones. The most they should do is exchange asymmetric public keys on Silk Road 2 and then do their business somewhere else afterwards (and they need to verify the public keys offline, out of band, or even face to face).

Most people trust a clone site because it symbolizes the renewal of their fallen ideals, and that is exactly what the high strength adversaries (HSAs) want: to trick people into thinking a revival of the old site is around the corner, and to compromise them all. Put simply, do not be gullible to such trickery until a site can strongly prove itself worthy of trust.

Trust in the digital world has been greatly broken by HSAs like the NSA, FBI, GCHQ, BND and such. They actively seek to destroy any security or trust in the digital world (or even any goodness and virtue left), to the extent we know from the Snowden leaks.

@Rick
You mentioned that even after the Snowden leaks, people have not risen up against this destruction of trust. That is due to the thorough and sustained efforts these HSAs have made to root out any strong movement that could arise. They have already considered such scenarios and have worked hard at rooting them out.

Thoth November 8, 2014 9:03 PM

@Nick P
If they have seL4 for CHERI, it would be really useful for such security chips.

I would agree that the firmware and chips of the RPi wouldn’t be considered trustworthy.

8675301 November 8, 2014 9:06 PM

  1. Yes to Freenet. In the long run the only road. +1 toad.
  2. Tor’s attitude to TA is the same as its attitude to this hack: LA LA LA, we can’t hear you. “We don’t condone illegal activity.” (snort). At this point in time I wouldn’t trust them with my banking credentials.
  3. Education in America means sympathy and sentimentality, not intellect.

Figureitout November 8, 2014 10:11 PM

A-fly-on-the-wall RE: tor attack
–Mike the Goat hinted at it, almost exactly what this is: attacking the hosts of hidden services. He was worried about either exposing himself or his source. He’s fairly competent in traversing the ’net and has been doing it longer than me, so I think he can deny it away.

http://mikethegoat.wordpress.com/about-the-goat/comment-page-1/#comment-319

Thoth RE: Mooltipass criticisms
–While I certainly want to see people pushing for greater security and critiquing projects, I think you’re being a bit harsh and unfair, and attacking the wrong things.

ANY device to which the enemy gets physical access is pretty much game over; this is another “accepted reality/truth” that I believe the security community accepts. Mitigating that involves bringing in more complex detection devices, wiping functions, or…thermite. Simple IP cameras or sending out an SMS on detection won’t cut it; you know those commercial products have been attacked and can’t be trusted. Instead you need a custom remote device, advanced detection, and backup power to make sure the warning gets out. (In his escapades, Clive R. has mentioned using a black “exercise mat” w/ resistors to “hide” from infrared detectors, and in my personal tests I was able to reduce the detection range by half just w/ black material. Microwaves are superior IMO, but it doesn’t hurt to have both.)

ANY chip along the supply chain can be reprogrammed if you know the procedure and can fit your malware within the space constraints (my cheap little smart card reader says right in the data sheet that it can take firmware upgrades via USB… And notice that driver installs from a company like Intel come as an executable that can then reprogram the BIOS…); and those space constraints are becoming less relevant as the amount of functionality and code you can fit into these chips gets insane. Let that sink in… no one on the planet can truly certify our current insane supply chains and mitigate physical threats (w/o guarding the hardware and attacking invaders, killing them if need be).

It’s multi-platform and open for more mods. It’s going to have holes. Maybe an idea would be: the device opens up a VM, opens a text editor and types the PW there. Get all the passwords you need for your session while disconnected from the internet and paste them over into an unsaved text file; then remove the device, shut down and delete the VM.

http://www.pendrivelinux.com/boot-a-usb-flash-drive-in-virtualbox/

My worry then shifts to the copy/paste buffer: how is that memory being handled, and can we delete it? Instead of directly typing into every different type of login page w/ JavaScript… Virtual keyboards are also very interesting…

https://www.raymond.cc/blog/how-to-beat-keyloggers-to-protect-your-identity/

What I get from the project is: look at how a bunch of strangers, over the internet, collaborated and got a product to “market”. This development style can be infected of course, but instead of relegating ONE person to each area, you could have TWO sign off (or 3, 4, 5…); it’s still a matter of trust if you want all the cushy, nice-to-work-with features of modern PCs. It’s open source, so if you can find a malicious developer, and maybe get their face, current location, and probably fake name, that’s one more malicious attacker the community can keep out of its projects.

Read some of the Google Groups boards; the developers were hashing out problems and aware that this won’t in any way be a “lock & key solution”. Screen captures or good keyloggers on the PC will still get your passwords.

RE: your secure build
–Yes, we need more secure OS’s that are easier to use for RasPi. Dig into it and you will find, yes embedded development is hard as hell. I’m a noob in this area, and even for a much simpler project, there’s a big learning curve before you can finally start getting the patterns. Debugging is…hell (but when you find it, it’s better than sex). Took 1 whole day for a tiny part change to find the damn code we needed to change. And that’s even w/ using tools I didn’t create, software I didn’t write, and some other proprietary secrets hiding. So while it may not ever get to EAL7+, getting a “pretty good” secure OS for the RasPi could raise a lot of people’s security level.

I guess I’ll layout a little more of what I’m chewing on (I’m taking my damn time on my secure PC build, unlike so many of my other projects that I try to hack up ASAP).

OVERVIEW: Secure PC Design

Starting w/ the electrons needed for any operation, I want to have surge protection (lightning is the ultimate attacker; it will literally fry the PC), then an inverter, then excessive filters and regulation (I’ve got a decent amount of transformers), testing the cleanliness of the sine wave entering my PC w/ my old-school oscilloscope. Clean power is critical for me, as a lot of my PC will also involve ADCs and looking for certain voltage levels.

The initial boot I want to be w/ an array of dip switches laid out and labeled for hex values that will be read by the first authentication chip, connected to a shielded LCD: http://www.4ehsbyehs.com/rf-protection/LED-LCD-screen-protection-film

http://www.lessemf.com/computer.html

I won’t go this far nor this far either, haha (besides, your ears and arms are still unshielded, lol). If that number doesn’t match the hex number I set on the dip switches, it will power down (and probably leave memory alone; I’ll keep important memory w/ me 24/7 anyway). Upon authentication, a signal is sent via opto-isolated and shielded cables (or PCB, not sure yet) to turn on on-board RNGs that start spewing noise and creating entropy. Some in boxes like this: http://www.wb4hfn.com/DRAKE/DrakeEquipmentPictures/Equip_TV1000LP_01.htm and some exposed under the initial shield to add noise to any processor signals leaking. The outer exterior box will be something like this: http://www.ebay.com/itm/Lindgren-Model-T-M-Copper-Brass-RF-Enclosure-Faraday-Cage-14-Cube-T-M-Test-/131184360092?pt=LH_DefaultDomain_0&hash=item1e8b327a9c and will probably have carefully drilled holes for shielded, one-way lines for the keyboard and an old-school mouse (using the old roller; I’m using one for one of my PCs now and it functions beyond well enough, with no laser emanations).

All my chips will be in chip holders: http://www.technologystudent.com/images7/chp4.gif so as to make swapping out RNGs and chips easy if they go bad or the security gets compromised in an unacceptable way.

Then controller chips for either a homemade hex keyboard ( http://www.winpicprog.co.uk/pic_tutorial9.htm ) and likely encrypted keystrokes ( http://www.google.com/patents/US20100195825 ), which means heavily protecting the lines, getting them encrypted as quickly as possible, and decrypting them right before they’re needed.

That’s an overview, ignoring lots of dirty details and an actual implementation. Big theme of course for me is EMSEC, isolation, separation, Diodes/one-way flow, and complete user-control. I think all of those features have either already been done or are technically feasible. Integrating it all to run smoothly and getting the bugs out will be hard. I’m still swaying on if I want to tack on and hack away (as in cut out lots of fat) of existing OS’s, or write my own (which guaranteed will have bugs). Indecisive but I’m thinking on it any free moment I get; and I can’t really afford a whole lot of testing untested things, risking destruction. Also, I may…inch away from such a tiny microcontroller core so I can get some actual useful performance and use of the PC won’t be so painful. I’m thinking an SD card potentially for at least some normal I/O; so long as I can mitigate enough in the design and my mind that any incoming files from untrusted PC’s can’t do damage to other parts of PC.

What to use it for? Initially it’ll be for encrypting files, but I really want a truly hardened machine for programming, so I can relax when I’m programming. As it stands, I’m running mostly off LiveCD/USB systems and don’t keep nice IDEs and all my files on disk. Also, when I go to program at my school, sometimes I log in and all the programs are gone… I have to code using MS Visual Studio (2012), which is a behemoth, massively disgusting program, and I sometimes wonder if the bugs are just attackers corrupting the PC.

Got some other crap to do still, just took a hit by losing a handy motherboard (had crypto-accelerated CPU), which I think it really is the motherboard now as the DC-DC converter works again (WTF), and I think voltage readings of 11V need to be 12V. Still doesn’t turn off after I turn on, have to short some pins to turn it off which I’m sure is damaging it. No POST or beeps, I think I did some irreparable damage…b/c I was too excited to try an antenna and not thinking. Almost got that antenna working on another router, which wasn’t much homebrew but the final product is much cleaner.

Also life is kicking my ass right now.

Nick P November 8, 2014 11:21 PM

@ All
re Tor

To be fair, they say on their site that global adversaries aren’t in the threat profile. That’s basically intelligence services with visibility of each node. So it’s not necessarily global so much as visible at each node, and that could be done in a more targeted way. They also have updated the protocol many times to deal with all sorts of attacks. And even if the NSA can listen, there are still plenty of people who benefit from Tor.

@ Thoth

re seL4 on CHERI

It’s a good combo. CHERI is a capability system, though. So, EROS, KeyKOS, or Hydra on CHERI would be even better as they’re capability systems. Look at those in the links I gave you. My concept was to port an OS like that onto CHERI, then port the E programming language or a capability version of Python (or Oberon or Go) onto that. You get end-to-end, ground-up security by default with ease of programming and good performance.

re TFC traffic analysis

Far as I know, it doesn’t do that. I about want to smack myself in the head for not telling Markus about this as it’s a critical feature. Traffic analysis results will be based on two covert channels: the length of the message and its timing. The timing mainly says the person is communicating. It can tell you some plaintext, but not usually that much. The length, on the other hand, might help you recover plaintext.

So my normal recommendation for link encryptors applies here too: send messages of fixed size at a fixed rate. This is one of the reasons I ditched OTP for either OTP/symmetric hybrids or symmetric schemes; it would’ve helped me justify him doing it if I had remembered it at the time. In TFC, there will be a null message that’s the same length as the others, which goes through encryption and onto the transport stack. Other messages will be split across these. Best to put the length of the message in the header rather than use an end-of-message character.
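A small sketch of that fixed-size framing idea; the frame length and padding scheme are arbitrary choices for illustration, not taken from TFC itself (in practice each frame would then be encrypted before hitting the transport stack):

    # Fixed-size framing sketch: every outgoing frame is exactly FRAME_LEN bytes,
    # with a 2-byte length header, so message boundaries and true lengths are hidden.
    # A frame whose header says length 0 is the "null message" sent at a fixed rate.
    import os

    FRAME_LEN = 256          # arbitrary fixed frame size for illustration
    HEADER_LEN = 2

    def make_frames(message: bytes):
        payload_room = FRAME_LEN - HEADER_LEN
        chunks = [message[i:i + payload_room]
                  for i in range(0, len(message), payload_room)] or [b""]
        frames = []
        for chunk in chunks:
            header = len(chunk).to_bytes(HEADER_LEN, "big")
            padding = os.urandom(payload_room - len(chunk))
            frames.append(header + chunk + padding)   # every frame is FRAME_LEN bytes
        return frames

    def null_frame():
        return (0).to_bytes(HEADER_LEN, "big") + os.urandom(FRAME_LEN - HEADER_LEN)

    def read_frame(frame: bytes) -> bytes:
        n = int.from_bytes(frame[:HEADER_LEN], "big")
        return frame[HEADER_LEN:HEADER_LEN + n]       # padding (and null frames) fall away

    msg = b"meet at the usual place"
    frames = make_frames(msg)
    assert all(len(f) == FRAME_LEN for f in frames)
    assert b"".join(read_frame(f) for f in frames) == msg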

Good catch, Thoth!

Rick November 9, 2014 12:38 AM

@Thoth

“What Clive Robinson meant was TOR, despite knowing it is vulnerable to TA, chose to bury the issue.”

I (somewhat thoughtlessly) used his point that TOR shouldn’t be trusted as a springboard for my own tangential observation: that if TOR can’t be trusted against the HSAs, then what can the average ‘John Doe’ do? Is there a technological answer? Or a political one? Or both? I was expressing frustration over issues I am both vocal and passionate about. I didn’t intend to detract from the significance of his point that TOR ignores its own presumed mission statement: robust anonymity.

{The lack of forward motion since the Snowden revelations} “is due to the thorough and strong efforts these HSAs have done to root out any strong movements that can happen.”

Bruce Schneier is closely associated with the EFF. I would love to know his opinion about the observation that there has indeed been disappointingly little progress. Would he agree from his unique vantage point? Or disagree? Am I too impatient to be disappointed with the apparent apathy of the public a full year after the disclosures? Or are you more correct by saying that the HSAs (and TLAs) are actively mitigating the public response with countermeasures? i.e., infiltration of groups, sabotaging progress, weakening goals, impeding potential sources of funding, etc. You pose an interesting angle I hadn’t considered. Although, I can say from personal experience that I hear “if I’m not doing anything wrong, I have nothing to hide,” far, far too often from my own clients and associates. Even intelligent and educated clients who know better. And that is truly frustrating. I think first-world countries really need to understand that privacy is integral to society’s proper function given that power, as a rule, cannot be trusted; especially so when power’s day to day business is not illuminated by transparency to the public. Think of: NSA, deep state, and FISA courts in the USA.

Unfortunately, 8675301 is onto something revealing when he states, “Education in American means sympathy, sentimentality not intellect.” I think what he means is that logic and lucidity are not the first and foremost guiding principles (as they should be), and, furthermore, that Americans are largely asleep, for lack of a better expression. Buried in political correctness, muddy thinking, nebulous and unobtainable goals. Corrupt minds.

As for the tactical points you make about assessing the threat model and applying the necessary layers (as in an onion) to keep the threat at arm’s length, they are duly noted. Only 18 months ago I was largely ITSec unaware, but reasonably computer literate. I’m learning. However, for the average John Doe, the onion model of security/privacy will test his resolve. For most, ITSec is an afterthought. I wish it weren’t so since in the short term, the tech solutions are more viable and effective than the political ones.

Strategically-speaking, if the HSAs were to act on the data they monitor in a heavy handed way, there might be more of an outcry, and therefore, provide more traction for the privacy-minded. Perhaps that is why “parallel construction” seems to be more convenient for them when they wish to move against a target.

To achieve the anti-Orwellian meritocracy we’d all like to participate in, we need to hold HSAs accountable, and we need full transparency of their behavior to do it. We need the public to prime the pump, and pressure elected officials to enact accountability. Currently, elected officials represent corporate interests more than the public interest. A strong, effective lobby (equivalent to the NRA (National Rifle Association) in the USA, or PhRMA, another effective group that pushes pharmaceutical research) could begin the process toward achievable goals.

The following statements from those in power are arrogant, elitist, often irrational, antithetical to the goal, and demonstrate contempt for liberty:

http://online.wsj.com/articles/fbi-chief-warns-phone-encryption-may-have-gone-too-far-1413489352

http://www.zdnet.com/uk-spy-chief-throws-privacy-in-the-fire-says-its-not-an-absolute-right-7000035368/

Those holding these positions of power must be replaced to make tangible progress. It might start with the education of a nation, but what a Herculean effort! I fear we won’t see real progress for generations to come. Until then, I suppose reliance upon (layered) tech solutions will have to suffice to fight the battles.

Markus Ottela November 9, 2014 4:11 AM

@ Thoth:
“If a device is compromised, it should be assumed insecure right away in a traditional sense.” (Last week)

True. DHE would work if the adversary only copies the keys and tries to remain covert. If the hardware/firmware is additionally replaced, there’s no way of ensuring the internal RNG is actually working, or even that it’s actually the GPIO from which entropy is obtained in the future. So better to replace the entire TCB and generate and exchange new pre-shared keys before continuing private conversations. There’s a chance the feature might add security, and in no case does it make things worse; even though the system isn’t designed to be secure against hardware compromise, the manual needs a section that advises users to replace the TCB units.

@ Nick P, Thoth

Regarding Traffic Analysis
Unless OTR is used to obfuscate the TFC message content, it’s easy to tell that TFC is being used. This is how an OTR-encrypted 140-char-long message shows up on an Openfire XMPP server in each case:

http://s28.postimg.org/3l3ds81ml/Plain_OTR.png
http://s11.postimg.org/70ut88m1f/TFC_OTP.png
http://s27.postimg.org/f9zsea4eb/TFC_CEV.png

Since the CEV’s encryption has to prepend nonces inside the layers, it’s impossible to produce ciphertext of the same length as the original message. In the case of OTP, the message is also longer due to the appended line number and base64 encoding of the ciphertext. The problem only gets worse with shorter plaintexts: OTR leaves them shorter. I’m not sure what type of padding OTR uses, but until it is raised to ~400 chars before encryption, there’s no guarantee TFC messages are indistinguishable. I’ll have to add a warning to the whitepaper about this.

“Would TA prevention be useful for TFC ?”
My guess is that, if not already then at least in the future, whatever automatically compromises endpoints that use OTR in order to exfiltrate keys and logs (the very thing that makes TFC necessary in the fight against mass surveillance) might check whether NH.py is running. So hiding the presence of TFC is unlikely to be possible.

Length of message:
The plaintext is always padded to 140-char chunks prior to encryption in every version of TFC. In CEV, Keccak pads the plaintext to 32-bit blocks. CTR-mode Twofish and AES also pad their plaintext blocks to 16 bytes. So the length of the original message is not obvious.

Timing:
There’s an option to use a pseudo-random time interval between packets of long messages. So if a message were 200 chars long, it would be sent over two Pidgin messages, separated by a time interval anywhere from 0.3s up to a user-configured value, 13s by default. This was the best I could do without wasting OTP key material.

“–send messages of fixed size at a fixed rate.”
This is possible with CEV. My intuition says the best approach would be to have three threads for TxM. The first one encrypts null messages with AES (GCM) under an additional static key and adds them to the send queue. The second one reads user input and encrypts it with the cascading encryption using the PFS key before placing it in the send queue. The third one reads messages from the send queue and sends them at constant intervals.

On RxM, first decrypt with AES (GCM) using the static null-message key. If the MAC succeeds and the message decrypts to the null value, continue with the main loop. If the MAC fails, decrypt the message using the PFS keys to see if it was a legit message. If the MAC fails again, only then warn the user about a tampered message. If the null messages were to use contact-related keys, Rx.py might have to iterate the session keys for a very long time to ‘catch up’.
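A rough, runnable Python sketch of that three-thread idea, purely illustrative: encrypt_null, encrypt_with_pfs and transmit are placeholders I made up, standing in for the AES-GCM null-message key, the cascading PFS encryption and the serial write towards NH; the interval and frame size are arbitrary.

import os
import queue
import threading
import time

FRAME_LEN = 140       # fixed ciphertext size, assumed for illustration
SEND_INTERVAL = 0.5   # seconds between transmissions, assumed

send_queue = queue.Queue()

def encrypt_null() -> bytes:
    # Placeholder for AES-GCM under the static null-message key.
    return os.urandom(FRAME_LEN)

def encrypt_with_pfs(msg: str) -> bytes:
    # Placeholder for the cascading encryption under the PFS key.
    return os.urandom(FRAME_LEN)

def transmit(ct: bytes) -> None:
    # Placeholder for the serial write towards NH.
    print("sent", len(ct), "bytes")

def null_feeder():
    # Thread 1: keep the queue topped up with cover traffic.
    while True:
        if send_queue.empty():
            send_queue.put(encrypt_null())
        time.sleep(SEND_INTERVAL / 2)

def user_input_reader():
    # Thread 2: encrypt real messages into the same queue.
    while True:
        send_queue.put(encrypt_with_pfs(input("> ")))

def constant_rate_sender():
    # Thread 3: emit exactly one fixed-length ciphertext per interval,
    # so an observer cannot tell real traffic from null traffic.
    while True:
        transmit(send_queue.get())
        time.sleep(SEND_INTERVAL)

for worker in (null_feeder, user_input_reader, constant_rate_sender):
    threading.Thread(target=worker, daemon=True).start()

threading.Event().wait()   # keep the main thread alive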

I’ve considered the idea with AES. The issue is, I’ve already had one XMPP service provider ban my account for ‘spamming’ as I was sending TFC messages to another account of mine during development. If an XMPP service provider accepts 150 000 messages daily for each TFC user, and is willing to pay for the storage that data-retention laws mandate, why not. Additionally, if there are multiple contacts, all of them need to be messaged constantly. So whenever the user logs in, receiving offline messages has an impact on performance and, in some cases, even on data plans.

Scott "SFITCS" Ferguson November 9, 2014 4:28 AM

@Thoth

After the breach of Silkroad, those who buy and sell stuff there should be very cautious about not falling into the same mistake twice and be careful of clones.

Agreed, and… (obviously an uninformed point of view of crime)

…that’s an instance of the virtual mirroring of the real world (Risk Management?).

  • You can’t trust what you buy
  • You can’t trust who you buy it from
  • You can’t trust where you buy it (locale)
  • You can’t trust communications
  • You can’t trust currency
  • Parallel construction is always a real risk (even before the NSA’s activities became widely known)

And, as a general rule – even “working deals” don’t tend to last more than 2 years.

Just some random guesses.

Kind regards

Scott "SFITCS" Ferguson November 9, 2014 4:39 AM

@8675301

Tor attitude to TA is same as it is to this hack. LA LA LA we can’t hear you.

The lure of convenience (is the enemy of security), coupled with the confirmation bias of gut instinct (“it can’t be that complicated”) and the bias of invested interest (losses i.e. past effort, loom larger than gains) == disaster??

There is the lemming benefits theory (I just made it up!) – the cautious fall less distance and land softer.

Kind regards

Markus Ottela November 9, 2014 4:46 AM

@ JK
“Maybe this Mooltipass device is a better alternative?
https://www.indiegogo.com/projects/mooltipass-open-source-offline-password-keeper

I have my reservations. Generally you always want to make sure the operating system is malware-free. Otherwise it doesn’t matter whether you use a keyboard, an external device or a software password manager to input the password:

Quoting the page
“A software-based password keeper uses a passphrase to decrypt your credentials database located inside a device (computer, smartphone, etc.) This means that at a given moment, your passphrase and your database are stored inside your device’s memory, a malicious program with access to both of those pieces can compromise all your passwords at once!

This ignores the fact that persistent malware will eventually get all the passwords you’re using anyway. The only secure implementation I’ve seen is my bank’s, which physically mails me OTP keys.

“We are using AES-256 encryption in CTR mode, brute-forcing the encrypted credentials would take more than fifty years.”

I seriously hope it’s a lot more than 50 years. Nothing is said about what generates the encryption key.

65535 November 9, 2014 5:03 AM

@C64

It’s not a TLD. It is somewhat like a hash.

“…[dot]onion is a pseudo-top-level domain host suffix (similar in concept to such endings as .bitnet and .uucp used in earlier times) designating an anonymous hidden service reachable via the Tor network. Such addresses are not actual DNS names, and the .onion TLD is not in the Internet DNS root,.. Addresses in the .onion pseudo-TLD are generally opaque, non-mnemonic, 16-character alpha-semi-numeric hashes which are automatically generated based on a public key when a hidden service is configured. These 16-character hashes can be made up of any letter of the alphabet, and decimal digits beginning with 2 and ending with 7, thus representing an 80-bit number in base32. It is possible to set up a human-readable .onion URL (e.g. starting with an organization name) by generating massive numbers of key pairs (a computational process that can be parallelized) until a sufficiently desirable URL is found.”

https://en.wikipedia.org/wiki/.onion
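For illustration only, a small Python sketch of that derivation as I understand it (the function name is mine): take the SHA-1 digest of the DER-encoded public key, keep the first 80 bits, and base32-encode them to get the 16-character name. A vanity name like facebookcorewwwi is found by repeating this over freshly generated keypairs until the result starts with the wanted prefix.

import base64
import hashlib

def onion_address(der_encoded_public_key: bytes) -> str:
    # Pre-v3 hidden service name: base32 of the first 80 bits of SHA-1(pubkey).
    digest = hashlib.sha1(der_encoded_public_key).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"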

The “facebook on Tor” is somewhat tangential to the Tor ‘privacy-weakness’ topic. Facebook supposedly just brute forced the first eight letters of their Facebookxxxx [dot]onion hidden service.

“taiganaut

“In other news, the .onion in the article was wrong, and a Facebook employe claims they brute-forced a ton of .onions with ‘facebook’ at the start and picked the one that looked best. I’ve brute-forced .onions with more than an 8-char sequence before, it takes a couple days on a crappy GPU. Some people are freaking out that this means all hidden services are “officially broken” as they said on Hacker News…”

http://arstechnica.com/security/2014/10/facebook-offers-hidden-service-to-tor-users/?comments=1&post=27883139#comment-27883139

[and]

“AdamBB

“Since an [dot]onion URL (hidden service descriptor) is derived from 80 bits of a SHA-1 hash of a public key, the story of how they created this specific descriptor is pretty interesting. They generated a ton of keys and only kept those where the descriptor would begin with the characters “facebook”… BTW: the correct URL is facebookcorewwwi[dot]onion (article is missing the “core” part).”

http://arstechnica.com/security/2014/10/facebook-offers-hidden-service-to-tor-users/?comments=1&post=27883147#comment-27883147

[Facebook email]

http://archives.seul.org/tor/talk/Oct-2014/msg00433.html

It’s very possible that the brute-forcing of the [dot]onion name of Facebookxxxxxxxx was done as reported above. But I would guess that doing so would set off alarms at Tor. Or Tor individuals [or possibly other Tor experts] helped devise the Facebookxxxxxxxx[dot]onion name, which only brings up more security questions/issues with Tor.

Adjuvant November 9, 2014 6:14 AM

@Pete et al. Is it possible, in theory, to set up a website or a peer-to-peer system that cannot be taken down?

Secure Share: GNUNet + PSYC

I knew I was forgetting something! Here’s a scheme I came across a few weeks back during late-night browsing that looks particularly interesting — to my eyes, at least. It aims to implement distributed social networking on a completely P2P basis using GNUNet and the PSYC messaging protocol (PSYC technical) .

The project is called Secure Share, and comes from the folks responsible for symlynX.

I’ve been meaning to post this one for comment, since I seem to recall searching and failing to find any previous mention of it here.

In addition to the project itself, the site features a fairly exhaustive and exceedingly useful comparison of existing solutions for the use cases of social networking, file exchange, instant messaging, asynchronous messaging/email, and telephony/video conferencing, with extensive best-practice recommendations based on currently available options.

With respect to Secure Share itself, comments are welcome! Here are some highlights:

We call this “Secure Share,” a framework for sufficiently safe social interaction. It arose after realizing that there is no satisfying technology to address the issues we outlined in the FSW 2011 paper entitled “Scalability & Paranoia in a Decentralized Social Network.” Here’s what we mean by safe:

  1. updates, comments, postings, messages, files and chat are only visible to the intended recipients (not the administrators of any servers or routers)
  2. the type and content of a message cannot be guessed at by looking at its size
  3. communication between parties cannot be measured, as they may have anywhere from none to several routing hops in between. An observer never knows whether a communication originated where it appears to come from or ends where it appears to be going.
  4. automatic responses and forwarded messages can intentionally be delayed so that an observer cannot tell two communications are related
  5. communications cannot be decrypted weeks later, just because the attacker gained access to one of the involved private keys (forward secrecy)
  6. even if an attacker gains access to a cleartext log, there is no proof the material was actually ever transmitted by anyone (for a case in court mere data would not suffice, you need actual testimonies)
  7. the list of contacts is never managed on potentially unsafe servers, it is only visible to those it should be visible to
  8. the infrastructure is robust and resilient against attacks

Anybody who’s come across this before or who cares to offer an opinion?

graind November 9, 2014 6:15 AM

Re. LA LA LA Tor

The Tor team have constantly defended themselves by saying that Tor is not designed to protect against TA. This may have convinced some critics for a short while, but the situation is becoming untenable. The network is being compromised left right and center and we’re expected to sit back and convince ourselves that it’s all OK because TA doesn’t fall within the scope of Tor’s design. It’s a bit like saying “we’ve designed a really strong safe, but you can’t blame us for the fact we’ve cello-taped a key onto the door!”

tsauth November 9, 2014 6:24 AM

@graind: The Tor guys seem obsessed with latency and usability. To my understanding, this is the main obstacle (at least officially) against dealing with the TA design flaw. Sounds like a good case for a fork?

Scott "SFITCS" Ferguson November 9, 2014 6:45 AM

@Adjuvant

@Pete et al. Is it possible, in theory, to set up a website or a peer-to-peer system that cannot be taken down?

Secure Share: GNUNet + PSYC

An interesting project. However it doesn’t allow hosting a web site – and while it might be possible to implement that, it would appear that would involve hosting it yourself. In which case it could be taken down. Note: if you put up a website or blag with Freenet, the site doesn’t vanish if you shut down your computer – it’s deniably distributed across Freenet.

Redecentralize.org

More of an interest group than an application or framework. They appear to advocate ToR.

Syndie

Is interesting. It uses Freenet (and adds a layer of anonymization) for webhosting and blogs (blags) – but doesn’t use Freenet’s mail system.

Kind regards

Thoth November 9, 2014 7:37 AM

@Figureitout
It is hard not to be harsh when you see another pitfall someone is going to walk into. For those who are in this industry (ITSec or Mil-Industry guys), you know the familiar stale and rotting stench of this very industry controlled by Governments. Blame the EAL 3/4 encryptors and HSMs that have been dressed up as high assurance devices when they are not.

It is better to realize that Mooltipass is walking into a huge pitfall and to wake everyone up and show them how to do stuff properly (and also practically) than to see people walking into more pitfalls.

@Nick P, Markus Ottela
I was wondering how I am going to send a last-resort message (when nasty things happen) to a partner. One idea is to reserve certain bit strings of the OTP keystream and to agree on a last-resort message. By encrypting the last-resort message with those reserved bits, and probably doing an HMAC on the message with the last bit string or the second-to-last bit string, you can more securely send out a last-resort message.

If my last-resort message is “IAMOWNED”, which is an 8-byte message in ASCII, I would need 64 bits of keystream of the same length. Say I take those 64 key bits from the last 64 bits of the OTP keystream: that gives 2^64 possible keys, and if birthday attacks actually have a use here, roughly 2^32, which is simply too easy to guess for NSA/GCHQ/BND etc. And if my message is something very short, so that the keystream and the word are the same short length, I have an equally short keystream, and that will make a brute force easy.

My proposal is to copy how block ciphers work. Use a fixed block of a strong length and do your XOR on it. Pad the blocks and you are good to go. We can’t assume the user is going to do something sensible to hide their TFC messages. I recommend a block of 128 bits per down/up stream message, with a keystream of 128 bits to match, so that it would effectively be as hard to break as a 128-bit key. You could push it down a little to 96 bits but no more than that, and if you want more security, use a 256-bit block per down/up stream message (a modern 256-bit key size). Store the encrypted message blocks in a buffer and flush one block downstream every 3 milliseconds. This gives the crypto engine some time to do its work and controls the traffic flow. If you have a 257-bit message you will need 3 blocks of 128 bits, wasting 127 bits of padding, but flushing 384 bits of message in 3 blocks at regular 3-millisecond intervals leads the adversary to think there are 384 bits of message when there are actually 257. They have to guess 2^384 keystream bits for the key bits.
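A toy Python sketch of that block idea, under my own assumptions (random padding, 128-bit blocks, and no handling of how the receiver learns the true length): pad the message to whole blocks, XOR it against an equal amount of one-time-pad keystream, and emit the resulting blocks at a steady rate.

import os

BLOCK_BITS = 128
BLOCK_BYTES = BLOCK_BITS // 8

def pad_to_blocks(message: bytes) -> bytes:
    # Pad with random bytes so the length is a whole number of blocks.
    shortfall = -len(message) % BLOCK_BYTES
    return message + os.urandom(shortfall)

def otp_encrypt_blocks(message, keystream):
    # XOR the padded message with an equal amount of keystream and
    # return the ciphertext split into fixed-size blocks.
    padded = pad_to_blocks(message)
    if len(keystream) < len(padded):
        raise ValueError("not enough keystream for this message")
    ct = bytes(m ^ k for m, k in zip(padded, keystream))
    return [ct[i:i + BLOCK_BYTES] for i in range(0, len(ct), BLOCK_BYTES)]

# A 33-byte (just over 257-bit) message pads to 48 bytes, i.e. 3 blocks; the
# sender would then flush one block downstream every few milliseconds.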

@Mooltipass et. al.
My pointing out of the Mooltipass problems has to be harsh, otherwise the point does not get across. Facts are facts. Very sorry. It would be nice if the owner of the Mooltipass project knew about all these vulnerabilities and joined in the discussion to make it more resilient. We have good people here who know what they are doing.

@Tor Traffic Analysis et. al.
It’s probably only a matter of time before someone breaks Tor really badly, and the Tor team becomes untrustworthy if that happens. New protocols come and old protocols go. Nature’s work, I guess?

Benni November 9, 2014 9:09 AM

The German secret service collects zero-day exploits to crack encrypted internet communications. The so-called office for the security of information systems (BSI) had a contract with hackers in 2014, too. http://www.spiegel.de/spiegel/vorab/bnd-will-informationen-ueber-software-sicherheitsluecken-einkaufen-a-1001771.html

It says that this would be only for protecting government networks. But that is doubtful, since the BSI was formerly a BND department and by law it has to support the German intelligence services: http://www.taz.de/1/archiv/?dig=2005%2F01%2F18%2Fa0180

CallMeLateForSupper November 9, 2014 10:02 AM

@MrC
Very interesting, what you say re: circumventing NYT paywall, “…noscript and self-destructing cookies”. While I default to Javascript disabled, I flatly refuse cookies (from anywhere), so apparently the latter keeps me out.

BoppingAround November 9, 2014 12:08 PM

Rick,

then what can the average ‘John Doe’ do? Is there a technological answer? Or a political one? Or both? I was expressing frustration over issues I am both vocal and passionate about.

I fear that the solutions are scarce.

I am sceptical regarding political answers. You know, one law for the rich and another one for the poor. I am unsure regarding technological answers, though it looks like there could be one.

Frankly, I feel that I am not even remotely qualified to make any statements. Yet I feel very doubtful about the whole situation. It is quite obvious that TLAs (or {L,M,H}SAs, you choose) are not going to fade away; the same goes for data miners, advertisers, nothing-to-hiders et al. and their illicit practices regarding user data, security, privacy etc. ‘Do something’, but is it even worth doing? Can I repair the damage done, or am I trying to sting the wind that has blown my nest off?

I gotta go.

Nick P November 9, 2014 1:39 PM

@ Markus Ottela

That’s why you shouldn’t use XMPP. It’s a crappy, wasteful protocol anyway. A point-to-point protocol with NAT traversal is the best route. Add this to your todo list for future work, as the ability to beat timing channels is pretty critical to some applications. And remember that we’re not just designing a chat: each thing I suggest is designed to work with other applications with minimal modifications.

Benni November 9, 2014 4:34 PM

Ah no, the supercomputer is included in Nitidezza.
Swop is covert access to communication systems in order to feed data into Nitidezza….

xpath November 9, 2014 7:58 PM

@BoppingAround

Hello telescreen: http://www.brennancenter.org/analysis/im-terrified-my-new-tv-why-im-scared-turn-thing

> The only problem is that I’m now afraid to use it. You would be too — if you read through the 46-page privacy policy.

Sounds creepy. But Samsung does not have the infrastructure to collect the amount of data listed in that privacy policy. Or so I would think. Maybe I am wrong, but could it be that they use something like Google for the data collection?

Thoth November 9, 2014 8:28 PM

@Death from Above, Tor et. al.
I am not surprised most Tor setups are done on compromised machines, which allows HSAs or even MSAs to get into the insecure system and remotely control these compromised Tor machines.

The three factors for a secure system:
– Secure Hardware (very hard to do but not impossible)
– Secure Software (not so hard to do but unpopular)
– OPSEC (how many of us even bother to do that)

These three holy grails of a secure system have been violated since day one. No surprise that this is going to continue.

this is a name November 9, 2014 9:27 PM

http://www.theatlantic.com/technology/archive/2014/11/this-cyborg-cockroach-could-save-your-life-someday/382539/

This Cyborg Cockroach Could Save Your Life Someday or
DESTROY YOUR LIFE?

Insert quote from Homer on the Trojan War: catapulting dead animals, with fleas and insects, into a besieged city. There are other historical sources of insect warfare, and more recently:

World War II: Japan’s biowarfare Unit 731 committed crimes against humanity, including the use of insect vectors to carry diseases.

(Not confirmed, so check the original sources yourself.)

And of course the USA’s ‘bat bomb’, carrying incendiary devices to set Japanese cities on fire. The key advantage is that the bat flies at night, which would make it harder to defend against than a large bird.

Clive Robinson November 9, 2014 11:30 PM

@ JestInCase,

The common denominator for the TOR takedown(s) appears to be Bitcoin.

Yes, but as these hidden services are anonymous “goods & services” vendors, you would expect an “anonymous payment” method to be standard, and currently the only one with any traction is BitCoin…

Also consider there could be a “share of the bounty” involved; what better way to pay off “snitches” / agents and other contractors…

Which brings up the issue of, “What if BitCoin is actually the target not the services?”…

The easiest way to stop anonymous and thus –probably– tax free services is to stop there being any viable anonymous payment system.

Finally, yes, there have been stories going around that BitCoin is not quite as anonymous as people think. I’ve not seen anything yet that is more than hearsay with no technical content, so I’ve no reason to give it credibility. That said, BitCoin is quite complex, not just in its crypto coins but also in the backend systems, so it is possible, as with all complex systems, that it harbours failings that could be exploited to de-anonymize its use whilst not making it any less secure. After all, even cash is no longer anonymous in use: for higher denomination notes some banks scan the serial numbers and log them against the depositor’s details as an “anti-fraud” measure, a practice that will only get more widespread with time. Which is why, OpSec wise, you should only use low denomination money that will change hand to hand through the till/register during a day’s trading rather than go hand to bank at the end of the day after “cashing up”.

Clive Robinson November 9, 2014 11:48 PM

@ this is a name,

This Cyborg Cockroach Could Save Your Life Someday or DESTROY YOUR LIFE?

How about something a bit more contemporary than Homer –or the Bible for that matter–, how about the film The Fifth Element?

There is a scene where a cyborg roach is sent in to bug the president, who on seeing it smashes it with the heel of his shoe…

Daniel November 10, 2014 1:05 AM

@Death from above

Thanks for that link. There is a long-winded comment on that article and, while most of it is off-topic, I think its premise is sound, to wit: “I think the Tor project has reached a critical point in its development over the last 1-2 years. I call it ‘too big to work’ in contrast to the ‘too big to fail’ theory in economics.”

I think that’s correct. Tor is at a level where one of two things is going to happen: either people get serious about funding and providing the resources necessary to make it a much more secure system, or law enforcement will continue to exploit its flaws, people will stop using it, and it will be relegated to a backwater.

What I perceive is that there is a view among some Tor developers that they can have the best of both worlds. They can create a system that is exploitable by HSA/TLAs but that protects the ordinary folk. At best that point of view is naive; at worst it turns Tor into a honeypot for geeks. I mean let’s look at this honestly: what’s the use case for Tor right now? They went after the pedos and took down Freedom Hosting. They went after the druggies and took down Silk Road 1.0 and 2.0. Anyone with a lick of common sense should be able to figure out that what the US Government can do to the pedos and the druggies they can do to anyone they feel like. So Tor is only useful if you happen to be in the good graces of the USA and if one is in the good graces of the USA already then the overwhelming majority of people don’t need Tor to begin with. The only people who have a use for Tor are those who are in the good graces of the USA but not in the good graces of someone else–and that is both a limited audience and one that is ultimately self-defeating.

Thoth November 10, 2014 1:49 AM

@Daniel
If the USA/FiveEyes HSAs/MSAs can take it down, Russia, China and other nations can. It is only a matter of time. This good grace will not last long, as other powerful nations or rogue nations will sooner or later develop such abilities and turn into HSAs.

Has Tor turned unreliable, or was it unreliable against HSAs from the start? Here’s a thought I have: in a highly secured, high assurance environment, a misconfiguration would usually be isolated and the trusted environment would not be violated. If you can run a misconfigured or malicious setup and start leaking, I wouldn’t call it high assurance, high security; I would call it low assurance, low security. Tor uses crypto and its huge network of nodes to attempt to hide correlations in network traffic, but it fails to take into consideration environmental factors that include the physical machine, the user program setup, the traffic flow analysis weakness and many more.

A more extreme but also higher assurance method to end all this trouble is to build from scratch a secure, trusted communications setup that handles all these problems. A misconfiguration should not allow severe leaking of the system’s internal state, or the system should fail gracefully. The use of high assurance isolation kernels (seL4 and the like) with a minimal TCB would be preferable. Owning a high assurance customized machine would be difficult and costly, but the best that can be done is something along the lines of a properly configured open hardware system like the Raspberry Pi (although it is not made for security assurance). Mitigation methods to prevent catastrophic exploitation that de-anonymizes programs or compromises security within the system should also be a priority, as we have seen in recent events. In an emergency, a self-destruct to ensure the survivability and integrity of other users would also be a bonus. Traffic analysis resistance and tamper detection (at least) or tamper resistance (even better) would help a lot.

Andrew_K November 10, 2014 3:37 AM

Is there any evidence that SR2.0 was not set up by a TLA in the first place?

@ Anura, on Detection of TOR users/hidden servers
What you wrote on detecting hidden services is absolutely plausible. Yet another demonstration of how powerful metadata analysis is!
Anyhow, I doubt this is the truck-sized weakness, since it is not a weakness of running a hidden server in TOR; it’s a general weakness of running a server.

@ JK, on war on passwords
The password removal plans remind me of what I keep telling my clients: tokens are fine, but they can be seized or duplicated.

@ Pete, on systems that cannot be taken down
Is it possible, in theory, to set up a website or a peer-to-peer system that cannot be taken down?

In short: No.
The biggest weakness is communication between nodes. Even if you secure the nodes with the greatest effort, you still rely on communication lines. To take you down, just take down the whole communication system. Period.
The solution would of course be to install your own infrastructure, which in turn makes it very easy to find out who participates. Let alone the challenges of establishing a global communication system from scratch.

@ Rick, on genome data
On the Google/Genome thingy — I wouldn’t even know why I should want to store my genome in the cloud or even analyze it there.
The idea of research usage in the cloud is painful — it is a sure bet that there will be medical staff unwittingly uploading genome data of unknowing patients. And guess what, it will be the poor ones who have no other choice.
In this context: never forget the growing military side of Google. Who knows how far we are from weapons targeting just persons with a specific genetic pattern? Welcome to Holocaust 2.0.

@ Rick, BoppingAround, on private conversation
Regarding private conversation: it has become surprisingly hard to have a really private conversation. When I need one with someone, we go to a nearby public bath or gym. Not for the fitness, but for the showers.
I consider them quite hard to eavesdrop on without considerable effort: the walls are quite plain and easy to check for manipulation, there are no smartphones or other wearables, and the shower makes a solid background noise. Unfortunately such institutions with unisex showers are hard to find. A sauna might be an alternative, but I don’t trust the wooden panelling and there is no background noise.
An additional plus: you do not have to enter together, and the two of you training in the same timeslot does not necessarily mark an event of interest. Thus: make fitness a hobby!

@ Thoth, on how clients perceive crypto
On the disappointment of educating others on INFOSEC: crypto is magical. We are the 2.5% (just my gut-based estimate) of users who understand what goes on before the browser shows the nice lock symbol, and how worthless this in fact is. The other 97.5% of users have no choice but to believe what the magician says. Of course they cannot stand a chance against an LSA. We cannot and we won’t change this. We partly need to accept it.

And finally
@ All the Germans reading here
Best wishes on the 25th anniversary of the tearing down of the wall. It takes the heart of a lion to stand up to an organization such as the Stasi. My salutes to those on the streets of Leipzig, Dresden, and Berlin in fall 1989. Enjoy your freedom*, you earned it.
— Which can also be seen as an answer from history to what change it takes for education to become more educative.

  • Yes, I do realize that in Germany freedom has been restricted over the last years, too — but it’s still not half as bad as in other countries. At least you still have notable investigative journalism. And yes, I read about the BND asking for more money to break SSL, and Benni’s link. Investigative journalism. I just love it.

Clive Robinson November 10, 2014 4:26 AM

@ JestInCase,

I don’t know if it is true or not, but there are rumors doing the rounds that the other missing blogs and fora were collateral damage due to being either on the same server or colocated at the premises of a company that took anonymous payment in BitCoins.

There’s still a lot of unknowns about the story, and it might be that the blogs and fora were doing something considered illegal, because “free speech” is not a protected right in many places.

I guess we will have to see what comes out over the next few days and weeks.

Clive Robinson November 10, 2014 5:22 AM

@ Andrew_K,

With regard to the wall, it really does not feel like a quarter of a century. I was there on business and was actually quite scared that shooting would start and that, in effect, Berlin would become a war zone.

I look across at my book case and see the “bits of rubble” I brought back with me and it gives me hope that people can peacefully throw off oppression.

With regard to setting up your own anonymous communications infrastructure, it may be possible using other existing infrastructure that cannot be easily switched off. WiFi, for instance, will run as long as there is power, as we have seen from the Arab Spring uprisings. Thus if you think about pads and smartphones running Android with inbuilt WiFi, they might well form the nexus of a mesh network, for emergencies and disasters as well as for breaking conventional comms infrastructure monopolies. But more importantly, the hardware to support “Software Defined Radio” is getting extraordinarily cheap these days; it might not be long before we see SDR USB dongles that cover from HF through UHF, not just for RX but low power TX as well, especially with the ideas about “white space” utilization [1] spreading beyond the old analog TV bands.

[1] For those unaware of what “white space” is all about, TechRepublic did a “management level” blurb piece about it earlier this year: http://www.techrepublic.com/article/white-space-the-next-internet-disruption-10-things-to-know/

Andrew_K November 10, 2014 6:38 AM

@ Clive Robinson

Regarding independent comm infrastructure

I totally agree on the use of WiFi for local mesh networking (and I remember the corresponding article on this site) in emergency scenarios. Phones and pads should have a prepared “emergency mode” for local-area text communication. That might even save lives in disaster scenarios.

Personally I doubt that SDR TX will be broadly available. TX devices have a great tradition of regulation. Just imagine what damage one could cause with a device that searches for signals and then simply jams the frequency.

Which frequencies are btw. used for earwigs? 😉

Thoth November 10, 2014 7:30 AM

@Nick P
Another reason why mobile payments would fail – Host Card Emulation. It presumes secure storage of personal details in cloud-based environments, and users would simply log in to retrieve details from online servers. Sooner or later someone needs to log in, and an LSA would be enough to intercept credentials and play around.

Sceptical November 10, 2014 9:42 AM

Oftentimes in my capacity as Special National Security Adviser to Blogspot and 4chan, people will come to me and ask, “Mister Skeptical, I love America and I want to be safe, but sometimes, late at night in a dark, fleeting moment of the soul, I wonder, what am I getting out of the billions and billions of our tax dollars that they take from me and spend on NSA?” And I say, ever so kindly, with the total absence of priggish passive aggression for which I am renowned throughout the Internet, Do not be ashamed, my son, we all have impure thoughts. Let me explain the world to you.

I think you will also benefit from my explanation. I think we can all agree that we need the NSA to protect our critical National Security assets from imminent threats. And everyone knows that America’s single most crucial National Security asset is horny midget David Petraeus. Now NSA’s most mission-critical National Security operation is to support him in going to meetings and getting his ass kissed while he talks about the peace he has brought to the often-turbulent Arab world. Otherwise, America leaves itself vulnerable to the threat that people will make fun of him because he can’t keep his tiny dick in his pants, and in the worst-case scenario, he may not be able to serve as President of Israel I mean America.

Now no one will deny that one of NSA’s core missions is to protect America from the imminent threat of elderly pacifists and nuns. And, as we have seen, NSA has recently identified and interdicted a potentially catastrophic incident combining these two imminent threats. NSA was able to gather and collect covert plaintext messages from the shadowy jihadi darknet known as gmail. These actionable critical threat items revealed direct cooperation between the second-in-command of an emergent group of freedom-of-expression extremists and a radical splinter group said to be linked to Mother Theresa. Thanks to NSA, the NY Police Department was able to concentrate a strike force at the intended target of a mature plot to commit Q-and-A terrorism. NSA also obtained critical medical records so antiterror First Responders would know which shoulder is easier to dislocate.

We often have our disagreements here, but today I think we can all be justly proud of the NSA cyberwarriors who put their lives on the line every day to keep us safe.

Adjuvant November 10, 2014 9:53 AM

@Scott: Correct on all points, with caveats.

Secure Share: GNUNet + PSYC: An interesting project. However it doesn’t allow hosting a web site – and while it might be possible to implement that, it would appear that would involve hosting it yourself. True as things currently stand, given this is pre-alpha software based on alpha tech. Freenet-like FreeServices are part of the proposed feature set of upstream project Gnunet. (OK, that’s weak. But after all, the question was whether this is “theoretically possible.” 🙂

Redecentralize.org: More of an interest group than an application or framework.
Yes, it’s a roundup project (sorry, I was in a hurry). Includes actual applications & frameworks which are relevant, such as MaidSafe and Drogulus, and … hmm, in retrospect, those are the only relevant ones.

More roundups of related projects:
Cyberpunk
Alternative-Internet Crowdsourced List on Github — pretty exhaustive, from Redecentralize.org, and probably the link of theirs I should have included ab initio
P2P Foundation

And a presentation of the state of the ecosystem from the Freenet camp:
Slides

As far as in-place-and-you-can-use-it-today goes, for distributed offline publishing it does appear Freenet and its downstream are the only game in town (save NNTP).

Incredulous November 10, 2014 9:56 AM

@ Sceptical (not Skeptical)

Thanks! I needed a laugh this morning. Not the pained feeling I usually get in the pit of my stomach when I read Skeptical “posts”. (I could think of other descriptors but I don’t want to invoke the Moderator spirit, who seems to be haunting other venues.)

You were wise to alter the moniker. We don’t need a rash of impersonations, but a little sarcasm is salubrious.

Markus Ottela November 10, 2014 11:07 AM

@ Nick P

I understand what you mean: almost any web service today that is not about anonymous publishing, but about confidential sharing and communication, can be made to work with the trilateral waterfall security system. Many services however require faster data diodes, so unless COTS hardware is used, much more engineering is required.

I’m not sure if writing yet another client is necessary. I’m also not confident enough about my programming skills to go about writing one for a long time, so it might be better to look at existing clients such as Ricochet, which makes a good effort at passing messages through Tor hidden services. I’m thinking pushing features such as constant message relaying to an upcoming project has a better chance of success; the developers of invisible.im, who now contribute to Ricochet, did show interest in adding support for TFC.

Unfortunately I have to concentrate on my personal life and studies for the time being. This mainly affects adding features to TFC; I do my best to fix any bugs found.

Nick P November 10, 2014 11:52 AM

@ Sceptical

Yeah, that is funny. The linked story isn’t. Thanks for posting it anyway.

@ Adjuvant

Thanks for the links. Much as I like Freenet, I really think it’s a bad idea to get whistleblowers to use it to send stuff to journalists, at least if it’s about the U.S. government. The Freenet userbase is smaller than Tor’s, and the NSA/FBI operations are more effective against it than I’d like. Identification might be as simple as noticing Freenet was accessed in a cafe before a major leak, then getting the videos & Internet traffic from that cafe. We need most journalists & many potential whistleblowers using it before it can get near safe.

And on mobile? Are they serious?

@ Markus Ottela

Oh yeah, I didn’t expect you to roll your own. I was thinking of you just using some library that did P2P without routing everything through 3rd parties. Maybe the initial setup. Of course, if your personal life is demanding your time now then that’s probably where you should place it. You’ve certainly done commendable work with TFC so far. Others can build on it.

Personally, I plan to prototype an implementation on one of the clean-slate secure processors in the future. Still trying to acquire the funding that will be necessary to get the boards and software.

JestInCase November 10, 2014 3:04 PM

@ Sceptical (or Alt_Skeptical)

Good for a laugh, but it actually tied a couple of unrelated thoughts together for me.

The reference to G-mail and NSA triggered a realization: who among us is silly enough to believe that Google, in their quest to know all of the web, doesn’t have a couple thousand boxes loaded up with Tails? There is no doubt in my feeble mind that the big G has indexed the entire dark net.

It has been insinuated before that G is, or has been, in bed with the TLAs. If true, that (along with the use of Bitcoin) could be the link that identified the targets. Food for more thought.

Passing By November 10, 2014 3:56 PM

@Nick P

Puzzled by your comments regarding Freenet and journos. True that Freenet has a smaller user base than Tor. However, properly configured, Freenet is much safer than Tor too. The cafe example is terrible because the flaw you note is an issue of operational security, not Freenet.

Nick P November 10, 2014 5:52 PM

@ Passing by

It’s quite simple: Freenet attempts a risky protocol on a risky VM (Java) on a risky OS. The resulting risks should be enough for regular black hats to bypass to some degree.

As far as the risky protocol goes, Freenet has barely been analyzed compared to Tor. Yet there’s been one attack after another on Tor. Freenet probably has quite a few yet to be discovered. It needs more work at each layer before I’d trust it against a nation state opponent.

I like the protocol, though, vs one like Tor. I used a hardened version of it a while back for lower strength attackers.

Dud November 10, 2014 10:10 PM

Quote of Sceptical: “Now no one will deny that one of NSA’s core missions is to protect America from the imminent threat of elderly pacifists and nuns.”
https://www.schneier.com/blog/archives/2014/11/friday_squid_bl_449.html#c6682570

Surely it is “Michael Walli, Sister Megan Rice and Greg Boertje-Obed” that you briefly spoke of.

Quote: “On 28 July 2012, the three activists cut through three fences before reaching a $548m storage bunker. They hung banners, strung up crime-scene tape and hammered off a small chunk of the fortress-like storage facility for uranium material, inside the most secure part of complex. They painted messages such as “The fruit of justice is peace” and splashed small bottles of human blood on the bunker wall.

Although the protesters set off alarms, they were able to spend more than two hours inside the restricted area before they were caught. When security finally arrived, guards found the three activists singing and offering to break bread with them. The protesters reportedly also offered to share a Bible, candles and white roses with the guards.

The Department of Energy’s inspector general wrote a scathing report on the security failures that allowed the activists to reach the bunker, and the security contractor was later fired. Some government officials praised the activists for exposing the facility’s weaknesses. But prosecutors declined to show leniency, instead pursuing serious felony charges.”

http://www.theguardian.com/world/2014/feb/19/nun-jailed-break-in-nuclear-plant

I am not excusing their actions, nor their lack of contrition.
Activists must obey the laws on the books, not just those they approve of, or must expect severe consequences.

sena kavote November 10, 2014 10:31 PM

re: HSA / HSLA / TLA

@Nick p

“What’s HSA reserved for?”

Putting the acronym (candidate) into any search engine will reveal its other/current meaning. Opening a new tab or new browser window for that search should be easy enough, unless there is some security issue I’m not aware of.

Copy pasting that other meaning here would confuse search engines.

But I guess putting it in this form does not, so here it is:

H-ee-aa-l-th_sa-vi-ngs_aa-cc-co-unt

Distant meaning, but still makes searching more difficult.

HSA may have even more meanings and maybe will have.

Dud November 10, 2014 10:34 PM

Quote: PRESS RELEASE UPDATE: DANGEROUS URANIUM HEXAFLUORIDE LEAK WORSE THAN INITIALLY REPORTED, REGULATOR SAYS

“Emergency response and public awareness to a hazardous release from Honeywell depends on the reliable, honest and timely reporting by Honeywell. No government agencies can detect in real time an ongoing release of radioactive Uranium Hexafluoride (UF6) or toxic Hydrogen Fluoride (HF) at the facility”, stated Gail Snyder, Board President of Nuclear Energy Information Service.

In a phone conversation with Roger Hanah of the Nuclear Regulatory Commission (NRC) he conveyed that Honeywell’s Emergency Response Plan includes stationing a person in position to view and observe the incident and that the person was not originally stationed in a location that allowed him/her to see the release of Uranium Hexafluoride (UF6) from the building. An updated NRC Event Report states “the NRC inspection found that Honeywell did not recognize that the HF (Hydrogen Fluoride) released from the FMB (Facilities Management Building) warranted an emergency classification of ALERT. “ As a result Honeywell did not notify the Nuclear Regulatory Commission at that time. The Illinois Environmental Protection Agency which issues the site permit and regulates the process where the leak occurred was not notified of the incident until a few days after it happened…”

““The staggering number of mistakes, inaccuracies, changed stories, and inadequate responses on the part of both Honeywell and the NRC beg for an independent investigation into Honeywell’s ability to run so sensitive a facility, and NRC’s ability to adequately regulate it,” asserts Dave Kraft, Director of NEIS. “NRC’s existing regulatory scheme does not seem capable of protecting the public health and safety in a timely and responsible manner. Illinois’ Congressional Delegation needs to look into this matter,” Kraft states.”

http://neis.org/press-release-update-dangerous-uranium-hexafluoride-leak-worse-than-initially-reported-regulator-says/

(quoted from: http://optimalprediction.com/wp/uranium-hexafluoride-release-tonight-at-honeywell-works-metropolis-il/comment-page-1/#comment-52391 – thank you Rob)

Perhaps instead of “keeping us secure” from elderly & octogenarian nuns, they should be concentrating on critical infrastructure and reestablishing cultures of safety in and around our “nukular” facilities.

Wind power, solar & other means of backup power should be installed at all reactor plants to prevent meltdown & Spent Fuel Pool fires (& the resulting land that will remain horribly & sometimes fatally contaminated until long after mankind joins the dinosaurs).

What good is security if we collectively lose our health from substances that are not only chemically toxic, or radiologically toxic, but Toxic At The Atomic Level (TATAL)???

Curious November 11, 2014 1:24 AM

Apparently the search engine DuckDuckGo removed a search feature some time ago, and now one can’t make a search with time as a search modifier. Why remove such a useful feature?

Pretty sure I am not in error here, but feel free to enlighten me.

Thoth November 11, 2014 2:36 AM

@Nick P, Markus Ottela, Clive Robinson and all
While pondering the OTP traffic analysis I brought up for TFC, I have decided to formalize possible attack vectors against stream-based cryptographic messages, along with possible solutions to sidestep such attacks, in the paper linked below.

Regarding Markus Ottela asking me how to pad a bitstream (if I remember correctly), the posted paper contains some plausible methods.

Link: http://textuploader.com/ovdk

Do comment on the paper. I may send in the paper to IACR as well if all goes well.

Scott "SFITCS" Ferguson November 11, 2014 3:08 AM

@Adjuvant


@Scott: Correct on all points, with caveats.

Those damn caveats. I especially hate the way they shed hair on the sofa.


Secure Share: GNUNet + PSYC: An interesting project. However it doesn’t allow hosting a web site – and while it might be possible to implement that, it would appear that would involve hosting it yourself. True as things currently stand, given this is pre-alpha software based on alpha tech. Freenet-like FreeServices are part of the proposed feature set of upstream project Gnunet. (OK, that’s weak. But after all, the question was whether this is “theoretically possible.” 🙂


As far as in-place-and-you-can-use-it-today for distributed offline publishing it does appear Freenet and downstream is the only game in town (save NNTP).

It occurs to me that PirateBay could also be used for distributed offline publishing. Within certain limitations.

Thanks for the other links.

On a slight tangent – you may find this interesting:-
A Debian package for SMTP via Tor (aka SMTorP) using exim4.

Not that I’d recommend ToR for anything other than what it was originally designed for – and then only when used with appropriate OpSec.

@Nick P

Re: Freenet and Java

I also have serious reservations about anything that uses Java. But… appropriate Risk Management and OpSec? i.e. alibi your connection times; don’t use your usual connection, location, intertube habits, or hardware to run Freenet on if you wish to keep your activities secret. All of which is contrary to the desires of the ‘general public’ (and most? journalists).

Is it a perfect solution? No. The Long John Silver principle applies (the only way three people can keep a secret is if two are dead).

Can Freenet be improved on? Yes. I’m sure Toad would welcome your input there.
Auditing would be one improvement, though I do have some concerns about the ability to secure a development project against a determined attacker due to(?) the relationship between size of a project and the difficulty of securing it.
I haven’t given the problem a huge amount of thought though… Ideas from others on that problem would be interesting – thoughts anyone?

Will Freenet become popular? Perhaps. Maybe. I hope not.

Kind regards

Boston Scared November 11, 2014 11:35 AM

How’s that C03 recruitment effort going? There is no way in hell I’d go to Boston to work. The Mass state police seem to think they’re some kind of Junior Delta Force. All the soldier suits and military weapons they plan to use on us civilians, they want that to be secret. If you wind up as the next Aaron Swartz or Ibragim Todashev, there’s no telling what they’ll be shooting you with. Creepiest police state in the country.

Adjuvant November 11, 2014 1:01 PM

@Scott It occurs to me that PirateBay could also be used for distributed offline publishing. Within certain limitations.

Quite right, for publishing in general (I could cite the example of a certain large library having a name that is cognate with a certain book of the Torah, to which Internet Rule 1 — modulo referent — stringently applies outside of the Runet). The question, though, was about “Web pages.” 😉

Adjuvant November 11, 2014 1:40 PM

@Scott: To be sure, I’m being playfully disingenuous: torrents are no further from “Web pages” than Usenet is.

Anura November 11, 2014 2:09 PM

I’d like to wish everyone a happy Armistice/Remembrance/Veterans Day, originally commemorating the end of “The War to End All Wars”, when the world finally grew up and we learned our lesson about how stupid it was to get involved in pointless wars.

Nate November 11, 2014 2:56 PM

Edwin: Hmm. Everykey looks like it relies on a server to store encrypted passwords, an app to talk to that server and download the passwords, and the band itself to store the decryption key for the passwords.

There are two potential points of failure. The first: if the server is offline, you can’t log in (though you can hopefully resort to typing the passwords manually).

The app though is what would really need security vetting. It has to know your plaintext passwords, and it has to make a connection to a remote server. Do we know for sure what it’s actually doing with that data once it’s decrypted? Can they prove that?

Scott "SFITCS" Ferguson November 11, 2014 5:46 PM

@Adjuvant

My last post on this tangential thread 🙂


@Scott: To be sure, I’m being playfully disingenuous: torrents are no further from “Web pages” than Usenet is.

Noted. Likewise the shifting from the OP’s question about websites, and your links to projects that were not, interesting though they proved to be.

Kind regards

tyr November 11, 2014 7:37 PM

Looks like the wish for better funding for Tor worked. I’m not sure Mozilla is the right partner for them, but it won’t hurt as much as scraping by on nothing but virtue.

mrWillis November 11, 2014 9:06 PM

Mooltipass? Clive mentions the 5th element, and I am spinning in confusion.

Not to pick on multipass here, but isn’t that a method of programming wherein each job is accorded its own segment? And isn’t that what we keep saying over and over again? Divide the work up, do your thing well, then combine into one program? Segmented programming allows easy replacement, etc.

But we keep seeing these closed-source all-in-one solutions, or perhaps worse, open-source ones with such convoluted code, full of gremlins and nefariousness.

Thoth November 12, 2014 5:37 AM

For those who want to post secure setup ideas or have theories to publish on cryptography or kleptography, I have decided to set up a space (blog) for publication. You may drop me a notice here on Schneier’s blog or contact me to get your materials published there, but I do not promise web traffic. I am planning to possibly expand it into a full-fledged site and probably add some form of secure publication mechanism if opportunities arise.

Link: http://simplecrypt.blogspot.sg

Clive Robinson November 12, 2014 9:56 AM

@ Thoth,

One immediate security fix to think about is to move from unicast TCP transmission to something approaching a broadcast system via the likes of UDP, where the datagrams are encrypted and of fixed length.

Thus if a node sends ten identical encrypted datagrams to ten apparently random IP-addressed nodes, only one of the nodes will be able to decrypt and act on it.

However the nodes act as store-and-forward relays, and wait a random period before acting on a given datagram.

Whilst far from ideal for an interactive service like Tor, it would be of great benefit to non-interactive protocols like email.

With a little further work nodes could end up sending as many datagrams as they actually receive; thus putting both an interactive and a non-interactive protocol on the same node can also be used to help hide “hidden servers” on any given node.

Thus “traffic shaping”, if carefully used, can make traffic analysis considerably harder for an observer, even one who is apparently omnipotent in link monitoring.
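
For anyone who wants to play with the idea, here is a minimal Python sketch of just the fan-out step (PyNaCl assumed; the datagram size, key handling and addresses are purely illustrative, and the store-and-forward random delay is left out):

```python
# Sketch: pad every message to a fixed size, encrypt it, and send the very
# same datagram to several addresses, only one of which holds the key.
import os
import random
import socket

from nacl.secret import SecretBox  # pip install pynacl

DATAGRAM_SIZE = 1024
PAYLOAD_SIZE = DATAGRAM_SIZE - SecretBox.NONCE_SIZE - SecretBox.MACBYTES

def pad(msg: bytes) -> bytes:
    """Length-prefix the message and fill with random bytes to a fixed size."""
    if len(msg) > PAYLOAD_SIZE - 2:
        raise ValueError("message too long for one datagram")
    return len(msg).to_bytes(2, "big") + msg + os.urandom(PAYLOAD_SIZE - 2 - len(msg))

def send_fanout(msg: bytes, real_peer, decoys, key: bytes) -> None:
    """Send one fixed-length encrypted datagram to the real peer and the decoys."""
    box = SecretBox(key)
    ciphertext = bytes(box.encrypt(pad(msg)))   # nonce + padded payload + MAC
    targets = decoys + [real_peer]
    random.shuffle(targets)                     # real recipient not distinguishable by order
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for addr in targets:
        sock.sendto(ciphertext, addr)           # identical bytes go to every address
    sock.close()
```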

Thoth November 12, 2014 10:12 AM

@Clive Robinson
Indeed a very good idea: a sort of “scatter the ashes into the wind and see who can grab them” scenario.

I guess a more Tor-like secure communications method would require a rewrite of everything that is out there. Would TFC chat, with its custom hardware and bold use of OTP, sending out fixed-length multicast messages in some form of timed burst (or random short bursts), be the next step?

Anura November 12, 2014 12:37 PM

@GHI

You might be able to improve the scalability of bitmessage, but you’ll never be able to solve it completely. I have another system that doesn’t have the scalability issues of bitmessage and allows messages to be sent without the sender or the recipient being known, but I never implemented it because I don’t like the encryption libraries available.

It’s limited, since it doesn’t allow unsolicited messages, but the idea is that you look for messages from a specific sender by computing the identifier for the message. This is based on a shared secret derived from a Diffie-Hellman key exchange and either a seed for the first message you exchange, or the key material from the previous message. Each message in my scheme has a fixed length, and if messages exceed that length they are chained, in order to obscure details about the message. It is designed so that actual messages are indistinguishable from completely random messages.
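
A rough sketch of how that identifier chaining could look, using only HMAC-SHA-256 from the standard library (the label strings and derivation layout are made up for illustration, not the actual construction):

```python
# Each message's lookup identifier and key are derived from the DH shared
# secret plus either an agreed seed (first message) or the previous message's
# key material, so the reader can compute the next identifier to search for.
import hashlib
import hmac

def derive(shared_secret: bytes, prev_material: bytes):
    """Return (identifier, message_key, next_material) for one message."""
    identifier    = hmac.new(shared_secret, b"id"   + prev_material, hashlib.sha256).digest()
    message_key   = hmac.new(shared_secret, b"key"  + prev_material, hashlib.sha256).digest()
    next_material = hmac.new(shared_secret, b"next" + prev_material, hashlib.sha256).digest()
    return identifier, message_key, next_material

# First message uses the agreed seed; later messages chain off earlier material:
#   id1, k1, m1 = derive(shared_secret, seed)
#   id2, k2, m2 = derive(shared_secret, m1)
```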

Uncle Bob November 12, 2014 3:24 PM

Big Banks Are Fined $4.25 Billion in Foreign Exchange Scandal
http://dealbook.nytimes.com/2014/11/12/british-and-u-s-regulators-fine-big-banks-3-16-billion-in-foreign-exchange-scandal/?_r=0

British, American and Swiss regulators fined some of the world’s biggest banks a combined $4.25 billion on Wednesday for conspiring to manipulate the foreign currency markets…

What I found interesting is the snippet below from the article, as it seems like the government was listening in on their chat room… or maybe they just processed all the data entered online retroactively?

The British and American regulators released documents detailing conversations among traders in electronic chat rooms that were filled with jargon, incorrect spelling, bad language and typos. One document showed a conversation among three traders — at JPMorgan, Citibank and UBS — discussing whether to let a fourth into their group. “Will he tell rest of desk stuff or god forbin his nyk’” asked one trader, referring to the New York office when he said he was concerned about whether the new participant could be trusted.

“That’s really imp[ortant] q[uestion]” another answered, according to a transcript redacted by the trading commission. The trader added that he did not want “other numpty’s in mkt to know,” referring to the slang for stupid person.

“Is he gonna protect us? Like we protect each other against our own branches?” asked the second trader.

Clive Robinson November 12, 2014 5:33 PM

@ Anura,

It’s limited, since it doesn’t allow unsolicited messages,

I would regard this as a separate issue.

The reason being that it is currently considered by many to be impossible to achieve without a reliable side channel, which is generally incompatible with an anonymous network.

I have been thinking on ways to do this using a hidden distributed directory service, but it has issues which affect anonymous behaviour if a rogue node is put into the system.

It’s a tough problem to think about and I suspect that it will come down to either being not possible, or trivially simple due to some method not yet thought of… such is the way with these things.

Anura November 12, 2014 7:52 PM

@Clive

Well, you could combine two systems. Use a bitmessage-like protocol to do contact requests and exchange keys. Since those messages would be a lot less frequent, it wouldn’t need to scale as well as the messaging system itself. The good thing about it is that the protocols remain independent: for the messaging protocol, you can distribute keys offline without relying on the contact request protocol (mine was to be primarily based on Curve25519, but to support exponentiation-based Diffie-Hellman in addition, as well as symmetric secrets that can be exchanged offline). For the contact request protocol, you can use it to do anonymous key exchange for any system.
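
For illustration, a minimal PyNaCl sketch of the Curve25519 exchange that such a contact-request protocol would carry; in the real scheme the public keys would travel inside the anonymous request messages rather than being handed over directly:

```python
# Both sides combine their own private key with the other's public key and
# arrive at the same shared secret, usable as seed material for the
# message-identifier scheme sketched earlier.
from nacl.public import PrivateKey, Box  # pip install pynacl

alice_priv = PrivateKey.generate()
bob_priv   = PrivateKey.generate()

alice_box = Box(alice_priv, bob_priv.public_key)
bob_box   = Box(bob_priv, alice_priv.public_key)

assert alice_box.shared_key() == bob_box.shared_key()
```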

Thoth November 12, 2014 9:14 PM

@Adjuvant

Thanks. The reality rift between academic ideas and the realities of deployment is huge. From what I have deployed so far, all those nice-sounding theories usually fall flat before they even hit the customer’s gate.

One of the enemies of security, as we all know, is complexity itself; complex theories written in esoteric language for a select few simply make them obscure, and they never get properly researched and deployed into real-world security systems.

@Nick P, @Clive Robinson & all
If you have any precious advice, posts or papers that deserve archiving, I would be glad to accept them into my newly made archive blog. They get direct entry into the archive, as I trust the quality of your work 🙂.

Also, I am thinking of a method to validate archived work on an insecure medium (Blogspot, by Google) before I can settle down and create my own archival medium. For now, we can just use PGP/GPG signing of papers, posting your public PGP/GPG key here. A secure backup would also be useful. I am currently still planning the steps out.
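
As a sketch of that workflow (the file name and key ID are placeholders), the signing and verification can be scripted around the standard gpg command line:

```python
# Detached-signature workflow for archived papers, driven from Python.
import subprocess

# Produce a detached, ASCII-armoured signature (paper.pdf.asc) for a paper.
subprocess.run(["gpg", "--armor", "--detach-sign", "paper.pdf"], check=True)

# Anyone holding the author's public key can verify the archived copy.
subprocess.run(["gpg", "--verify", "paper.pdf.asc", "paper.pdf"], check=True)

# The public key itself can be exported and posted here for comparison.
subprocess.run(["gpg", "--armor", "--export", "you@example.org"], check=True)
```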

If we simply sit here and watch (do nothing) and let the status quo of ITSec continue, it will one day consume itself, and the great ideas debated here will likely never appear. It would be a waste…

Thoth November 12, 2014 9:33 PM

@Uncle Bob
These traders had very little understanding of OPSEC, so it’s not surprising they got caught. It is unsurprising, and at the same time unnerving, to see the extent of the spying even on traders in private conversation, or should I say, the extent of surveillance over every aspect of one’s life.

Security should be made simple and convenient, with a certain assurance level according to its role.

Those who want basic security can take the first step by using RetroShare and learning how to store their keys securely: keep the crypto keys on a portable SD card that is put away except when in use, with nothing else stored on it. Another way (if they are willing to learn and willing to spare some money) is to get a CC EAL 5+ or above smartcard or smartcard HSM module. There are no promises that the smartcards or smartcard HSMs are free of government traps, but if properly deployed they would be secure to a certain degree. One common method is to get a few smartcards from different vendors, have each of them generate a keypair, and encrypt a message in a cascading manner through all the smartcards in one’s possession; if any one of these smartcards is honest, there is at least one layer of protection.

Most current PKCS#11-enabled software will not automatically do the cascading crypto-device protection I explained above unless it is done with a custom script or by manual effort.
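
A toy sketch of the cascade principle, with ordinary software keys standing in for the per-card keys; a real deployment would do each layer on-card through PKCS#11 rather than handling keys in software like this:

```python
# Encrypt the message once per key, in sequence, so recovering it needs every
# layer's key; decryption peels the layers in reverse order.
from cryptography.fernet import Fernet  # pip install cryptography

card_keys = [Fernet.generate_key() for _ in range(3)]   # one key per "card"

def cascade_encrypt(message: bytes, keys) -> bytes:
    data = message
    for key in keys:                      # innermost layer first
        data = Fernet(key).encrypt(data)
    return data

def cascade_decrypt(blob: bytes, keys) -> bytes:
    data = blob
    for key in reversed(keys):            # peel layers in reverse order
        data = Fernet(key).decrypt(data)
    return data

ciphertext = cascade_encrypt(b"secret note", card_keys)
assert cascade_decrypt(ciphertext, card_keys) == b"secret note"
```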

Clive Robinson November 13, 2014 12:02 AM

@ Thoth,

With regards to the FBI disconnecting a service to get access to a private dwelling (which is what a hotel room or villa is when occupied), I guess they are relying on the fact that they are allowed to “lie to suspects” to get away with what most would consider an illegal entry.

Now if you or I did such a thing we would, at the least, be guilty of a couple of crimes. The first would be theft, by interfering with a “service” that is under a “supply contract”, thus denying the customer the rights and privileges pertaining to that contract. The second being impersonation to gain advantage, which most would consider fraud.

So we –or the judge issuing the warrant– should have asked about the “crimes” committed to obtain the “probable cause”, and whether the supposed evidence was “fruit of the poisoned tree”.

But there is an underlying issue here that people should ponder, especially if they ever end up on a jury. Consider the –supposed– suspicions of the hotel technician. This sounds altogether like a put-up job, in that I suspect it was not the hotel initiating contact with the FBI (hotels routinely ignore crime committed by high-paying customers). Rather, I suspect it was the FBI contacting the hotel and very probably “leading the technician” by the way they asked questions, in all probability with an implied threat or two. That would render his testimony not freely given but under duress, which is another murky area under law, bordering on eliciting false testimony by extortion.

Thus all the steps the FBI took to get the warrant were in all likelihood perverting the course of justice, which begs serious questions about all the evidence they present, and thus renders it untrustworthy at best. So you should ask yourself whether the FBI should profit from a whole series of crimes or illegal acts. Personally I think they should not; they have more than sufficient powers that they do not need to commit illegal acts to get a conviction. The FBI agents involved are in effect being lazy, and bringing the organisation they work for into significant disrepute, and should have their employment terminated on either count, as any normal employee would expect to happen.

Thoth November 13, 2014 2:05 AM

@Clive Robinson
I agree with your statements.

I always wonder why the good old tactic of tracking a single target, using a single-target persistent-threat model with a legal court warrant authorizing wiretapping and the immense investigative powers they already have, is not their favoured approach, instead of deceitful tactics that may lead to illegal possession of possibly distorted evidence.

The only answer I can come up with is that the powers that be are the big boys: they can do whatever they want, and they can squeeze us to death under their thumbs like ants if they want to. We are just being toyed with by them. They simply want more. They are never satisfied. And… they are pathetically lazy. This applies to the NSA, CIA, GCHQ, BND and so on and so forth.

Thoth November 13, 2014 6:58 AM

@sandra
Would it be possible for us to request code reviews to verify the assurance levels of these security software suites? Much security software has been shown to be vulnerable to software- and hardware-based attacks, and it would be nice if we could be assured otherwise.

Clive Robinson November 13, 2014 9:36 AM

OFF Topic :

First off, attackers have finally woken up to using “email drafts” on the likes of Google’s servers:

http://www.theregister.co.uk/2014/11/06/hackers_use_gmail_drafts_as_dead_drops_to_control_malware_bots/

I’m really surprised it’s taken this long; I worked this out as just one of the ways to exfiltrate data some years ago, when I worked out how to use Google and blogs to do “one-time control channels” for botnets.

And an update on BadUSB: it appears that around half the USB controller chips are vulnerable and half may not be,

http://www.wired.com/2014/11/badusb-only-affects-half-of-usbs/

Thus your chance of having one connected to your computer is actually getting close to a certainty –i.e. above 96%– for the average user with five or more USB devices, with close to zero chance of the average user being able to identify them…
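
A quick check of that figure, assuming each device’s controller chip is independently vulnerable with probability one half:

```python
# Chance that at least one of five independent devices carries a vulnerable chip.
p_at_least_one = 1 - 0.5 ** 5
print(p_at_least_one)   # 0.96875, i.e. just under 97%
```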

BoppingAround November 13, 2014 11:46 AM

re: BBC article on privacy from another comment thread here

I have just found similar mentions from 1999.

http://www.businessweek.com/1999/99_14/b3623028.htm

In a November Louis Harris & Associates Inc./Alan F. Westin survey of 1,000 adults, 82% complained they had lost all control over how their personal information is used by companies. Three out of four said businesses asked for too much information.

Looks like nothing much has changed for the better over the last 15 years. Try not to mind the dates mentioned there while reading the article.

Nick P November 13, 2014 12:05 PM

@ Thoth

“If you have any precious advice, posts or papers that deserve archiving, I would be glad to accept them into my newly made archive blog. They get direct entry into the archive, as I trust the quality of your work 🙂.”

I appreciate it. I have a list of many key designs and essays I’ve posted here. Just links with headings in a text file. I’ve been meaning to convert it all into local HTML files for my own blog, site, etc. It’s just gotten… so… big. I can send it to you for your own personal reading, if you want. The design file, the essay file, or both.

“Also, I am thinking of a method to validate archived work on an insecure medium (Blogspot, by Google) before I can settle down and create my own archival medium. For now, we can just use PGP/GPG signing of papers, posting your public PGP/GPG key here. A secure backup would also be useful. I am currently still planning the steps out.”

Mike the Goat and I already co-invented one here. Originally, I wanted a compact signature for the content of blog comments so we didn’t have that PGP mess all over discussions. Mike and I worked out some specific designs. Mine would put it in a META tag or HTML comment that came before BODY or SCRIPT tags. The browser would parse the file, SHA-2 it, verify the signature, and only then send it to rendering/JS engines. Mike forked the concept into his own called Blogsig here. It seems we’re converging on how to represent it (parsable HTML messages), although he’s the only one actually implementing it. He deserves extra credit there.

The content itself can be created on or generated by as secure a machine as one likes. The reason for this concept is that it’s better to authenticate content than links, esp with corrupt CA’s etc. The author of the content would also be trusted (the user). That means you can throw together (EAL3-4) the signing portion. The verification tool needs to be medium-high robustness in how it handles the possibly malicious data. Both would be small with the key crypto primitives already in NaCl library. A data diode, pump, CD-R’s, etc can be used to push content out of an insecure, but validated, machine to the untrusted network or servers.
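
As a rough illustration of that flow (the comment format and helper names here are made up, not Blogsig’s actual format), using NaCl’s Ed25519 signing over a SHA-256 of the body:

```python
# Hash the page body, sign the hash, and prepend the signature as an HTML
# comment; the verifier checks it before handing the body to any renderer.
import hashlib

from nacl.signing import SigningKey, VerifyKey  # pip install pynacl

SIG_PREFIX = "<!-- blogsig:"
SIG_SUFFIX = " -->"

def sign_page(body: str, signing_key: SigningKey) -> str:
    """Prepend a detached Ed25519 signature over the SHA-256 of the body."""
    digest = hashlib.sha256(body.encode()).digest()
    sig = signing_key.sign(digest).signature.hex()
    return f"{SIG_PREFIX}{sig}{SIG_SUFFIX}\n{body}"

def verify_page(page: str, verify_key: VerifyKey) -> str:
    """Check the signature and only then hand the body on for rendering."""
    header, body = page.split("\n", 1)
    sig = bytes.fromhex(header[len(SIG_PREFIX):-len(SIG_SUFFIX)])
    digest = hashlib.sha256(body.encode()).digest()
    verify_key.verify(digest, sig)        # raises BadSignatureError on tampering
    return body

sk = SigningKey.generate()
page = sign_page("<body>Hello</body>", sk)
assert verify_page(page, sk.verify_key) == "<body>Hello</body>"
```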

Apples November 13, 2014 6:40 PM

http://www.marketwatch.com/story/americans-cellphones-targeted-in-secret-spy-program-2014-11-13

quote

The Justice Department is scooping up data from thousands of cellphones through fake communications towers deployed on airplanes, a high-tech hunt for criminal suspects that is snagging a large number of innocent Americans, according to people familiar with the operations.

The U.S. Marshals Service program, which became fully functional around 2007, operates Cessna aircraft from at least five metropolitan-area airports, with a flying range covering most of the U.S. population, according to people familiar with the program.

Planes are equipped with devices–some known as “dirtboxes” to law-enforcement officials because of the initials of the Boeing Co. unit that produces them–which mimic cell towers of large telecommunications firms and trick cellphones into reporting their unique registration information.

The technology in the two-foot-square device enables investigators to scoop data from tens of thousands of cellphones in a single flight, collecting their identifying information and general location, these people said.

People with knowledge of the program wouldn’t discuss the frequency or duration of such flights, but said they take place on a regular basis.

A Justice Department official would neither confirm nor deny the existence of such a program. The official said discussion of such matters would allow criminal suspects or foreign powers to determine U.S. surveillance capabilities. Justice Department agencies comply with federal law, including by seeking court approval, the official said.

endquote

More at the paywalled WSJ, according to the link.

Thoth November 13, 2014 9:01 PM

@Nick P
You can send me a text file list of all the links so that I can spend more time reading than searching. Thanks. Looking forward to more…

Nick P November 14, 2014 11:40 AM

@ Thoth

Put a disposable email address up on your new blog so I can send you the files. Additionally, you’ll have my main email address and can send me your preferred email. Others here could do the same thing if they chose.
