Friday Squid Blogging: WTF Evolution Features a Squid

I have always liked the “WTF, Evolution?” blog. Consistently funny, but no squid. But now they have a bit on the bobtail squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on June 27, 2014 at 4:55 PM • 146 Comments

Comments

Jacob June 27, 2014 5:57 PM

Just to put on-line service providers’ claims regarding privacy protection in proper perspective, this is from Facebook’s Deputy General Counsel, writing about a NY court demand to hand over ALL past and present data of 381 people:

“We fought forcefully against these 381 requests and were told by a lower court that as an online service provider we didn’t even have the legal standing to contest the warrants. We complied only after the appeals court denied our application to stay this ruling, and after the prosecutor filed a motion to find us in criminal contempt.”

The authorities were looking for disability fraud. Only 62 were eventually charged.

Again: “as an online service provider we didn’t even have the legal standing to contest the warrants”

http://newsroom.fb.com/news/2014/06/fighting-bulk-search-warrants-in-court/

Mike the goat June 27, 2014 6:44 PM

Jacob: this is a very worrying trend, but unsurprising given how much people volunteer to the world on social media. The privacy features that Facebook added years back have only made the problem worse, as they give people a false sense of security – “only my friends can see these posts; and I trust my friends” &c. Not only would the NSA likely have an almost real-time feed of most social network updates, but people have been repeatedly caught out when FB decides to change some aspect of their privacy settings and everyone’s settings get reverted to defaults. This is pretty poor, but has happened at least three times in the past decade.

Nick P June 27, 2014 7:00 PM

@ Robert

The preoccupation with squid is a mystery each reader must uncover for themselves. As for why one every Friday, that’s a tradition and a unique opportunity: readers are free to post random security stories and discuss them in the Friday Squid threads. It’s a nice break from other blog posts where we stay on topic.

Marc June 27, 2014 7:33 PM

@Robert – Were Bruce ever to stop Friday Squid Blogging, it would be a calamity.

Spaceman Spiff June 27, 2014 11:04 PM

@Robert
We all need some interests outside of our profession, just to stay sane. Bruce’s is squids… Me, I like to play old-time and bluegrass mandolin (not well – not badly, but who cares?) and collect classic (pre-1940’s, especially silent) movies. Just found and watched “Wings”, a 1927 classic silent about two friends, and competitors for the same woman’s love, who flew and fought together as part of the AEF (American Expeditionary Force) in WW-I France. A great film!

Spaceman Spiff June 27, 2014 11:10 PM

PS. This film also starred the classic flame, Clara Bow. The film won an Oscar for Outstanding Picture.

Thomas June 28, 2014 12:06 AM

@Robert
I don’t understand what this squid blog thing is for…
The first rule about squid blog …

@Nick P

Everything old is new again…
Apparently there was a hilarious twist on this ages ago exploiting Windows explorer and samba shares.
If you deleted a file from Windows Explorer it would ‘shell out’ to do the delete, roughly:
sprintf(cmd, "del %s", filename); system(cmd);
If you created a file on a Linux box named “-f c:\” (or somesuch) it would end up running
del -f c:\
and nuke your C drive (note that “:” and “\” are perfectly legal characters in a Unix filename). Easy fix if you have the source…

Also note that the obvious counter to this devastating(ly retro) attack is to separate options from arguments with “--”:
“rm *” is dangerous, “rm -- *” is not.
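The same pitfall is easy to demonstrate today. A small Python sketch (the directory and filename are my own illustration): a file literally named "-f" survives a naive rm because it is parsed as an option, while "--" forces it to be treated as an operand:

```python
import os
import subprocess
import tempfile

# A scratch directory containing a file whose name looks like an option.
workdir = tempfile.mkdtemp()
trapname = "-f"                              # a perfectly legal Unix filename
open(os.path.join(workdir, trapname), "w").close()

# Naive delete: rm parses "-f" as an option, not a filename, so nothing is
# removed (and "rm -f" with no operands exits 0, hiding the problem).
subprocess.run(["rm", trapname], cwd=workdir, check=True)
survived_naive = os.path.exists(os.path.join(workdir, trapname))

# With "--", everything after it is treated as an operand, never an option.
subprocess.run(["rm", "--", trapname], cwd=workdir, check=True)
survived_guarded = os.path.exists(os.path.join(workdir, trapname))

print(survived_naive, survived_guarded)      # True False
```

The safest habit, of course, is to avoid the shell entirely (os.remove / unlink), but when an external command is unavoidable, "--" closes the option-injection hole.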

Gerard van Vooren June 28, 2014 12:35 AM

@ Jacob

Have you seen “The Social Network”? To be short, Facebook doesn’t care about its users.

These messages (IM?) are from Mark Zuckerberg:

Zuck: Yeah so if you ever need info about anyone at Harvard
Zuck: Just ask
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[Redacted Friend’s Name]: What? How’d you manage that one?
Zuck: People just submitted it.
Zuck: I don’t know why.
Zuck: They “trust me”
Zuck: Dumb fucks

If there is one surveillance company besides Google, it is Facebook, and because they don’t care about their users, you had better think twice about what you put on Facebook. I don’t use Facebook (and no smartphone either). But I am still sure that Facebook has a nice profile of me – one I didn’t ask for and am unable to prevent (short of not using the internet at all).

Figureitout June 28, 2014 1:05 AM

Nick P RE: Unix being insecure
–And how many computers are influenced by Unix…? I recall good ole OpenBSD using pretty much all the same Unix commands; even Windows PowerShell surprisingly took up some Unix commands. Apple computers have “a bit” of a basis in Unix, don’t they…?

So, create an entirely new OS with commands that compile down through assembly to a binary that hopefully matches the chip you’re using, lol. It’s not going to happen, as no one rich enough is sponsoring it yet and we’re all stuck working other jobs and not on what we want to… a completely secure modern computer.

Thomas
The first rule about squid blog …
–First rule of Fight Club, you know what happens. :p

Gerard van Vooren
–Glad you don’t use facebook or smartphones. I won’t ever again have a facebook, but I may get a smartphone for business purposes…Something that kills me daily is my use of google…I just use it, can’t stop…it’s too good of a search engine…evil f*cks.

65535 June 28, 2014 1:39 AM

@ Mike the Goat

I agree with you.

“…this is a very worrying trend, but unsurprising given how much people volunteer to the world on social media. The privacy features that Facebook added years back have only made the problem worse, as it gives people a false sense of security – “only my friends can see these posts; and I trust my friends” &c. Not only would the NSA likely have an almost real time feed of most social network updates…” – Mike the G.

Yes, people have become lulled into a false sense of security with “Trust my friends” and other “private” FB settings – which are not private and change with FB’s advertising needs. FB may advertise privacy and even argue in court about privacy but FB doesn’t deliver privacy.

Stepping back and looking at the situation, it appears that the government is trying to build a number of legal precedents for bulk warrants with gag orders. This is very troubling.

The current case appears to have some merit because it involves a core group of Social Security mentors who helped create false disability cases for a number of other people (at a cost of $400 million).

http://manhattanda.org/press-release/da-vance-106-defendants-including-80-nypd-and-fdny-retirees-indicted-social-security-d

It’s noble of FB’s general counsel to fight the bulk warrants. But I am not so sure FB did its best to defend its customers.

“We complied only after the appeals court denied our application to stay this ruling, and after the prosecutor filed a motion to find us in criminal contempt.” –FB general counsel

http://newsroom.fb.com/news/2014/06/fighting-bulk-search-warrants-in-court/

I am not sure whether it is the actual FB lawyer, the FB executives, or FB as a corporation that would be exposed to jail time – or whether anybody would be. That clarification would be interesting.

That said, Twitter had about the same problem with the “Occupy Wall Street” warrant where Twitter capitulated.

http://www.nytimes.com/2012/09/15/nyregion/twitter-turns-over-messages-in-occupy-protest-case.html

Again, the government has won at least two significant bulk warrant cases which could be construed as legal precedent for further bulk warrants involving social networks. That is an ugly trend.

@ Benni

Your links and post show the BND using dirty and collusive tactics to further their own ends (and budget resources). Combined with the NSA, it appears to be a toxic mix. This is unacceptable.

Wael June 28, 2014 1:44 AM

@Spaceman Spiff,

Speaking of movies, The Pentagon Wars is an educational and funny movie at the same time. The Bradley “vehicle” took 17+ years and $14+ billion to develop and test. True story. While I understand that research and development can consume that much time and money, this next acquisition is more unbelievable:

Facebook paid $16 billion to acquire WhatsApp! Why is WhatsApp worth this much to Facebook?
Apparently because: WhatsApp has roughly 450 million people who use its service each month, Facebook said.
That works out to around $35 per user. Seems to be a good deal… A lot of information to collect for $35/user isn’t too bad!

If you want to become financially independent, create a service with a user base of 200K+ and show a monotonically increasing user count. Then you can make a few million dollars and retire. The simpler the application, the better. YouTube and WhatsApp are not exactly “rocket science”.

Gerard van Vooren June 28, 2014 2:00 AM

@ 65535

“It’s noble of FB’s general counsel to fight the bulk warrants. But I am not so sure FB did its best to defend its customers.”

When you use Facebook you are a user, not a customer. The customer is the one that pays. They are the advertisers (and NSA). That’s why I said that Facebook doesn’t care about its users.

Chris June 28, 2014 2:26 AM

@Benni thanks for the link, interesting.

@Figureitout, you should absolutely try https://www.startpage.com since it uses Google’s search engine in the backend but without the profiling part.

It has been said before, but these would be good starting points for a Google search engine replacement, in case you missed it.
In my opinion, the search engine problem is solved:

-https://www.startpage.com
-https://www.ixquick.com
-https://duckduckgo.com or http://3g2upl4pq6kufc4m.onion
-https://metager.de/en

The only Google services I still use today (quite rarely, since I haven’t really found any good alternatives yet) are the ones below. Any replacement alternatives are highly welcomed!

-Youtube
-Google Translate
-Google Maps

//Chris

Chris June 28, 2014 2:35 AM

Hmm, I forgot that Google Translate is actually solved as well.
There is a shell script that can pull translations from Google Translate,
and it can be TORified.
//Chris

Jacob June 28, 2014 3:44 AM

I thought it was obvious from my post above that, based on that particular court ruling, any on-line service – be it FB or Lavabit – has no standing to contest a government demand for full customer data and private communications.

And it need not be an NSL or the omnipresent “terrorist” catch-all word – a fraud investigation would suffice to demand and get the private communications of six times the number of criminals actually caught. This is not targeted killing – this is high-altitude bombing with substantial collateral damage.

That FB is evil in general and cannot be trusted with any user data is irrelevant to this case. The important issue here is the court’s opinion regarding data deposited with any on-line vendor.

Again: “as an online service provider we didn’t even have the legal standing to contest the warrants”

OSMAND June 28, 2014 4:25 AM

@Chris: “Google Maps”
On the phone I use OSMAnd (which is offline). On the desktop there is openstreetmap.org. But none of them has any information about traffic congestion 🙁

Clive Robinson June 28, 2014 4:33 AM

@ Jacob, and others with FB interest,

I don’t use, nor have I ever used, FB, and I generally shun sites that have any links – be they on-page or financial etc. – to FB.

Thus you would think that FB knows little or nothing about me.

You’d be wrong. Although exceedingly camera shy, I’ve had to attend official and social events where I’ve been photographed without my consent (I never give it, for me or family), and these photos have ended up either on FB or linked to FB. Thus FB now has a profile on me from linking such info.

Now, I don’t know about others, but I consider their actions to be fraud in the legal sense of gaining pecuniary advantage by deception. Would I be able to sue? No, because I have no standing, as I am neither customer nor user, and US law has no protection on PII except for those like FB that collect it…

So I suspect FB’s legal counsel was just going through the motions – not for the sake of users or customers, but to protect what they regard as their IP….

mike~acker June 28, 2014 5:49 AM

the new “electronic mastercard visa” system makes the same error that troubles the magnetic stripe: it sends customer account data to the merchant in the clear

Suggested Reading: http://files.firstdata.com/downloads/thought-leadership/EMV-Encrypt-Tokenization-WP.PDF

if there is malware running in the merchant’s POST, the customer is toast.

Corrected Process:

Fixing the Point of Sale Terminal (POST)

THINK: when you use your card: you are NOT authorizing ONE transaction: you are giving the merchant INDEFINITE UNRESTRICTED access to your account.

if the merchant is hacked, the card numbers are then sold on the black market. hackers then prepare bogus cards — with real customer numbers — and send “mules” out to purchase high-value items that can be resold

it’s a rough way to scam cash, and the “mules” are the most likely to get caught — not the hackers who compromised the merchants’ systems.

The POST will need to be re-designed to accept customer “Smart Cards”

The Customer Smart Card will need an on-board processor — with PGP

When the customer presents the card it DOES NOT send the customer’s card number to the POST. Instead, the POST will submit an INVOICE to the customer’s card. On customer approval the customer’s card will encrypt the invoice together with authorization for payment to the PCI ( Payment Card Industry Card Service Center ) for processing and forward the cipher text to the POST

Neither the POST nor the merchant’s computer can read the authorizing message because it is PGP encrypted for the PCI service. Therefore the merchant’s POST must forward the authorizing message cipher text to the PCI service center.

On approval the PCI Service Center will return an approval note to the POST and an EFT from the customer’s account to the merchant’s account.

The POST will then print the PAID invoice. The customer picks up the merchandise and the transaction is complete.

The merchant never knows who the customer was: the merchant never has ANY of the customer’s PII data.

Cards are NOT updated. They are DISPOSABLE and are replaced at least once a year — when the PGP signatures are set to expire. Note that PGP signatures can also be REVOKED if the card is lost.

Transactions are Serialized using a Transaction Number ( like a check number ) plus date and time of origination. This to prevent re-use of transactions. A transaction authorizes one payment only not a cash flow.
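Purely as an illustration of the message flow described above (all function names are mine, and an HMAC over base64 JSON with a shared key stands in for the PGP public-key encryption the post calls for – this is a sketch, not a real payment API):

```python
import base64
import hashlib
import hmac
import json

# Toy stand-in for the PCI service's PGP key material.
PCI_KEY = b"pci-demo-key"

def card_authorize(invoice, account_number, txn=1):
    """The smart card encrypts invoice + payment authorization for the PCI
    service center. Output is opaque to the merchant's POST."""
    payload = json.dumps({"invoice": invoice, "account": account_number,
                          "txn": txn}).encode()          # serialized txn number
    blob = base64.b64encode(payload)
    tag = hmac.new(PCI_KEY, blob, hashlib.sha256).hexdigest()
    return {"blob": blob.decode(), "tag": tag}

def pci_service(auth, invoice):
    """Only the PCI center can check and read the authorization."""
    blob = auth["blob"].encode()
    if not hmac.compare_digest(
            hmac.new(PCI_KEY, blob, hashlib.sha256).hexdigest(), auth["tag"]):
        return "DECLINED"
    data = json.loads(base64.b64decode(blob))
    return "PAID" if data["invoice"] == invoice else "DECLINED"

def merchant_pos(invoice, card):
    """The POST submits an invoice to the card, then forwards the opaque
    authorization. It never sees the customer's account number."""
    auth = card(invoice)
    assert "account" not in auth             # merchant holds only ciphertext
    return pci_service(auth, invoice)

card = lambda inv: card_authorize(inv, account_number="4111-demo")
receipt = merchant_pos({"total": 19.99}, card)
print(receipt)   # prints "PAID"
```

The point the toy makes concrete: the merchant handles only the blob/tag pair, so a compromised POST has nothing resellable to leak.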

~~~

mj12 June 28, 2014 7:54 AM

@Figureitout

PowerShell surprisingly took up some Unix commands.

Aren’t those just aliases to Powershell cmdlets?

Something that kills me daily is my use of google…I just use it, can’t stop…it’s too good of a search engine…

Heh. It was exactly the opposite for me when I abandoned it – the results were too shitty. Makes me wonder.

@Chris
Speaking of alternatives to G Maps… OpenStreetMap? It isn’t perfect but, frankly, neither is Google Maps. At least for my town.

AlanS June 28, 2014 8:04 AM

Surprised no one has commented on this week’s SCOTUS opinion on warrantless cell phone searches incident to arrest. The court decided that digital searches are different, and its reasoning suggests it may reject arguments the government has used to defend mass surveillance. Links to discussion here:

SCOTUS & Cell Phone Searches: Digital is Different

See also links at the bottom of the above commentary to additional commentary by Dan Solove and Orin Kerr.

A Failure of Protocol

Also, Smith v. Maryland turned 35 this week (the third-party doctrine case). The court rejected arguments based on this case in reaching the above decision. Smith v. Maryland is also used to defend bulk data collection. The rationale for this week’s decision suggests the administration and the NSA may have some work to do to convince SCOTUS that they are not violating the 4th Amendment.

SCOTUS quote:
“the Founders did not fight a revolution to gain the right to government agency protocols.”

So much for minimization.

Herman June 28, 2014 11:02 AM

I’ve been wondering where this squid meme on Bruce’s blog came from. One of my uncles sometimes wrote letters with a porcupine quill and squid ink.

Pretty soon, kids will have no idea what a porcupine, quill, ink, or paper was.

MikeA June 28, 2014 11:39 AM

Two unrelated comments:

1) It’s not at all clear that the various TLAs actually give a darn what SCOTUS says. The problem of inadmissible evidence only arises if/when one comes to trial.
If one is in Gitmo or dead, or even a senator being blackmailed into approving something shady, admissibility is moot.

2) Squid blog? Why not? Although I admit that I first read the link on that page as being to thecephalopodphage.org

Skeptical June 28, 2014 12:19 PM

@Jacob re Facebook:

I agree that this raises some interesting issues, but viewed closely and in context there’s nothing especially pernicious or novel about it.

A court reviewed an application for search warrants, each naming a particular individual and describing what was to be searched or seized. It reviewed the evidence submitted in support, found that it met the standard of probable cause, and issued the warrants.

They were issued as part of a lengthy investigation into a large scheme to defraud the government. The scheme involved an attorney who, allegedly, aided large numbers of retired police officers and firefighters in fraudulently claiming full disability by faking psychological conditions such as PTSD (the claims would point to 9/11 as a cause).

You can imagine the difficulty of the investigation. Some of the clients may be complicit in the scheme; others may be actually disabled; many may fall into a gray area. Which psychological claims are not only false, but fraudulent? How does one begin to identify them? Of course, there are other challenging aspects as well: extreme caution and diligence is required if you’re going to prosecute, in NYC, a large number of retired NYPD and FDNY claiming PTSD due to the events of 9/11.

Because the investigation was ongoing, and because of concern that evidence would be destroyed, and individuals would flee, if those being investigated became aware of the investigation, the court further ordered the warrants sealed and prohibited Facebook from alerting the individuals named in the warrants.

Facebook lacked standing to challenge the search warrants because it did not have an expectation of privacy with respect to the users’ designated private information. I also presume that Facebook did not know why the warrants were issued when they challenged them (i.e. Facebook did not see the evidence which the judge considered when deciding to issue the warrants), which makes Facebook’s actions appear somewhat strange to me.

The users could certainly challenge the search warrant when prosecuted, however, which is normally how search warrants are challenged. In the U.S., if the government shows up at your door with a search warrant, you have to let them in; you are given the ability to challenge the search warrant in court after the fact.

Is that the best way to do things? I think it’s a question with room for reasonable disagreement, but this isn’t an instance of the government pushing the envelope of the 4th Amendment or attempting mass surveillance.

Incredulous June 28, 2014 1:24 PM

@ Skeptical

I haven’t noticed your posts in a while. It’s good to see you are still around.

I think you are right that the Facebook warrants are less concerning than dragnet searches. At least there were warrants naming individuals. It wasn’t a political crackdown. There was a real crime involved and the targets of the probe, police and firemen, are hardly powerless. These groups generally receive preferential treatment and the police seem to have no problem invading other people’s privacy, so let them have a taste of their own medicine. I do wonder if the warrants will eventually be overturned because of their privileged status.

As people have noted, you have to be an idiot to put any sensitive information on Facebook or use it to log in on other sites. Sure, all your friends are on it. But, as Mom said, would you jump off a bridge if all your friends did? Unfortunately, probably yes. We are a social species and social learning is a powerful force in our thinking. It takes a lot of effort and focus to choose to act differently than your peers.

Nick P June 28, 2014 1:55 PM

Re High assurance (EAL7) development done easier

This paper is a must-read for anyone wanting to know what high assurance looks like today. It’s an excellent methodology that combines ease of use, tight integration between steps, automatic test/code generation, verification of many attributes, and usefulness – all with minimal labor.

A good EAL7 to imitate or build on. They also use a verified processor with a built-in separation kernel. Extra points there. The only things missing are secure I/O and a compiler, but I’ve already posted solutions to those.

Mr C June 28, 2014 2:12 PM

@ Alex:

It’s a wrongly decided case, but not as bad as the press makes it out to be, for two reasons. First, the decision may not be final because the Fifth Amendment issue can, and probably will be, appealed into the U.S. Supreme Court, and there’s a decent chance they’d agree to hear the case. Second, the court got it wrong on a relatively narrow point, so one could avoid this defendant’s fate, at least in most cases, with some common sense. The second point deserves some explanation:

The 11th Circuit’s decision in In re Grand Jury Subpoena Duces Tecum Dated March 25, 2011 (U.S. v Doe), 670 F.3d 1335 (11th Cir. 2012) [http://www.ca11.uscourts.gov/opinions/ops/201112268.pdf] is the case to read concerning encryption and the Fifth Amendment. It’s the highest court to address the issue (as yet), the opinion is very clearly written (you don’t really need to be a lawyer to follow it), and it’s right. To summarize for those who don’t care to read:

The big picture rule is that the Fifth Amendment gives you a right not to perform a particular act if that act would be (1) compelled, (2) potentially incriminating, and (3) testimonial. The first two prongs are pretty straightforward. “Compelled” means compelled. If you’re dumb enough to voluntarily enter your password for the cops, then you’re out of luck. It’s usually a given that whatever the cops want from you is potentially incriminating. (The only major exception is when you’re being compelled to produce proof of someone else’s alleged crime.) “Testimonial” is where interesting stuff happens. The paradigmatic example of “testimonial” is oral utterances in natural languages, like “I killed Jimmy Hoffa.” Clearly, giving up a password is not like that. In and of itself, the password conveys no meaning. However, the act of divulging an encryption password can still be testimonial because it implies certain statements — “File X exists”; “File X is located on that particular computer”; “I have the ability to decrypt the copy of File X on that particular computer”; “I possess a copy of File X”; “I control a copy of File X”; “This is an authentic copy of File X”; etc. So, generally speaking, the act of divulging an encryption password is testimonial.

But there’s an exception. The exception is called the “foregone conclusion” doctrine. Under the foregone conclusion doctrine, an act like divulging an encryption password is not testimonial if the cops already know from other sources everything that would be implied by divulging the password. This strips away the “speech” aspect and leaves only something akin to a demand for the key to a filing cabinet (which the cops can unquestionably get a warrant for). I’ll give two paradigmatic examples: (1) The cops catch you with the computer on and see your kiddie porn on the screen. In this case they already know the file exists, it’s on that particular computer, you have the ability to decrypt it, etc. See In re Boucher, No. 2:06-mj-91, 2009 WL 424718 (D. Vt. Feb. 19, 2009). (2) The cops wiretap a phone conversation in which you say that you have a particular file stored encrypted on your computer. Again, they already know it exists, it’s on that particular computer, you have the ability to decrypt it, etc. See U.S. v. Fricosu, No. 10-cr-00509-REB-02, 2012 WL 182121 (D. Colo. Jan. 23, 2012). In order to invoke the foregone conclusion exception, at least according to the 11th Circuit, “the Government does not have to show that it knows specific file names… [but] it still must show with some reasonable particularity that it seeks a certain file and is aware, based on other information, that (1) the file exists in some specified location, (2) the file is possessed by the target of the subpoena, and (3) the file is authentic.”

I am now finally concluding with the summary of U.S. v. Doe and returning to the Massachusetts case in the article. That case basically got everything right, and then they fudged on the foregone conclusion part. The defendant did something really stupid — he told the cops a bullshit cover story about what was in the encrypted files, and the court rounded that up to admitting what was in there. Details: Defendant was running a fraudulent real estate scheme using two shell corporations he created. He told the cops that he did real estate work for the companies and kept his communications with their (fictitious) Russian owners encrypted because the Russians like encryption. The cops actually learned nothing from this statement — they ended the conversation with the same suspicions that something relating to the fraudulent scheme was in the encrypted files that they started with, but gained no clearer picture of what that something was because they knew he was full of shit about the Russians. (I.e., since they knew the communications with the made-up Russians didn’t really exist, they still could not name or describe any specific file they thought was saved there.) The court got this one wrong because the cops really did not have a reasonably particular idea of what was in the encrypted files. But note that the defendant dug his own grave by telling them the cover story. If he had just stayed silent, it would have been clear that the cops had no idea what was in the encrypted files, and there would have been no statement for the court to misconstrue into a basis for invoking the foregone conclusion doctrine.

So, Mr. C’s take-home lessons:
1. STAY SILENT. Don’t admit that there is anything other than empty space in the encrypted volume. Don’t admit any of the other implied statements that give divulging a password its testimonial nature (existence, location, possession, control, ability to decrypt, authenticity). Don’t say anything that could be deliberately misconstrued as admitting any of those things (including a bullshit cover story like the one in this case). Really, just stay totally silent.
2. Use an encryption scheme in which an encrypted file is indistinguishable from empty space (e.g., truecrypt, LUKS). Your lawyer can plausibly argue that there are no files there at all.
3. Don’t divulge the encryption key unless and until you see a written order from a judge compelling you to do so. If you do it upon a request from the cops, no matter how much they badgered and threatened and even beat you, it’s a safe bet they will promptly turn around and say you did it voluntarily.
4. Do not get caught with the computer on. Anything the cops see on the screen, they can force you to decrypt later.
5. Don’t give anyone else the password to decrypt something that may incriminate you but not them. They can’t invoke the Fifth Amendment against divulging the password. Likewise, think twice before accepting the ability to decrypt something that incriminates someone else but not you.
6. Don’t tell anyone that you possess an encrypted copy of File X (and especially not over telephone or e-mail).
7. Unlocking biometric “security” features like iPad’s fingerprint scanner is not testimonial in nature. You have no Fifth Amendment right to refuse to put your finger on the scanner.
8. Despite my instinctual feeling that a hidden volume (a la truecrypt) should never fool anyone, reading these cases gives me a sense that it actually might fool most cops/prosecutors/judges. (Although you might be violating the court’s order depending on how it’s phrased…)

(Disclaimer: The fact that you are reading this does not make me your lawyer. All of the foregoing is general information and discussion, not legal advice for you. If you act on it and things blow up in your face, that’s your own damn fault. If you want legal advice, go pay a lawyer to listen to the specific details of your situation and offer advice tailored to your specific situation and jurisdiction.)


@ AlanS:

It’s an excellent opinion. It’s lucid and well-written and shows that the Court really “gets” the issues. I really couldn’t be happier with it.

As for the commentators who think they see hints that the Court is ready to strike down the NSA’s domestic spying: I think I see it too, but… The Court is made up of nine people who each have their own opinions and who often compromise and sometimes change their minds. It’s hard to predict what they’re going to do. For instance, although I correctly anticipated the overall result, I expected this to be a 5-4 decision written by Kennedy with a shrill pro-authoritarian dissent by Scalia and another dissent in which Thomas says something weird and stupid. Instead what we got was a unanimous decision written by Roberts with a concurrence in which Alito says something weird and stupid. If I was that far off in my predictions about this case, perhaps it’s unwise for me to follow on their heels with predictions about the next case. Perhaps it’s unwise for those other commentators too.

Chris June 28, 2014 3:50 PM

Hi, I have been here a couple of years now, mostly lurking, reading, building my own opinion, etc.
However, first of all I am not American; I am of Finnish origin and live in South East Asia.
In case anyone cares, some things that strike me are:
-I can see how Snowden can be seen as a hero; however, if you go through his “revelations” I have seen VERY few things that weren’t already known, just saying…
-Moreover, after the revelations very little is focused on how you can actually protect yourself; as a long-time hacker, that is the number one priority:
-For Windows users: the first thing “I would do” would be to create a user called install that has all rights, aka admin. The normal user, however, should under NO CIRCUMSTANCES have EXECUTE rights to more than where the installed files are – ESPECIALLY!!! NOT under the TMP or TEMP environment variable paths. If you search for it you will find nothing. And lo and behold, a lot of software will stop working; in UNIX this is common sense.

Then I would probably use Sandboxie; even though it’s now owned by the bad government, there are some prior hashes that should be OK. Sandboxie is a good thing.
That’s apart from Windows being difficult to protect anyhow, but to give an idea:
I haven’t had any antivirus software in my virtual machines since around 2008, and I do think there is a lot of evidence that they are clean.

I would also want to point out that VMware, and any hypervisor, has an option referred to as non-persistent mode: USE IT!!!

Then, when we come to the Snowden effect, my recommendations would be:
-DNS protection, at least via Tor DNS: block all outgoing DNS at the firewall as a minimum
-A sensitive computer doesn’t need DNS at all: use hosts files and stick to it
-Tor: if you look at the legislation, it often states that the host country is not allowed
to collect data on its own people.
With that in mind, create a squid/privoxy chain that automatically routes
traffic for each destination country through Tor towards that particular country
-Use Tor torrc block rules to make sure that at least all FIVE EYES countries are excluded
-Make sure that your GeoIP files are in fact accurate (a pain in the ass!)
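A minimal torrc sketch of the DNS and country-exclusion points above (the country codes are the Five Eyes members; as the list itself warns, the accuracy of the exclusion depends entirely on Tor's GeoIP files, and the port number is an arbitrary choice):

```text
# Resolve DNS through Tor instead of the ISP resolver
DNSPort 5353
# Keep circuits out of Five Eyes jurisdictions
ExcludeNodes {us},{gb},{ca},{au},{nz}
ExcludeExitNodes {us},{gb},{ca},{au},{nz}
# Make the exclusion hard rather than advisory
StrictNodes 1
```

Point the system resolver (or a firewall redirect) at 127.0.0.1:5353 so that no DNS query leaves the box in the clear.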

I can come up with more, but what I WANT to hear is more about defence, not that the spying is happening; it has been common knowledge for at least 100 years that there are spies.
We need practical solutions to problems, solutions that actually work.
//Chris

Chris June 28, 2014 4:11 PM

Hi, I forgot to mention two very useful technologies.
One is called BitTorrent Sync.
-All I can say is that it is a Dropbox killer; but not only that, it gives
you the right to own your own data and backups.
As a first proof of concept I made a 30 GB star network of 5 computers in 3 different countries
that backs up all my relevant data, and even in alpha it just worked. Thumbs up!
-The second is file sharing. Although that can also be done with BitTorrent Sync using a one-time key,
there is a new piece of software out there called OnionShare. It does need Tor, and especially if you use UNIX/Linux the control port, cookie auth etc. need to work, but it is really nicely done.
It creates a hidden HTTP share using Tor hidden services; you point and click at the file you want to share and it gives you the onion address that you can email to the recipient of your file.
Very nicely presented in a graphical view for people unfamiliar with hidden services and Tor.
I would think, though, that since Tor is needed as a backbone it might fail for the big picture,
at least as of now; but that could change if Tor were put in the install package with the right parameters.
//Chris

Benni June 28, 2014 6:43 PM

New press release from DE-CIX.

It basically says that BND is looking forward to reading more emails from interesting American targets…

http://www.de-cix.net/news-events/latest-news/news/article/de-cix-drives-critical-infrastructure-to-keep-new-york-city-connected/

DE-CIX drives critical infrastructure to keep New York City connected

DE-CIX participates today in a panel discussion that focuses on the importance of robust communications infrastructure in New York City. Titled “NYC as a Connectivity Hub for Technology,

“Our DE-CIX New York Internet exchange is enabling efficient carrier interconnection in a city that is the gateway to North America for global content providers. The DE-CIX New York network currently provides up to 2 Terabits per second of capacity. By the year 2020, DE-CIX New York will be one of the world’s five largest Internet exchanges and will keep the Internet working smoothly in one of the most important cities in the world.”

This was their earlier press release, in which they say:
http://presse.de-cix.net/press-releases/pressemitteilung/article/statement-zu-den-medienberichten-ueber-den-de-cix-vom-25-juni-2014/

„We exclude that any foreign or domestic secret service had access to the internet node operated by us or to the fiber networks associated to that node in the years 2004-2007.”

This is technically in fact true. The german law indeed requires every provider to create a handover point, where BND can copy, without having physical access to the network of the provider itself. The german government says: http://goo.gl/jsh7BZ “Hierzu fordert der BND gemäß § 2 Abs. 1 S. 3 G10 in Frage kommende Telekommunikationsdienstleister auf, an Übergabepunkten gemäß § 27 TKÜV eine vollständige Kopie der Telekommunikationen bereitzustellen.” in english:

“For this, BND demands, according to article 2 paragraph 1, sentence 3 G10, from the telecommunication providers in question to provide a complete copy of the telecommunication data at the handover points according to article 27 TKÜV.”

By “telecommunication providers in question”, the German government means all companies that have an “Auslandsbrückenkopf” or “foreign bridgehead”, i.e. all companies that connect to a foreign network must provide a complete copy of all communications to BND…

De-CIX had admitted before, to a German computer magazine, that some of its communications are monitored by BND:

http://www.heise.de/newsticker/meldung/NSA-Abhoerskandal-PRISM-Internet-Austauschknoten-als-Abhoerziele-1909604.html

At least a portion of the traffic through the Internet node De-CIX is diverted to the BND and other authorities. This was confirmed to heise online by an expert from the environment of the German exchange node. De-CIX is forbidden from revealing how and to what extent German services have access to the data streams in Frankfurt, under the law on restrictions of the post and telecommunication secrecy (G10 law), said Klaus Landefeld, Director of Infrastructure and Networks at the internet industry association eco, which operates De-CIX. From the association’s perspective it is very unfortunate that politicians have left them alone with this gag clause, Landefeld said. “How should De-CIX and eco react when such numbers are published?” Landefeld asked. An official version is lacking: “We are forced into an egg dance.”

That telecommunication providers are forced to provide a complete copy of all communications (which leaves open the question of exactly which data are copied to the authorities) has been admitted by the German government. This is not classified information, since it is even written into German law.

But why on earth does De-CIX now issue a press release where they only state:

„We exclude that any foreign or domestic secret service had access to the internet node operated by us or to the fiber networks associated to that node in the years 2004-2007.”

I mean, they really could have added that they are required by Germany’s G10 law to provide a complete copy to BND.

The only reason not to mention this is that they either
a) see their business model being threatened (which may be a reasonable assumption), or
b) are on the payroll of the German secret service BND (which may also be reasonable).

I wonder now what is true a) or b), or a) and b)?

Skeptical June 28, 2014 7:52 PM

@Mr C: As for the commentators who think they see hints that the Court is ready to strike down the NSA’s domestic spying: I think I see it too, but… The Court is made up of nine people who each have their own opinions and who often compromise and sometimes change their minds. It’s hard to predict what they’re going to do. For instance, although I correctly anticipated the overall result, I expected this to be a 5-4 decision written by Kennedy with a shrill pro-authoritarian dissent by Scalia and another dissent in which Thomas says something weird and stupid.

Scalia’s approach to the 4th Amendment has often resulted in his vote being cast against the government. This is the same judge who found the government’s use of thermal imaging of a house to be a search and who found the use of a dog to sniff a front door on a porch to be a search.

He attempts to follow a textualist approach, which results in sometimes rigid, sometimes jarring, but logically and philosophically coherent jurisprudence.

I write this as someone who disagrees with Scalia strongly on many things, and sometimes finds his opinions infuriating, though always intelligent and with a good seasoning of witty snark.

Now, as to what this case betokens for Smith and the telephone metadata program… I’ll go ahead and stake out my view: not much.

The question in Riley was whether an action that was clearly a search also required a warrant. Under the “search incident to arrest” doctrine, some searches are reasonable even though conducted without a warrant and therefore allowable under the Fourth Amendment.

The question in the NSA cases is mostly about metadata, and whether the collection of it constitutes a search in the first place. That’s a very different question, and a difficult one. This case by contrast couldn’t be much simpler for the Court.

Here is the legal crux of the Court’s opinion in Riley: Absent more precise guidance from the founding era, we generally determine whether to exempt a given type of search from the warrant requirement “by assessing, on the one hand, the degree to which it intrudes upon an individual’s privacy and, on the other, the degree to which it is needed for the promotion of legitimate governmental interests.” Wyoming v. Houghton, 526 U.S. 295, 300 (1999).

The factual findings that, when combined with that principle, result in the Court’s unanimous conclusion, are really strikingly obvious. Smartphones (can) contain lots and lots of private information – the types of things that are at the heart of the Fourth Amendment’s protection. Neither the safety of the arresting officers nor the need to preserve evidence required an inspection of the contents of the smartphone in this case.

This was therefore an extremely easy case for the Court, which did not present any of the crucial questions that would arise in a case as to whether the collection of metadata authorized by the FISC violated the Fourth Amendment.

Secret Police June 28, 2014 8:38 PM

Re: BitTorrent Sync… it’s proprietary, closed-source software. I use Tarsnap and paid $5 in bitcoins; still haven’t run out of room https://www.tarsnap.com/

Tarsnap has actually been audited and the source is open. You can torify it if you want. There’s also Cyphertite, which was developed by OpenBSD devs (https://www.cyphertite.com/); they have a free tier of 8 GB.

DB June 28, 2014 8:44 PM

@ Skeptical

The collection of metadata (and content too) might constitute a seizure… and the searching of it a search… at least in my opinion that’s how it should work out. I am not holding my breath that many others will agree with me, though, certainly not our courts, congress, or president, who are too freaked out by the bogeyman and don’t want to be seen as “soft on crime.” 😉

65535 June 28, 2014 9:26 PM

@ Benni

Interesting post.

“We exclude that any foreign or domestic secret service had access to the internet node operated by us or to the fiber networks associated to that node in the years 2004-2007… I mean, they really could have added that they are required by germany’s g10 law to provide a complete copy to BND…. I wonder now what is true a) or b), or a) and b)?”

It could be a + b + * (the wild card being some ulterior motive). Btw, how does the BND tell the difference between a banking transaction and a secret service communication? The data header in the packet?

@ Nick P

Nice wildcard tricks.

Keep up the good work on a secure HW and SW platform – we will need one.

@ Gerard van Vooren [nice handle].

I will try to make a clear distinction between a “user” of FB and a “customer”, i.e. a “paying” customer, of FB.

I will go one step further and use Bruce Schneier’s observation that the “users” of social sites like FB have become the “product” of said social sites [to be bought and sold].

As Clive has noted, you don’t have to be a user of FB to be in FB’s database. I found that out the hard way. In fact, there are plenty of clever users of FB who just data mine [for free].

Moving to the recent court decisions brought up by Jacob, Alan S, Alex, and others:

I get the feeling that the lower courts are embracing bulk warrants and wiretaps at an alarming rate. How those cases will play out in the high court is a different story. Five-to-Four decisions are very possible – but which way they will fall is the question.

My guess is that until the high court justices actually feel the clammy grip of surveillance on themselves and their families [and the associated risks], they will be apathetic, and will probably side with government mass surveillance. Their reasoning could be that “business does mass surveillance, so why shouldn’t government?” That is just a guess.

Thoth June 28, 2014 10:13 PM

@mike~acker
Your idea of using a customer smartcard (a JavaCard or similar) to host cryptographic computations, and using the merchant terminals only as transport, is a good one. What is missing is how the customer
is going to authorize the transactions. Will the customer type into the keypad of a merchant terminal, or does the smartcard have an embedded screen and keypad built into the tamper-evident card itself? By the way, I have noticed that some banks are already issuing credit cards with a screen and keypad built in as 2FA tokens, but I can’t remember the issuer right now.

@Nick P
Regarding the problem of UNIX commands: the problem probably lies in improper handling of wildcards. If the implementation took care to expand wildcards with limited scope, it should be fine. Proper permission delegation should do the trick.

@Chris
In regards to BitTorrent Sync, I am doubtful of the security it advertises. Its method of broadcasting its presence by doing PEX peer discovery via ‘SHA1(secret):ip:port’ is like saying to the world: here’s a hash of my secret. I personally feel that hashing and sending the secret key, or sending an encoded secret key, over an insecure network is just a bad idea. It would be better to derive a session key from the secret key that can be discarded afterwards. Also, broadcasting over trackers is like asking someone to cache your hashed secrets on a public machine, and exposing your IP and port is an open invitation for probing. What they could have done is a key exchange for a session key first, and then protect network and other sensitive properties (like the port range to use) by encrypting the network streams under the agreed session keys. The above assumes the PEX is done over an insecure network. The closed-source technology and the lack of actual documentation on its inner workings make it hard to tell whether it is really trustworthy and secure. There is another technology called Maidsafe that does the same thing and has published papers on its technology. I have not done extensive reading on Maidsafe though.
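The session-key idea above can be sketched minimally in Python. This is an illustration of the general technique (an HKDF-style extract-and-expand per the RFC 5869 pattern), not BitTorrent Sync's actual protocol; the label string and salt length are arbitrary choices:

```python
import hashlib
import hmac
import os

def derive_session_id(shared_secret: bytes) -> tuple[bytes, bytes]:
    """Announce a per-session identifier instead of a stable SHA1(secret).
    Because the salt is fresh each session, observers never see the same
    value twice and cannot build a lookup table of hashed secrets."""
    salt = os.urandom(16)                                         # public, fresh per session
    prk = hmac.new(salt, shared_secret, hashlib.sha256).digest()  # extract
    session_id = hmac.new(prk, b"peer-discovery\x01",
                          hashlib.sha256).digest()                # expand with a context label
    return salt, session_id

def matches(shared_secret: bytes, salt: bytes, session_id: bytes) -> bool:
    """A peer holding the same secret recomputes and compares in constant time."""
    prk = hmac.new(salt, shared_secret, hashlib.sha256).digest()
    expected = hmac.new(prk, b"peer-discovery\x01", hashlib.sha256).digest()
    return hmac.compare_digest(expected, session_id)
```

A discovery announcement would then carry (salt, session_id) rather than a value that stays constant for the life of the shared secret.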

@Secret Police
I am not familiar with Tarsnap or Cyphertite. Do they support splitting copies of encrypted data across multiple locations for recovery and security? Most backup and recovery mechanisms simply encrypt the data and store it together. If they did a quick first-pass encryption, then split the ciphertext (perhaps with a secret-sharing quorum for recovery), and then did a second-pass encryption and distributed the two-pass ciphertext, it would allow recovery when one of the shares is lost, and any share in the wrong hands would be useless for decryption.

Regarding FB and other social technologies: is it the inevitable path that privacy is rarely a concern during development? I guess so. I was talking to a friend who is trying to create yet another social technology. Both of us work in the data security and cryptography sector as our day job, yet when I asked him about the ciphers and privacy tools he was going to embed, he simply stared at me as though it were something foreign. The takeaway is that not everyone really cares about security and privacy anymore. People expect privacy to come later and convenience to come first.

Recently I requested that a community open-source database maker include a new SQL data type called Password, to ease the use of PBKDF2/bcrypt/scrypt for normal developers, because cryptography modules are a difficult task for most developers. I also requested a specialized authenticate() SQL command to authenticate the Password type. Although the responsibility for password security lies on the developers’ side and not the database makers’ side, too many developers are just getting it wrong, and a native database Password data type with a command to authenticate would be much easier. The database makers set the request aside anyway. Data security is a very difficult task, and privacy and security are always compromised just to get things done.
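For illustration, here is roughly what such a Password type would have to do under the hood: a minimal PBKDF2 sketch using only the Python standard library (the iteration count and salt length are illustrative choices, not values from the comment above):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware
SALT_LEN = 16

def hash_password(password: str) -> bytes:
    """Return salt || derived key; the random salt is stored with the hash."""
    salt = os.urandom(SALT_LEN)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt + dk

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    salt, dk = stored[:SALT_LEN], stored[SALT_LEN:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(dk, candidate)
```

An authenticate() command would simply wrap verify_password over the stored column value; the point is that developers never touch the salt or iteration handling themselves.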

AlanS June 29, 2014 12:08 AM

More on Riley case and SCOTUS:

Agreed that we don’t know where SCOTUS will go on metadata and bulk surveillance. But they appear to have decided that digital is qualitatively and quantitatively different and expressed the view that minimization procedures are insufficient.

There are hints here that they might take a similar approach to metadata and the third-party doctrine (Smith v. Maryland). Just as what people carry on their person is very different from 20 years ago, metadata is very different from what it was when Smith v. Maryland was decided 35 years ago.

Also consider the Jones tracking case from a couple of years ago.

Anyway there are plenty of indications in recent cases that the government will be forced to make a much stronger case for bulk surveillance activities than it has made so far.

Iain Moffat June 29, 2014 5:52 AM

@Nick: As a Linux user and UNIX admin since the 1990s I have fallen foul of most of the wildcard traps by accident or typo. I used to use CAD systems implemented on Apollo Aegis/Domain-OS (pre-POSIX) at university and in my first job. Aegis was designed with full knowledge of the good and bad sides of Unix about a decade later and included some lessons which seem sadly to have been lost in the transition to HP-UX after HP bought Apollo. The Aegis shell was rather safer, the key design difference being that wildcards were expanded by the program (being passed to it unexpanded by the shell as arguments) rather than by the shell. So a UNIX “rm *” in a directory where there is a file called “-rf” sees the arguments “-rf file1 file2 …” passed from the shell, whereas the Apollo “dlf *” sees just “*” and expands it to the list of regular files (including the unwanted “-rf”) in the current directory itself. There were actually separate file and tree deletion commands (dlf and dlt) to provide further protection from unhappy accidents with wildcards.
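The UNIX trap Iain describes is easy to reproduce, along with the two usual defences (a throwaway directory is used; nothing here is Apollo-specific):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
touch file1 file2
touch -- -rf            # a regular file literally named "-rf"
# A bare "rm *" here would expand to "rm -rf file1 file2": the shell
# passes "-rf" as the first argument and rm parses it as options.
# Defence 1: "--" tells rm that option parsing is over.
rm -- *
# Defence 2 (shown on fresh files): prefix the glob with ./ so no
# expanded argument can start with "-".
touch -- -rf file3
rm ./*
ls -A "$dir"
```

The final `ls -A` prints nothing: both deletions removed every file, including the ones named like options.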

Aegis also supported proper access lists as well as owner-group-world permissions. When you get your secure processor built it will need a secure CLI and I would recommend the Apollo manuals at bitsavers.org as part of your research.

Nick P June 29, 2014 10:17 AM

@ Iain

Thanks for the information. Certainly does seem like a safer approach. I like how forward thinking and careful they were. I’ll definitely keep their manuals and stuff in mind.

Another thing I considered is the CMWs like Trusted Xenix and Argus. The people making these tend to look for every spot that might have problems. Trusted Xenix, for instance, eliminated setuid vulnerabilities entirely while maintaining legacy compatibility. Many potentially good ideas.

Of course, I gave up on securing a modern UNIX after I read the UCLA Secure UNIX paper. That was such a simple UNIX, yet had so many issues. Securing a modern one to high robustness would kill performance, compatibility, and the bank account of the managing organization. So, my approach was isolation kernels with Linux in user-space.

Re secure processor

Feel free to help on such projects as your past experience makes you quite beneficial. I’ve posted many papers on tagged, capability, virtualized, and encrypted architectures. Picking the right features to put in the chip for effectiveness and cost is the toughest choice. It’s why I have 3-5 designs concurrently on paper at any given time rather than one or two running on my desk. Decisions, decisions. 😉

Benni June 29, 2014 11:09 AM

Merkel has bought a mobile from BlackBerry with a Secusmart crypto chip.

Spiegel revealed before that NSA can read blackberry:
http://www.spiegel.de/international/world/how-the-nsa-spies-on-smartphones-including-the-blackberry-a-921161.html

Now there are the first reports that NSA officers can read the communications from the new BlackBerry+Secusmart mobiles in the German government:

http://www.heise.de/newsticker/meldung/NSA-soll-auch-Merkels-neues-Handy-abgehoert-haben-2242848.html

The million dollar question is now how the nsa does this?

Figureitout June 29, 2014 1:34 PM

Chris
–Yes, I’ve tried them all; I appreciate the kick in the rear, I need to kick that habit. They look for Tor users, so they probably look for these services riding on their search engine and obfuscating the identity of the searcher. Once they have everyone hooked and have killed off all competition, you know they’ll start charging for searches and taking your data too. Then YouTube…

RE: Practical defensive steps
–They are always needed and I scream about it too. Good pointers.

mj12
Aren’t those just aliases to Powershell cmdlets?
–Yeah, they are. Just didn’t expect it when I tried it.

RE: switching away from google
–I wonder how you did your searches then or what alternative you use. And if you’re wondering, no I’m not connected to google. Applied once.

Nick P June 29, 2014 2:07 PM

@ Benni

“The million dollar question is now how the nsa does this?”

There’s so much attack surface in a smartphone I’m surprised they don’t assume all kinds of people are hacking them. Main chips, microcode, firmware of chip, firmware of devices, privileged software, critical libraries… the list goes on and on. The encrypted phones typically minimize that a bit instead of eliminate it. And they can be subverted via human or extra-legal methods if a black box attack isn’t available. Blackberry also uses centralized security architecture, is hosted in the U.S., and has plenty of features (read attack surface). So many possibilities there. It’s always been hard for me to understand why foreign companies or governments trusted them for private communications.

The solution the German government went with is here. I’m not sure what it offers, as I can’t read German. That it leverages BlackBerry is already a risk factor for me. If I were the German government, I’d have handled it the way the French did: a simple, secure phone designed by a local defense contractor with limited distribution. Germany already has one long-time leader in encrypted phones (Cryptophone) and a skilled security engineering company with phone experience (Sirrix). The former was smart enough to harden the phone OS and release its crypto code for peer review. The latter employs Christian Stüble, who has done excellent work reducing the TCB of systems with microkernels.

The shortcut to private communications would be for the German government to give a large amount of money to these two companies to build a secure phone. Failing that, I’d have gone with the Cryptophone as the default option. It’s the most similar to a BlackBerry, but with low attack surface at the OS level due to years of hardening. If they were about to make a huge purchase, they’d also have the leverage to negotiate some custom improvements as part of the deal.

Instead, the German government seems to be buying an enhanced version of an insecure, possibly subverted platform. That doesn’t seem smart to me.

Nick P June 29, 2014 2:30 PM

@ Iain

I was just skimming the “AEGIS Overview” document here:

http://bitsavers.informatik.uni-stuttgart.de/pdf/apollo/

I didn’t realize the Apollo systems used distributed operating systems. As I read it, I was thinking of Tanenbaum’s Amoeba OS more than a UNIX OS. The document was also very clear on the what and the why. I probably would’ve enjoyed working on one of those systems. Mine were… monolithic and boring.

Note: I also found the VMS Internals books on the site that I might otherwise had to buy. Thanks for the link!

Clive Robinson June 29, 2014 3:07 PM

@ Nick P,

“Instead, the German government seems to be buying an enhanced version of an insecure, possibly subverted platform. That doesn’t seem smart to me.”

It probably is not, so you have to ask who gave “Mummy” her advice, or did she just shop online as it were…

One of the biggest causes of insecurity in systems is “legacy behaviour” from users, followed closely by “not reading the manual” and thus getting the worst, not the best, out of any given solution.

The fact that Mummy had a BlackBerry and now has a different BlackBerry is a fair old “tell-tale”. A decision probably “greased along” by the “ObamaBerry” story from a few years ago, when he first became president and showed himself to be a “CrackBerry” addict.

The simple fact that people need to get very firmly into their heads is that “secure phones are rarely secure”, because you need both ends to be secure and unmonitored, as well as a guarantee that there are no vulnerable middle points…

It takes a lot of self-discipline and skill by both end users to get it right, which is probably why Obama and Co. “go around the world in a SCIF” these days.

For the rest of us it’s best to assume all our calls, texts and smartphone usage are unsecurable and act accordingly. Which is what I tend to do as ingrained habit, much to the annoyance of others 😉

Mike the goat June 29, 2014 4:12 PM

Nick – re commercial applicability of a “hardened PC”: what are your thoughts on the marketability of such a machine? Obviously this is the elephant in the room; such a system will only come into existence if a business case for it can be demonstrated. My naive opinion is that there would be plenty of interest, even if only from the paranoid upper management of Fortune 500s who are concerned about industrial surveillance and IP theft.

Mike the goat June 29, 2014 4:21 PM

Gerard: This is perhaps the most worrying thing about FB. If I recall correctly there was a case where an undercover police officer was unmasked as a result of Facebook’s facial recognition system some time ago when they first introduced the feature (it has been significantly watered down to prevent it being used to essentially identify unknowns).

DB June 29, 2014 5:05 PM

@mike the goat

IMO there is probably no business case for a secure computer for the average user (including so-called “paranoid” upper management) unless it can run all their legacy apps. And that’s the whole problem right there. Running old insecure apps blows away security.

However, if it can be made openly enough and without too much overhead, there may be plenty of interest and a market among hobbyists. It would be a very small market at first and grow over time, after all, this is a new animal. Years later as it grows it can become more mainstream.

Iain Moffat June 29, 2014 5:31 PM

@Nick P: Those of us who had used Apollo for years saw the UNIX migration as a huge step backwards – not just in security – the seamlessness of the network integration knocked spots off what Sun were doing at the time – the Apollo Registry compared with NIS/YP and the networked file system compared with NFS for example. The preferred language was PASCAL rather than C as well. Apollo did make mistakes in usability/security tradeoffs but the basic architecture was a step forward from UNIX or NT at the time.

Another machine from the same generation worth looking at would be the contemporary HP 9000 Series 500, using their proprietary FOCUS stack-oriented CPU described at http://www.openpa.net/systems/hp-9000_520.html and the linked HP Journal articles. It featured a separate core CPU, I/O processors, and memory processors on a shared arbitrated bus, running UNIX as a userspace process on top of a lower-level OS that managed I/O and used hardware memory protection. This matches quite a few of the features on your list, but seems to have been killed off in favour of PA-RISC and native UNIX.

Hope this Helps

Iain

Skeptical June 29, 2014 6:31 PM

@Incredulous: I haven’t noticed your posts in a while. It’s good to see you are still around.

Thanks for the kind words. I’ve been continuing to read when I can, and I’ve been enjoying the posts and discussions.

We are a social species and social learning is a powerful force in our thinking. It takes a lot of effort and focus to choose to act differently than your peers.

Very true. And how your peers behave can also affect one’s economic incentives to be more or less cautious with one’s personal information. For instance, as a particular site becomes important to “networking” with others in one’s industry, the cost of not participating on that site might increase.

@AlanS: But they appear to have decided that digital is qualitatively and quantitatively different and expressed the view that minimization procedures are insufficient.

I don’t see the “digital is different” theme quite as much as others. The important point for the Court was the nature of the information we often store on our phones. I don’t think this has any implications for how they would consider the telephone metadata program. This was a fairly seamless application of existing law to newer devices.

What was one of the questions Scalia rhetorically asked during oral arguments? Something about it seeming absurd that the police, by arresting someone for not wearing a seatbelt, could thereby acquire the power to read that person’s diary.

The minimization procedures offered by the government here were insufficient, and largely beside the point, given that this is a search, given the Court’s analysis of the government’s need to search under the “search incident to arrest” rationale, and given the privacy interest in what we store and access on our phones.

In the context of a question about the power government derives from a broad subpoena by aggregating and connecting lots of otherwise disparate data points, minimization procedures will likely assume greater weight in the final decision.

Bob S. June 29, 2014 7:43 PM

1. The NY prosecutor should have simply offered FB $5 bucks +/- a head for the data, or told them they were building a new marketing app and needed access to the whole enchilada (for free).

2. Only the stupidest crooks are posting their criminal stuff online anymore. They would have been caught easily enough without a multimillion-dollar pixxing match with FB.

3. How did the prosecutor know to zero in on a few hundred people out of thousands upon thousands of disabled? It seems to me the innocent targets have a rights-violation complaint. (Then again, don’t we all?)

Clive Robinson June 29, 2014 7:47 PM

@ Mike the Goat,

The Fortune 500 are aging management stuck with the tools they learned to use some years ago, and frankly they don’t make that good a market.

The sort of companies you should be thinking of are newer engineering companies that have both fresh IP and still-young, involved senior staff. They are more likely to want to use security-hardened systems without legacy compromise.

The problem is the old difference between short-term and long-term thinking in senior management. Those with a short-term view really don’t care about security except as a PR exercise for stockholders, and about how much it eats into profit. Those with a longer-term view realise the price of APTs and see the need for spending on security. The problem is that even with a longer-term view they probably have VC drivers working against them on security costs; angels want profits to ascend as rapidly as possible so they can cash out quickly.

Mike the goat June 29, 2014 8:07 PM

Clive: I guess they often change their tune when their company is humiliated and stock prices take a hit as a result of pwnage. Slightly O/T, but I was looking back at Cloudflare’s Black Hat presentation detailing the 300 Gbit/s DDoS they suffered last year as a result of a DNS amplification attack, launched from a network that presumably didn’t filter source IPs that didn’t belong to it at its edges. Really interesting stuff… turned out to be some fifteen-year-old from London? 🙂

Anony June 29, 2014 9:07 PM

I have been searching around for a while for a central location where I can read through and/or download the Snowden documents, but I can’t find such a website. Can anyone confirm whether there are one or several websites where I can access pretty much all the documents? My suspicion is that the documents are spread over many individual news articles and therefore are not readily available for people wanting to do analysis against the whole (publicly available) set. If there is no current central repository for the Snowden documents, and I therefore wanted to create one to help individuals and journalists find such documents, does anyone have recommendations for doing this, such as only using cloud servers in a specific country or with a specific company?

I know how to browse online anonymously and send email anonymously, but, for example, if I wanted to work with files and connections anonymously, do you have any recommendations? For example, if I set up a site online with these documents and I wanted to connect to my server anonymously (not just encrypt the content of my connection, which is simply SSH), how would I do that?
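For example, something like this is the shape of what I have in mind – a sketch only, assuming Tor is running locally with its SOCKS proxy on the default port 9050, that nc is the OpenBSD netcat variant with SOCKS support, and with a placeholder hostname:

```
# ~/.ssh/config – route this host's SSH traffic through the local
# Tor SOCKS proxy (assumes Tor listens on 127.0.0.1:9050;
# "myserver.example" is a placeholder)
Host myserver.example
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
```

With that in place, a plain `ssh myserver.example` would go out via Tor, so the server never sees my real source address – though the Tor exit node still sees the destination unless the server is a hidden service.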

65535 June 29, 2014 11:21 PM

@ Benni and Nick P

“There’s so much attack surface in a smartphone I’m surprised they don’t assume all kinds of people are hacking them… Blackberry also uses centralized security architecture, is hosted in the U.S., and has plenty of features (read attack surface).” –Nick P

I agree.

“The BlackBerry Dispatcher maintains an SRP connection with the BlackBerry Infrastructure over the Internet. The BlackBerry Dispatcher is responsible for compressing and encrypting and for decrypting and decompressing data that travels over the Internet to and from the devices.”- Wikipedia

[and]

“The BlackBerry Controller monitors the BlackBerry Dispatcher, BlackBerry MDS Connection Service, and the Enterprise Management Web Service, and restarts them if they stop responding.” – Wikipedia

https://en.wikipedia.org/wiki/BlackBerry_Enterprise_Server#BES_Components

“The million dollar question is now how the nsa does this?” –Benni

Good question. I assume it can be done a number of ways, ranging from high access rights and privileges to a CALEA loophole.

I looked into installing BES 4 to 4.1 on Exchange. It required fairly high access rights/privileges to the global catalog (although the global catalog changed over the Server 2003 through 2012 evolution, which is compounded by the different Exchange releases over the same period – compatibility issues and so on). BES appeared to require a Kerberos token [or newer token] with high access rights, if my memory serves me correctly.

[Installation via administrator]

2] “In the BlackBerry Enterprise Server installation files, double-click setup.exe. If your operating system is Windows Server® 2008, run setup.exe as an administrator.”

[Mirroring data base could be an attack vector]

9] “In the Database mirroring options dialog box, if you want to configure database mirroring, select the Add support for database mirroring check box and type the name of the database server that hosts the mirror database in the Name of the mirror database server field. The setup application does not create the BlackBerry Configuration Database on the mirror Microsoft® SQL Server®; it adds a registry key to the computer that includes the name of the mirror Microsoft SQL Server.”

http://docs.blackberry.com/en/admin/deliverables/25747/Install_the_BES_software_1334412_11.jsp

[list of topics on messaging]

http://technet.microsoft.com/en-us/library/gg670940%28v=exchg.141%29.aspx

[CALEA loophole]

‘When the administration floated the proposal in September, The New York Times revealed that among the “fixes” sought by the FBI and other intrusive spy satrapies, were demands that communications’ providers build backdoors into their applications and networks that will give spooks trolling “encrypted e-mail transmitters like BlackBerry, social networking Web sites like Facebook and software that allows direct ‘peer to peer’ messaging like Skype” the means “to intercept and unscramble encrypted messages.”’

http://dissidentvoice.org/2010/11/the-fix-top-fbi-officials-push-silicon-valley-execs-to-embrace-internet-wiretaps/

The attack surface of Blackberry is high.

Nick P June 29, 2014 11:25 PM

@ mike the goat

re Marketing Secure Computers

There have been attempts to market such a machine. The first were companies trying to leverage the assurance of “Trusted” operating systems. Here’s an example from the mid-1990’s that was marketed to protect web applications from a number of problems. In this case, the business case is loss avoidance by choosing a product that avoids most common risks.

The next business argument I saw was for intellectual property. This is a general security argument that applies to more than OS’s. Rome argues CMW’s could allow mutually untrusting parties, like those at a tech incubator, to use the same machine without compromising trade secrets. Arguments are made for databases, medical records, and security-critical services (eg Kerberos) that need extra protection.

Better security is an argument on occasion. 😉 Let’s take KeyKOS, for example. It was highly secure (B3-equivalent), enforced POLA throughout, ran on existing hardware, isolated critical apps, virtualized legacy apps, and provided built-in persistence of system state (unique). It was much easier to argue that such a design would be safer or more reliable, along with being usable for the business. Interestingly enough, the clean interfaces, separation of concerns, and minimal shared state that gave rise to KeyKOS’s advantages led to yet another advantage that’s a selling point in itself.

Extensibility and maintainability you might not have seen coming. A system that’s modular, uses information hiding, avoids pointer magic, limits damage, and uses clean interfaces is much easier to maintain and extend. A few distributed OS’s, such as CTOS, bragged about how easy it was to add new features. SPIN is probably the champion here, as they got most of the benefits by writing it in Modula-3, then added type-safe loading of code into the running kernel for performance. I know AIX and Erlang feature that kind of thing as a selling point, so it should benefit a secure system’s business case.

Reduction of admin work and downtime. IBM tried to make a business argument for SELinux focusing on the reduced administration and downtime associated with vulnerabilities. I don’t have the link on me. Their argument was that the mandatory access controls contained many types of vulnerabilities. Dealing with each vulnerability would’ve taken administration time and carried risk. The exploitation of these vulnerabilities, if resulting in loss of control, would also contribute to downtime. So, they basically just ran the numbers showing what the situation cost with and without MAC, and then used that as an argument for SELinux.

Note: They make similar arguments for mainframes and IBM i with quite a bit of success.

Reduction of amount of equipment. Green Hills loves this argument with their INTEGRITY-based virtualization solutions. They use one of their RTOS products, esp the EAL6+ INTEGRITY-178B, as a low-TCB base layer. They then add virtualization, isolated networking, trusted path, etc. Their argument is that you can replace many separate pieces of physical equipment with one device that securely virtualizes them all. Each PC has a physical, software, and administrative cost. The solution is worth the money if it costs less than what it replaces while doing what it says it does.

Variation of above: BYOD. The idea here is people bring, manage, and support their own devices. A piece of mobile security software, often virtualization, is used to create two instances on the phone: work and personal. The software provides a brick wall between the two, along with features like remote wipe for business side. I’m not saying that I buy the BYOD promise. I’m just saying that it’s infeasible without secure devices. Therefore, BYOD-enabling security software can argue that they save the company whatever their mobile costs are. There’s a significant number of companies working this angle.

Compliance. There’s compliance issues in healthcare, payment industry, federal, financial, and so on. They typically include INFOSEC requirements, especially anything that helps an audit. I know that many companies market their security software or systems for this. I’ll add that making things easier to audit, like some hardened OS’s do, could be an advantage in one of these arguments. The extra integrity protection is definitely an argument.

Anything legal and recorded. Certain secure solutions have very strong integrity arguments (eg X9.95 timestamping). The use of one of these to support a claim before a suspicious party, such as a court, might bolster the argument. You just need to hire an expert who reviews the solution and tells the judge he’d never doubt what it said. Then, your evidence looks a bit better than it did, and maybe even better than the other side’s. I can see benefits in contract negotiations, voting, and so on.

Reducing inconvenience. I save this one for last as I have mixed feelings on it. The idea here is that typical systems create many burdens for users trying to avoid problems, and the eventual compromises are time-consuming. The system, if quite seamless, would avoid most such issues. The main actions would be protected by default with little to no effort on the user’s part. The basic protections it has prevent, contain, or enable speedy recovery from most attacks. If not too expensive, this kind of benefit might be justified. I believe it’s been a selling point for Macs, browser sandboxes, and quite a few other successful products.

So, there’s some off the top of my head. They can be used individually or in combination. Should help sell a solution. Truth is, though, that secure systems of any kind are a niche market of niche markets. Good luck is all I’ll say. NSA and such give a nice business boost but it’s still barely a market. Wait, that leads to the final selling point.

Those pesky, SIGINT-loving, intelligence services. “Are you worried about NSA listening to all your diplomatic phone calls? Is your software backdoored? Is your network their property? Do you have an aversion for Hellfire missiles? Are you concerned that every competitor’s product is certified by NSA (EAL2-4) to be easily hacked? Well, you’re in luck because we have a product certified to their highest levels that can protect you from them! And for only 5 easy payments of $9,999…”

Nick P June 29, 2014 11:29 PM

@ Iain

Yeah, that HP machine design is great for the time period. It used Channel I/O I keep bringing up, too. Thanks for the link.

@ 65535

Nice points on the risk.

Clive Robinson June 29, 2014 11:35 PM

@ Mike the Goat,

    some fifteen year old from London?

Not me Gov, honest 🙂

I’m old enough to be a fifteen-year-old’s grandfather 🙁

By the accounts I saw at the time the attack was not that sophisticated, but then with the underlying infrastructure faults it did not need to be, and we are still vulnerable…

There is some interesting documentation around that shows that IP and later TCP were known to be bad choices at design time, with more robust systems already designed. But TCP/IP happened for political reasons… and now we appear to be stuck with it and its failings…

Briefly, the original ideas for networking started at MIT, RAND and the UK’s National Physical Laboratory (NPL, Teddington, London) in the early and mid 1960s. MIT’s main interest appeared to be the sociological effects, RAND’s nuclear command and control, and neither of them progressed very far; however, NPL developed a practical system, identified many issues and solutions to them, and had a working system. In 1967 the ACM held a conference, and as far as can be told this was the first time the three groups became aware of each other. The NPL work re-kindled the MIT and RAND groups, which had in effect lost any drive two or three years previously. It was the NPL work that showed the practical way forward was packet switching, and they had a quite advanced solution. Through RAND the US DoD realised they had issues, and through ARPA at the end of 1968 issued a contract to BBN.

Work at BBN was initially slow, and the NPL work was taken forward by the UK GPO (later BT) to the European CCITT, where it started the ball rolling on what later became ITU-T X.25. To play catch-up, BBN adopted this early work from NPL, including the later interfaces to PDP computers (BBN eventually spun it off and it became Sprint’s commercial service).

However, this is where the political problems started, with ARPA being unhappy with a European solution; thus BBN came up with a stripped-down, barely working system called the Network Control Protocol, or NCP. It quickly became clear it had significant limitations, and in 1974 it got replaced by TCP.

However, TCP was host-to-host communications and was seen as overkill by node operators; thus in 1978 TCP got split into host-level TCP and network-node-level IP, and it became known as TCP/IP.

However, this effectively backwards development caused a whole load of issues, which were documented in the 1980s initially by John Nagle and later by Van Jacobson and Michael J. Karels. All of which resulted in “bolted on” “quick fixes” that only addressed some of the issues.

Other issues were certainly known about in academic circles but everybody kept quiet with their fingers crossed hoping that nobody would notice. Well eventually attackers did, and we are still stuck with the problems or kludgy work arounds.

Some of these issues had been noted back in the early days of NPL development and the solutions fixed not just individual problems but whole classes of them. The result was even though X.25 had problems it was a more robust system from the get go and arguably still is, even though it is now considered very niche in usage.

Both X.25 and TCP predate the ISO OSI model, but it is fairly clear from which it gained its ideas. The problem with the NCP-TCP-TCP/IP transitions is the lack of distinct layering, and this is a real issue, which even the IETF has admitted in a more recent RFC with the now infamous “layering considered harmful” section – which has been seen by some as a way to sweep many real issues under the carpet with fingers crossed that attackers don’t have a peek for fresh ideas. It has also been claimed very recently that the almost backwards development that gave rise to these issues is down to the malign influence of the NSA… Personally I’ll stick with “stupidity over malign intent” until someone provides a smoking gun, not the current “I think I might smell gun smoke if I sniff hard enough, and if I sniff loud enough others might smell it as well” wishful thinking.

Over at NPL they are still miffed at TCP/IP, and to make it worse they can claim to have independently invented what we now call HTML before Tim Berners-Lee’s 1989 memo to CERN to rework his ENQUIRE system… Such is life for computer innovations in the UK…

Mike the goat June 30, 2014 4:14 AM

Clive: it amazes me that basic network security policies aren’t implemented on carrier grade networks. Now, granted – you may have a customer who actually has a legitimate need to spoof; back when I worked at an ISP we had several quasi-multihomed customers who needed to do some weird stuff, but of course you create rules and then add exceptions where necessary to try and prevent spoofing from occurring. RE X.25: I liked X.25 and its circuit model and I guess a lot of the concepts effectively entered modern networking via frame relay, then ATM, etc. That said, IP has been an absolute boon and it’s its flexibility that has allowed it to continue to remain relevant. I think in the next few years we will be hearing more and more about SCTP and streaming/multicasting in general, as it makes sense for things like IPTV. If we had built the Internet on something inferior then we’d probably have a really fragmented internet where translating routers are used to tunnel between incompatible technologies, and it would all be a bit of a disaster.

Re UNIX: I will have to be the voice of dissent and declare my love for UNIX, and more specifically the UNIX philosophy. Were big concessions made? Yep. The examples that have been given aren’t all that compelling and could easily affect most other operating systems, e.g. wildcard expansion, etc. As with any system, you need to know the idiosyncrasies of the system and how this affects the security of your system before you start developing – and this goes for any platform, any OS, anything at all. There are lots of things I hate about the “UNIX way” but I haven’t seen anyone really do much better. Then, you’ve got the learning curve…

Clive Robinson June 30, 2014 5:20 AM

@ Mike the Goat,

I also like *nix and its philosophy –especially the command line–, but I also know it’s a “Child of the Sixties” that, like Peter Pan, failed to grow up with the changing world.

The question is of course how it should grow up and still retain its character. This is what Dave Cutler tried to do for Microsoft, and arguably he produced something a whole lot worse in NT.

Of course what we all avoid talking about is “resources”, which were at one point or another a significant limitation on what could be achieved. And in one respect we have reached the end of the smooth ride of Moore’s Law: we have seen the end of the plain, and that path has become a rocky road. Thus we have had to change course and stop with the single-track thinking.

The future is parallel computing not just at the silicon level but off the motherboard and out into local area such as clusters, and yet wider still into what the “cloud” might one day become.

However, we need to accept that we need to not just come out of the womb but also be weaned off our existing tools, of which *nix is just one.

BUT importantly, we also need to ensure we don’t throw the baby out with the bath water, which, as @Nick P points out on the odd occasion, we have done with security, where we appear in general to have moved backwards since the Seventies.

AlanS June 30, 2014 7:01 AM

@Skeptical

See link to opinion I posted above.

On pages numbered 17-21 (starting on p. 20 of the PDF) they say cell phones are really computers (i.e. digital computing devices) and are different because of their immense storage capacity, the variety of information stored on them, and their pervasiveness.

On pp. 21-22 they go on to discuss the additional complications of remote storage (“cloud computing”). It is here they reject “protocols” to minimize the extent of what is searched: “the Founders did not fight a revolution to gain the right to government agency protocols.”

The part that has a bearing on metadata comes up on pp. 23-24. The defense of metadata depends on Smith v. Maryland, i.e. it is just pen register data that has already been shared with a third party and is not private. But here they reject this because “logs contain more than phone numbers”. If metadata is to be interpreted as it was 35 years ago, the government has a problem, as its interpretation is much more expansive. The problem with Smith v. Maryland is that it belongs to a different era (pre-Internet, pre-cellphone).

Nice that at the end they relate everything back to Otis and Adams and the start of the revolutionary war, i.e. that what is at stake is a fundamental value.

Nick P June 30, 2014 7:41 AM

@ Clive

re concurrency

Evaluating his four items. Non-blocking I/O is definitely a requirement, as it’s proven its value from 60’s mainframes onward. User-space concurrency can help, yet systems in the past showed putting it into the kernel or hardware works too. Easy messaging has proven to be important. Advanced scheduling is vague, so I can’t comment on it. All I’ll say about scheduling is that being able to meter workloads and prevent them from halting total cluster operations is quite important.
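To illustrate what I mean by the first three, here’s a toy sketch in Python’s asyncio (names and the sleep stand-in are mine, not from any real system):

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    # Non-blocking "I/O": awaiting yields control rather than blocking a thread
    for i in range(3):
        await asyncio.sleep(0)   # stand-in for a real non-blocking read
        await queue.put(i)       # easy messaging: pass data via a queue
    await queue.put(None)        # sentinel: no more messages

async def consumer(queue: asyncio.Queue) -> list:
    results = []
    while (item := await queue.get()) is not None:
        results.append(item * 2)
    return results

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    # User-space concurrency: both coroutines multiplex on one OS thread
    prod = asyncio.create_task(producer(queue))
    cons = asyncio.create_task(consumer(queue))
    await prod
    return await cons

if __name__ == "__main__":
    print(asyncio.run(main()))   # [0, 2, 4]
```

Everything here runs on a single OS thread; the runtime interleaves the coroutines, which is the user-space concurrency point.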

So, nice points but I’m not sure about his conclusion. Tanenbaum might’ve said you just need the right OS and techniques. Most of the older distributed or clustered OS’s were built using old languages without a runtime. And nobody ever accused SGI or Cray of failing to scale. 😉

re Blackphone

Seems they did a good job far as a private Android distribution. I expected they would. The review in the article actually says nothing about how secure the phone is. The attack vectors have to be enumerated, the countermeasures (if present) enumerated, and reviewers must go through the lists to see what might work. One also must analyze anything unique to Blackphone, from changes to OS to their software, to find potential issues there. And even if that’s all good, there’s still the whole subversion risk: U.S. company, located in D.C., and affiliated with another that brags about their Navy SEAL’s on staff. Couldn’t imagine any NSA influence happening there… 😉

mj12 June 30, 2014 8:32 AM

@Figureitout

RE: switching away from google
–I wonder how you did your searches then or what alternative you use. And if you’re wondering, no I’m not connected to google. Applied once.

Dogpile, Yahoo, Scroogle, something else I don’t remember. Now it’s mostly DDG, Startpage, sometimes Blekko. I’m also considering YaCy.

CallMeLateForSupper June 30, 2014 11:19 AM

@Nick P (Blackphone)
“The review in the article actually says nothing about how secure the phone is.”

Very good point that many tend to overlook in this age of breathless speculation. The device was, after all, pre-production, and no review of that says anything about production devices… what paying customers will get. Personally I don’t give a whit about what company X might be working on or might announce or might build; what is important to me is a) what is and b) the extent to which it meets claims.

Many years ago I looked on in amused disbelief as “my” company announced its new PC that could also run code which ran on its mainframes. Customers would, in effect, have a mainframe on their desks. Revolutionary! Think of the possibilities! The hoop-la ended abruptly when the staggering performance hit of software-emulated mainframe instructions became clear. The finished box worked as advertised but was impractical.

Iain Moffat June 30, 2014 2:23 PM

@Clive: I once had the honour of talking to Donald Davies who led the NPL work and I think – certainly in the context of useful applications – they had about a 10 year start on ARPA (Voice over packet working in 1970 at NPL with design starting in 1966 vs. the Network Voice Protocol RFC in 1976).

I have worked with X.25 and IP (and IP over X25!) since the 1980s and my take on it is that IP won because of the open-source BSD code base, decentralised administration and development of free-to-use supporting infrastructure like DNS while X.25 was still largely used within corporate and telco networks and its growth was limited by expensive terminals and centralised switching. The ease of setting up an IP network with international connectivity over leased lines in the early stage of telecommunications liberalisation perhaps was an even bigger factor after 1990.

Technically the biggest deficit in my view is that CCITT X.25 does not adapt well to broadcast networks because of the need to setup a call through a switch even for a single request/response pair of packets – the Amateur Radio AX.25 protocol addressed this to allow peer to peer connections and added a UDP-like unnumbered information frame type to complete the design in that respect.

Some aspects of X.25, such as separating the country, network and subscriber fields in the address, would actually have made address management and routing easier if they had been followed in the IP world (the Amateur Radio community actually did that with their network 44.0.0.0/8). From memory, the total was 14 decimal digits, organised as 4 for internetwork routing and 10 for local address; the only real error, in the days of monopoly national telcos, was to assign 3 country digits and one network digit in the inter-network routing part. Ten networks per country seems a little small with hindsight, but I don’t think 10 digits of end-user addresses would have been used up even in the USA yet, had X.25 taken off!
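As a toy illustration of that split (the field widths follow my from-memory figures above, so treat them as illustrative rather than gospel):

```python
def split_x25_address(addr: str) -> dict:
    # Toy split of a 14-digit X.25-style address into routing fields:
    # 3 digits country + 1 digit network (the internetwork routing part),
    # then 10 digits of subscriber (local) address.
    if len(addr) != 14 or not addr.isdigit():
        raise ValueError("expected 14 decimal digits")
    return {
        "country": addr[:3],
        "network": addr[3],
        "subscriber": addr[4:],
    }

print(split_x25_address("23450123456789"))
```

A router would only ever need the first four digits to forward between networks; everything after that is the destination network’s problem.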

Iain
(feeling a bit nostalgic)

Incredulous June 30, 2014 3:01 PM

Talking about Facebook: Today it was reported that Facebook ran a study on manipulating the emotions of 700,000 users without their informed consent.

The study involved filtering posts with either negative or positive content from people’s feed. It found that filtering positive content led to more negative posting and vice versa.

What didn’t seem to be considered is the effect on vulnerable people of filtering positive content. What if the Vegas shooters were in the positive filtered group? Or other bad actors? People who rely on Facebook for feedback and emotional support could obviously be affected in pernicious ways.

Facebook claims that it doesn’t need consent since their users have already consented to whatever they choose to do to them. I am not sure how the academics behind the study managed to avoid institutional review. According to the standards I was taught in studying experimental psychology the researchers and their institution should be censured by their professional organization for failing to get INFORMED consent. I hope it happens.

Hopefully this gives pause to Facebook users: You and your friends are meat to Facebook, not human beings. Joining Facebook for personal social reasons feeds their bad acts. They are a pernicious organization. Joining known harmful organizations to improve your social life is sad, and ultimately, morally wrong.

Would you join a racist organization because they threw the best parties? I hope not.

Anura June 30, 2014 3:44 PM

Interesting thought: as inequality and poverty grow, businesses can use the media to show more and more positive news, ignoring more and more negative news. Use propaganda to make people feel better so they don’t revolt. I imagine it can only delay it, not stop it entirely. I mean, if you end up going into a depression it can be kind of hard to keep people positive.

Clive Robinson June 30, 2014 3:59 PM

@ Ian,

“Feeling a bit nostalgic” is one of those pleasures that like fine wine needs a few years 😉

The big problem I had with X.25 was more political than technical: telcos had the “our network our bl**dy rules” mentality that meant you had to do the whole “pony and trap” test-house routine at great expense before you could connect to their network, even via a 3X PAD. And each telco would ensure that they had some “feature” that was different from everybody else’s, just so they could keep things closed.

The advantage of IP was the likes of Demon in the UK, who took a totally different approach, actually supplied open-source software (oddly based on AX.25 code), and just let people connect without grief or hassle, usually within a day at the most. BT at that time were taking up to three months to connect “leased lines” to their existing network. Then in London some of the new cable companies were putting in 1 Mbit connections in a day or so for the same price as BT’s cheapest (9600 bit) service, and less than the eye-wateringly expensive 2B+D ISDN, which they would fault if you did not make chargeable calls on it every couple of days (a company I worked for found their “back up” lines never worked when needed, and BT would trot out some meaningless pap every time, until one of their engineers gave me a word to the wise).

So the telcos with their monopoly behaviour killed X.25, and IP-based ISPs using freeware software stole their closed market away from them…

It was stupid of them, but the likes of BT believed they “owned” communications and could do what they pleased. I remember being present at a meeting involving BT senior execs when a Reader from Kingston University told them that from a practical viewpoint their valuation was fraudulent, as BT were claiming the cost of “copper in the ground” as their major asset while ignoring the fact it would cost more than twice the book value to dig it up… It was not long after this that they started working out ways to give that copper real value, with “eventually” the most prominent way being “last mile ADSL”. And even to this day BT still overvalue the copper in the ground…

Iain Moffat June 30, 2014 4:34 PM

@Clive: Strangely BT are plagued by people digging their copper up and selling it, before everyone has stopped using it. I guess the 2x book value estimate included a provision for health, safety and compliance 😉

Iain

georgio June 30, 2014 4:55 PM

Bruce,
A story you haven’t covered, and on which I would like to hear your opinion, is the story of Nokia being extorted for millions in 2007 in order not to leak Symbian encryption keys. How would the extortionist come by those keys:
– did he brute-force them? did he acquire them from some insider? were they leaked by mistake in some piece of code released by Nokia or a third party?
If they were brute-forced, how come no one else was able to discover them?
What is the most likely scenario?
BTW, with Symbian all but a dead platform now, are the keys floating around on the Internet somewhere?

Buck June 30, 2014 5:59 PM

So, the FBI has unfettered access to query on Section 702 data with U.S. Citizen specific selectors? Not only that, but they have been doing it more than anyone can count…

Spy Agencies Disclose Data on ‘Backdoor’ Spying of Americans

“When the FBI says it conducts a substantial number of searches and it has no idea of what the number is, it shows how flawed this system is and the consequences of inadequate oversight,” Wyden said in a statement. “This huge gap in oversight is a problem now, and will only grow as global communications systems become more interconnected.”

https://www.nationaljournal.com/tech/spy-agencies-disclose-data-on-backdoor-spying-of-americans-20140630

Don’t worry, it’s all perfectly legal! 😉

Nick P June 30, 2014 6:09 PM

@ CallMeLateForSupper

I agree that what’s delivered matters more than what’s promised. That’s also why I promise nothing, as I can’t be sure it will be delivered. The ruining of false hopes is also something that can destroy a reputation. People don’t like to be burned twice, even if it was half the media’s fault. The best way to do a secure cellphone development is to determine what the market wants, tell nobody about the development, get the thing built, test it with various types of users under NDA, and then release it. The first user experience should be untainted by hype, and early bugs are knocked out. This is how I usually did secure developments. (Drops box) “Here it is, here’s it doing its job, here’s the mgmt overhead, and here’s the price. Who’s interested?”

re mainframe on a desk

Was it Hercules-based? There was one company that basically repackaged Hercules for a similar purpose. They were smart enough to market it as a way to reduce (not eliminate) mainframe use by moving old, slow, backup, or peak use services over to the emulator. The Hercules system was quite reliable in practice and mainframes are ridiculously expensive. So, it got quite popular. That was until it was sued into the ground by IBM. Seems like your company tried to do a full replacement, which predictably didn’t work.

Benni June 30, 2014 7:07 PM

I have noted before, in the thread about NSA-BND cooperation, that during a lawsuit in Germany it came out that the German secret service BND monitors the communications between Germany and 196 countries:

https://netzpolitik.org/2014/trotz-vorlaeufigem-scheitern-der-klage-in-leipzig-neue-erkenntnisse-bnd-ueberwachte-2010-196-laender-auch-die-usa/

which are here visualized in a nice graphics:

https://twitter.com/SZ_Investigativ/status/478419159387095040

The court noted that among these monitored countries are the US and the UK. Then it was revealed later that Switzerland is also among the countries monitored by the BND:

https://twitter.com/oliver_zihlmann/status/480463731361853440/photo/1

And now the Washington post reveals from Documents of Edward Snowden that NSA is just monitoring…..

…..

193 countries!

http://www.washingtonpost.com/world/national-security/court-gave-nsa-broad-leeway-in-surveillance-documents-show/2014/06/30/32b872ec-fae4-11e3-8176-f2c941cf35f1_story.html

How disappointing. That is three countries fewer than are monitored by the BND!

And that is typical. Compared to the true German spirit of thoroughness, foreign authorities are so lazy, imprecise and forgetful… It is simply unbelievable.

From these slides: http://www.spiegel.de/media/media-34025.pdf

It becomes clear that whereas BND agents struggle each day to read much of the intercepted mails, in order to find out what is NOT of BND’s interests, the agents of NSA seems to employ an imprecise automatic scheme for this.

This is simply terrible. Imagine how many potential terrorists the NSA overlooks when it discards the communications of 3 entire countries that BND agents routinely read.

This is an extremely dangerous situation. What if the next 9/11 is planned in these 3 countries that the NSA does not monitor?

Fortunately, we have German agents on board who save the day in that case.

The heroes of the German secret service BND are reading even those emails that the sloppy NSA spooks do regularly miss.

Nick P June 30, 2014 7:48 PM

@ Benni

“It becomes clear that whereas BND agents struggle each day to read much of the intercepted mail in order to find out what is NOT of interest to the BND, the NSA seems to employ an imprecise automatic scheme for this. This is simply terrible.”

That makes no sense at all. The NSA collects over a billion pieces of intelligence a month on India alone, according to one leak, and it’s doing such massive collection throughout the globe. It’s physically impossible for an agency their size to read every piece of information they collect; I’d be surprised if they could analyze even 1% of that volume. It would also be a waste of resources. They use automation as much as possible to help their analysts focus on what’s probably most relevant. This is also the status quo for data mining in both business and academia, and there are huge German vendors and research efforts leveraging the same principle.

However, if German intelligence is really trying to have people do what the NSA’s machines do, then that throws their efficiency image right out the window, mainly because it’s impossible and they should know better. The NSA’s approach is better, as far as surveillance states go. The best approach, which I’m not sure either is taking, is to combine great SIGINT, HUMINT, and hacking into a consistent, long-term collection process. In my scheme, HUMINT and high-priority target organizations would be the focal point. The electronic side would aid their targeted activities, or get its own targets via their activities, from information gathering to implanting critical machines. Over here, the NSA is almost all SIGINT and the CIA does HUMINT… poorly. The BND’s SIGINT strategy is unworkable, but they do HUMINT much better. The two agencies actually make a good fit as partners, as each one’s strength is the other’s weakness.

The one thing I think we can agree on is that both countries’ SIGINT efforts are vastly wasteful. There’s better uses of the tax money for both offense and defense.

Nick P June 30, 2014 8:36 PM

History and categorization of I/O handling methods

http://people.cs.clemson.edu/~mark/io_hist.html

The link was a good read, as clean-slate efforts might benefit from rethinking I/O. It’s no secret I prefer coprocessors such as Channel I/O. Most designs are going with interrupts, and homebrew chips will typically use PIO. In any case, I figured people might find useful a categorized list of how previous systems handled I/O.

Once again, Burroughs B7700 was ahead of me:

“Reserved locations exist in main memory that define head and tail pointers to I/O device request queues and I/O completion block queues. Queue manipulations by the CPU and I/O modules were atomic actions. Any IOM could handle any device, but a start I/O instruction issued by the CPU begins IOM processing on a specified device queue. IOM processing continues until an error, interrupt, or empty queue. The CPU polls the completion block queue, or, optionally, interrupts could be generated on completion of each request.”

That matches one of my recent designs almost exactly. The design called for two processors, one compute and one I/O, running in parallel, with memory being the main way things were synced. This allowed me to keep the compute processor (running trusted code) totally in control of execution, ignore the effects of interrupts on most actions, and determine how asynchronous execution was, for timing-channel purposes. The key difference is my design calls for strong memory isolation between the two’s working memory, with specific shared regions for I/O.
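For anyone who wants to see the shape of that Burroughs-style handoff, here’s a rough Python simulation of the pattern: a compute side posts request descriptors to a shared device queue, an I/O module drains the queue until it is empty, and completions are polled from a second queue. All names here are illustrative, and the lock stands in for the reserved memory words and atomic queue operations the hardware provides.

```python
# Sketch of the Burroughs-style queue handoff: the compute side enqueues
# I/O request descriptors, an I/O module drains its device queue until
# empty, and the CPU polls the completion block queue.
import threading
from collections import deque

class SharedQueues:
    """Stands in for the reserved main-memory head/tail structures."""
    def __init__(self):
        self.lock = threading.Lock()       # models the atomic queue ops
        self.requests = deque()            # device request queue
        self.completions = deque()         # I/O completion block queue

def cpu_start_io(q, descriptor):
    """CPU 'start I/O': post a request, then continue computing."""
    with q.lock:
        q.requests.append(descriptor)

def iom_service(q):
    """I/O module: process the device queue until it is empty."""
    while True:
        with q.lock:
            if not q.requests:
                return                      # empty queue ends IOM processing
            req = q.requests.popleft()
        result = {"id": req["id"], "status": "done"}   # simulated transfer
        with q.lock:
            q.completions.append(result)

def cpu_poll(q):
    """CPU polls the completion queue (the non-interrupt option)."""
    with q.lock:
        return list(q.completions)

q = SharedQueues()
cpu_start_io(q, {"id": 1, "op": "read", "block": 42})
cpu_start_io(q, {"id": 2, "op": "write", "block": 7})
iom_service(q)
print(cpu_poll(q))   # both requests show up as completed
```

The point of the polling variant is that the compute side stays fully in control of when it looks at I/O state, which is exactly the property the design above wants for timing-channel reasons.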

Benni June 30, 2014 8:55 PM

“That makes no sense at all. The NSA collects over a billion pieces of intelligence a month on India alone according to one leak.”

This is similar with the BND. As for raw data, just one BND surveillance station (and the BND has many of them, several dozen in Germany alone) intercepts 62,000 emails per day: http://www.spiegel.de/media/media-34037.pdf

For these intercepted communications, BND is then using a wordlist and an address list.

It checks whether an email contains the word “bomb,” for example, or some other technical term, and whether the email was sent from an interesting address. Human BND agents then have to read the matching emails.
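Just to illustrate the two-stage selection described above (selectors first, humans second), here’s a toy sketch; the selectors and messages are entirely made up for the example:

```python
# Toy model of wordlist + address-list selection: only flagged intercepts
# go on to a human reader. Selectors and messages are invented.
WORDLIST = {"bomb", "centrifuge"}
ADDRESS_LIST = {"suspect@example.org"}

def flag_for_review(msg):
    """Return True if the message matches a word or address selector."""
    words = set(msg["body"].lower().split())
    return bool(words & WORDLIST) or msg["sender"] in ADDRESS_LIST

intercepts = [
    {"sender": "alice@example.com", "body": "lunch tomorrow?"},
    {"sender": "suspect@example.org", "body": "hello"},
    {"sender": "bob@example.com", "body": "the bomb disposal documentary"},
]
for_human_review = [m for m in intercepts if flag_for_review(m)]
print(len(for_human_review))   # 2 of 3 flagged, false positive included
```

Note the documentary email gets flagged too, which is exactly the false-positive problem that produces 37 million emails for 12 relevant ones.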

In fact, this procedure is completely wasteful and useless.

For example, in 2010, according to the German government, BND agents had to read

http://www.spiegel.de/spiegel/vorab/anwalt-klagt-gegen-durchleuchtung-von-e-mails-durch-den-bnd-a-960203.html

37 million emails.

And the BND proudly found that only 12 of those 37 million emails were relevant to the BND.

These slides (http://www.spiegel.de/media/media-34025.pdf) just confirm that the NSA thinks BND agents read too much email, and that this violates the privacy of email users.

But I think the German agents just want to be thorough and precise, like any typical German authority. It would not surprise me, in fact, if they printed their yearly stash of 37 million emails out on paper and rubber-stamped each with a red mark reading “Has been read by the BND. Found to be of no interest,” just to ensure everything is done correctly.

Clive Robinson June 30, 2014 9:10 PM

@ Iain,

About a year ago I had a chat with some BT engineers –who were actually repairing the damage from a cable theft– about how the copper gets stolen, and it’s real Bonnie and Clyde “snatch and grab” style. But it only works against a small fraction of BT’s in-ground copper.

Essentially, what the criminals do is find sections of cable that sit in shallow surface pipes under long straight sections of road, where the cable is not joined or spliced. They tie a rope carefully around the live cable, the other end tied to the tow hook on a Range Rover or similar vehicle of the sort that used to be favoured for ram raiding. They then cut both ends of the cable with bolt cutters and drive away in a straight line, pulling the cable out of the ground and into the back of the vehicle as they go… Apparently the technique was first seen with cable theft along railway tracks, but it has moved to BT cable as all the low-hanging fruit got ripped up and prevention methods were added (a bit like those living in out-of-the-way villages learning to lock their doors after they have been burgled).

But from quite a few people’s perspective, the thieves have got worse, ripping plaques and the like off war memorials, graves, and municipal fixtures, and even taking short sections of water and gas pipe from the outside of people’s houses…

I’m assuming that, as we hear less about cable theft going on, either preventions are working or it’s no longer as newsworthy. For railway cable theft, though, I suspect the former rather than the latter, as people tend to gripe vociferously when their journey to work gets significantly disrupted, and that always remains topical as an easy way to “bash politicos” etc.

Speaking of which… guess what I’ve been told from other sources? Apparently a significant part of the copper and aluminium problem is down to the likes of the “Vampire Squid”. They have bought up bonded metal stores and are exploiting the rules to charge much larger storage fees. The knock-on effect is that the cost gets added to the ingot price, but the deliberate delays caused by the effective hoarding have created a scarcity price rise, which has attracted other metal speculators who have moved out of the traditional precious-metals markets… thus the price of copper and aluminium has been inflated, making cable and similar thefts more profitable.

Iain Moffat June 30, 2014 9:28 PM

@Nick: You might want to consider true dual port memory as a further step in I/O isolation – e.g.

http://www.idt.com/products/memory-logic/multi-port-memories/asynchronous-dual-port-rams

There is a functional description at:

http://www.idt.com/document/dst/713040-datasheet

I often saw it used in the 1990s, when PC ISA cards had a separate (often much more powerful) CPU on board.

They can be accessed asynchronously provided both sides are prevented from writing the same location at the same time, and they can be quite large (512K x 36 for the 70T653M). Ideally you partition the space into CPU-writable and IO-writable segments and define control locations for the host and IO CPUs to communicate. Some of them have hardware support for semaphores and interrupts in both directions.

As with main memory the CPU writeable dual port space can be partitioned between processes by the MMU.

Iain

Iain Moffat June 30, 2014 9:33 PM

Clive

It is still going on if you read the BT service status pages. It’s nothing new – I can remember my dad telling me of copper theft from church roofs in the 1930s.

Iain

Benni June 30, 2014 9:33 PM

This NSA slide is interesting:
http://apps.washingtonpost.com/g/page/world/list-of-foreign-governments-and-organizations-authorized-for-surveillance/1133/

where they get the authorization to spy on the European Central Bank, the World Bank Group, the International Atomic Energy Agency, OPEC, and so on.

The European Bank for Reconstruction and Development, the Financial Action Task Force, and the Inter-American Development Bank are also funny targets.

I mean, institutions like this here:

http://www.ebrd.com/pages/workingwithus/projects/what.shtml

What the heck? This has nothing to do with terror. It is mostly concerned with development projects in the poorer regions of Europe.

What interest does the US government have in development projects in Poland, for example?

Does the US government think it will fall so quickly that it could soon itself be a client of the European development bank?

This does not make much sense to me.

Benni June 30, 2014 10:01 PM

This is also interesting:

http://apps.washingtonpost.com/g/page/world/fisa-amendments-act-of-2008-section-702-summary-document/1141/

How the NSA tries to detect whether you are a US person:

“These steps are designed to prevent NSA from collecting domestic communications as well as any targeting of persons who are inside the United States.

NSA looks at registers of roaming cell phones, IP addresses of Internet communications and similar technical information, but in many cases, it will be the content of a communication that indicates that a target has entered the United States.”

Do I understand this right? So use a proxy in the US, and the stupid NSA analysts following “the book” think they have a US person. That would be great! Perhaps getting a green card helps too…

and then there is this part: “but in many cases, it will be the content of a communication that indicates that a target has entered the United States.”

Finally, this reassures me that NSA agents sometimes have to read email content too…

That’s fine; otherwise they would look really sloppy compared with their BND counterparts…

Thoth June 30, 2014 11:08 PM

Here are some thoughts I have put together recently. It’s rather abstract, though; constructive comments and ideas to help with the design are welcome.

An Overview and Abstract Design for Secure Secret Storage on USB Devices/Portable Devices.

  • A USB/Portable device for Secrets management.
  • Secrets defined as passwords, passphrases, PINS, crypto keys, crypto certs.
  • Device must be encased in a tamper-resistant/evident transparent casing.
  • Read/Write/Core Server LED lights. 3 of them.
  • NVRAM (~4GB) to store user Secrets (userspace) and Processing Unit and Core bootloader.
  • Core bootloader and Core Server software stored in NVRAM (systemspace).
  • Core bootloader contains client software to interact with device while core server contains server software to respond to requests (similar to JavaCard design).
  • Core bootloader restricts and controls all I/O. All I/O is disabled except via the client software to the server, to prevent reading or writing. Attempts to read or write data directly would not be effective.
  • Client software is always stored within the NVRAM’s systemspace, including a bootable ISO workspace and normal executables for different OSes. No installation required. The bootable ISO workspace is for instances where unknown host computers are used to access secrets.
  • Core bootloader would dynamically create virtual volumes (like Truecrypt) and place the client softwares and ISO within the volumes for user to access on computer.
  • Upon receiving power, core bootloader would boot first and then the core server.
  • Userspace data are encrypted via PBE key derived from user password which is not stored.
  • Client software allows interaction with system properties, secrets and perform system update. In the ISO workspace mode, the client software is embedded into the bootable ISO OS for convenience.
  • A key pair derived from a DHIES scheme is used to sign and verify trusted system updates. One of the keys is kept encrypted as a secret in the userspace, protected by the user password like any other secret. The secondary key is held by the update server (run your own or use others). Users would have to use their password to decrypt the key on the device to allow an update to be performed.
  • User secrets may individually be protected by a separate password which would encrypt the particular secret again under the separate password for higher security margins.
  • Client software does not contain logic for critical operations. It simply shuttles requests between the user and the device’s core server.
  • When a user logs in via the client software, metadata of secrets (title, subject line, timestamp, comments, etc.) is partially decrypted and sent to the client software for the user to view. The actual secrets (passwords, certs, keys) are not fully decrypted and loaded into the client software until the user specifically asks for a particular key/cert/password to be released for viewing or use. Once the user closes the window or selects another secret to view, it is wiped from the host computer’s memory.
  • If a secret is doubly protected as mentioned above, user must send the secret specific password to decrypt it again.
  • Secret sharing is enabled across multiple devices, but individual secret passwords must be enabled to ensure none of the devices knows the secrets, and a K-of-N quorum can be specified to ensure survivability in case of catastrophe.
  • Client software to core server protocol uses some sort of PAKE scheme for session keys and authentication. I have not thought of which one to use yet.
  • Wiping secrets when intrusion is detected can be implemented via wrong login attempts. Possible support for a trap feature that detects forceful breaking of the tamper-resistant/evident casing and triggers wiping.
  • To safely open cover, the DHIES key must be used via the client software to disable tamper detection before opening cover. Essentially, the DHIES key is simply used for maintenance work and never for userspace secret encryption. (optional)
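A minimal sketch of the password-derived-key bullet above, assuming PBKDF2 as the PBE scheme (the design doesn’t specify one): the key is re-derived at login and never stored; only a random salt and an optional verifier hash live on the device. Parameter choices here are illustrative, not a recommendation, and a real device would layer an authenticated cipher on top.

```python
# Sketch: userspace key derived from the user password, never stored.
# Only the salt (and a verifier hash of the key, for fast login failure)
# is kept in systemspace. PBKDF2 is an assumed stand-in for "PBE".
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def enroll(password: str):
    """Run once: produce the public record stored on the device."""
    salt = os.urandom(16)
    key = derive_key(password, salt)
    verifier = hashlib.sha256(key).digest()
    return {"salt": salt, "verifier": verifier}   # no key, no password

def unlock(password: str, record):
    """Re-derive the key at login; None means wrong password."""
    key = derive_key(password, record["salt"])
    if hmac.compare_digest(hashlib.sha256(key).digest(), record["verifier"]):
        return key
    return None

record = enroll("correct horse battery staple")
assert unlock("correct horse battery staple", record) is not None
assert unlock("wrong password", record) is None
```

The per-secret “double protection” bullet would just repeat the same derivation with the secret-specific password and a second salt.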

Nick P June 30, 2014 11:10 PM

@ Benni

Ok, so you’re not saying you thought the German approach to SIGINT was a good thing. You’re just saying what mindset and tradeoffs you think they were making? And you also think their approach is a waste of time/resources? Am I understanding your post correctly?

@ Iain

Thanks for the tip! Clive brought these sorts of things up in our timing analysis discussion where I was looking at replacing cache with scratchpad. So, that’s two votes for me to look into them. I’m guessing two things:

  1. You’re saying to use this instead of main memory (DRAM) to store I/O operations or status? As something in the SOC they’re both connected to?
  2. The MMUs have to be between the processor and the SRAM in this design.

Interrupts “in both directions” is also a new concept to me. How does that work specifically?

Chris July 1, 2014 12:29 AM

@Thoth
Hi, thanks for the pointers regarding BitTorrent Sync.
Yes, it’s closed source and that’s annoying.
However, I’m still using EncFS here, so I can’t see how it could be worse
than Dropbox, and it works quite well for me.
Going to have a look at that MaidSafe thingy…

@Secret Police
And the same here checking both Tarsnap and Ciphertite
//Chris

Iain Moffat July 1, 2014 3:49 AM

@Nick: The dual-port RAM has one port (set of address, data, and r/w/ce lines) connected to each processor, so it appears as part of the memory map in each. For safety it’s best to decode the high address bits to enable writing, so (for example) the lower 512 words of a 1K device are writeable by the CPU and the upper 512 by the IO processor. This avoids both sides writing the same location and the need to design exception-processing hardware for the DP RAM “busy” output. A few bytes within each section are reserved for communication and the rest are buffers. The OS on the core CPU and the OS on the IO processor need to have a protocol for:

1) allocating space in their writable area to an i/o request
2) telling the other processor
3) finding out about a request from the other processor
4) telling the other processor that a request has been serviced

Some DP RAM devices are able to assert an interrupt (hardware signal) on one side when the other side writes to a reserved location – this may be used to relieve the processors of polling the communication locations if fast response matters more than deterministic timing. The interrupt service routine will look at the reserved inter-processor communication locations, identify the locations in the other side’s writeable space containing new data, and deliver it to the process that is waiting on that I/O. In a multitasking system with hardware enforced memory separation typically the CPU side would have separate write and read segments for each process – I would probably not use interrupts unless the system process switch timer was too slow and rather deal with pending I/O by switching to the I/O handler between user processes (more the CDC6600 IO barrel processor idea with fixed time slots driving address decodes) to ensure that hardware enforces separation. This is definitely a speed for security trade-off though.
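Here’s a toy model of those four steps and the write-decode partitioning; the sizes, offsets, and mailbox layout are made up for the example, and Python stands in for what would really be address-decode hardware:

```python
# Toy model of a partitioned dual-port RAM: one shared array, the CPU may
# only write the lower half and the I/O processor the upper half (the
# high-address-bit write-enable decode), with the first byte of each half
# reserved as a "mailbox" for steps 2-4 of the protocol.
SIZE = 1024
CPU_BASE, IOP_BASE = 0, SIZE // 2
ram = bytearray(SIZE)

def write(side, addr, data):
    """Model the address decode: writes outside your half are blocked."""
    lo, hi = (CPU_BASE, IOP_BASE) if side == "cpu" else (IOP_BASE, SIZE)
    if not (lo <= addr and addr + len(data) <= hi):
        raise PermissionError(f"{side} may not write {addr:#x}")
    ram[addr:addr + len(data)] = data

# Step 1: CPU allocates a buffer in its half and fills in a request.
write("cpu", CPU_BASE + 1, b"READ block 42")
# Step 2: CPU tells the other side via its mailbox byte.
write("cpu", CPU_BASE, bytes([1]))
# Step 3: IOP polls the CPU mailbox, finds the request, services it,
# and places the reply in its own half.
if ram[CPU_BASE]:
    write("iop", IOP_BASE + 1, b"DATA for block 42")
    # Step 4: IOP signals completion through its own mailbox.
    write("iop", IOP_BASE, bytes([1]))
assert ram[IOP_BASE] == 1   # CPU sees the completion flag

# The decode also blocks cross-half writes, as the partitioning intends:
try:
    write("iop", CPU_BASE, b"\x00")
except PermissionError:
    pass
```

Swapping the polling in step 3 for the reserved-location interrupt described above is the speed-for-determinism trade-off mentioned at the end of the comment.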

CallMeLateForSupper July 1, 2014 6:47 AM

@Nick P (mainframe on a desk)
“Was it Hercules based?”

It was in fact the PC-XT/370, from the folks at “Sue ’em into the ground!”. 🙁

A piece of trivia one would be hard-pressed to unearth: the code name of the project was Percheron.

CallMeLateForSupper July 1, 2014 7:52 AM

@Nick P (re: Burroughs mainframes)

I did double-takes several times while reading your description of Burroughs’ I/O processor separated from the CPU. It sounded very much like classic S/360-370 (and I recognized IBM-speak: “channel”). Also “start I/O instruction” (assembler mnemonic “SIO”).

It was all near and dear to me many moons ago. I’ve been retired longer than I worked, and I do not find many occasions these days to talk “channel” (though two nights ago a former colleague reminisced about testing a channel-to-channel adapter (CTCA)). Thanks for the memories.

Yes, I agree that having an independent, intelligent I/O processor is a good way to go from a performance perspective. In truth, I knew no other way during my working life. I did not work with any other computer design until a couple of years after retiring, when I began futzing with microcontrollers on my own; it was readily apparent that becoming “I/O bound” was simple to arrange!

Benni July 1, 2014 9:35 AM

@Nick P:
” And you also think their approach is a waste of time/resources? Am I understanding your post correctly?”

Well, I think the BND data collection is as good as many other projects from the heroes at the BND. For example, staging a smuggling of weapons-grade plutonium on a regular airliner, just to prove it is possible (http://www.spiegel.de/spiegel/print/d-9181696.html), or the project of creating a secret Nazi army (http://www.spiegel.de/international/germany/wehrmacht-veterans-created-a-secret-army-in-west-germany-a-969015.html), or deliberately downplaying the dangers in Afghanistan in reports to the German chancellery (http://www.focus.de/politik/deutschland/geheimdienst-vs-kanzleramt-bnd-beklagt-schlechten-draht-zu-merkel_aid_664259.html). These are all similarly useful and heroic efforts.

Under German chancellor Helmut Kohl, West Germany achieved reunification. It is known that Kohl had a particular policy regarding BND reports: he considered them so useless that he put their weekly folder behind the folder with articles from the German tabloid “Bild”, which is usually full of lies and stupidity. Kohl said: “Secret services always think they are important. But they aren’t.” The German chancellor Helmut Schmidt called them dilettantes, and he now says he never let the BND submit its reports to him: http://www.heise.de/tp/news/Altkanzler-kanzelt-BND-ab-2048932.html

I think the BND is unfortunately one of the most aggressive secret services in the world. Operations like the complete monitoring of all German internet providers, including the world’s largest internet node DE-CIX, or staging a smuggling of weapons-grade plutonium on a regular airliner, or the appearance of BND agents whenever a war is in the making, confirm this. I do not think that Germany needs this service in this form. The same goes for the NSA.

In a world where even terrorist organizations like ISIS submit Twitter posts, open-source intelligence becomes more and more important. I think they should spend their time analyzing the public speeches of politicians and the Twitter posts of ISIS. Then they should secure the networks, hardware, and infrastructure. And that’s largely it.

Dealing with armed attacks formerly belonged to the usual business of the police. The US should simply trust its police and the police of other governments. Yes, a large risk then remains of an attack like 9/11 going undetected. But the NSA surveillance system does not lower this risk. The Boston bombers are direct proof of that.

Benni July 1, 2014 11:36 AM

Among the recently announced spy targets of the NSA is the European Central Bank.

And now it turns out that the European Central Bank gets its internet from the same company
http://www.sueddeutsche.de/digital/nsa-affaere-auch-die-ezb-nutzt-it-dienste-von-verizon-1.2022053

that provides the network for the German parliament, as well as the internal RSA tokens that protect the communications of the German ministry of labour:

Verizon

http://www.spiegel.de/spiegel/vorab/verizon-wartet-sicherheitstechnologie-im-bundesarbeitsministerium-a-978037.html

Furthermore, Spiegel reports that the parliamentary commission that oversees the BND has now created an assistance group of 7 investigators, and it wants to use its right to search through BND headquarters in Pullach and to interrogate BND employees.
They want to do the same with the Office for the Protection of the Constitution, Germany’s domestic secret service….

Nick P July 1, 2014 1:08 PM

@ Iain

Thanks for the explanation.

@ Benni

That viewpoint makes plenty of sense. 🙂

@ CallMeLateforSupper

This thing. So your company used the real deal, then. I think the failure was mainly due to IBM mixing two markets that just didn’t mix well. Of that line, the PC/370 was the most interesting, as it was just a card. Reading on, I see the S/390 Integrated Server used a Pentium 2 as the I/O Service Processor. That a P2 met the performance requirements for even a lower-end mainframe is hopeful for new developments using open cores, which typically run at 100-400MHz.

re Channels

Yes, the Burroughs model should sound familiar to an IBM admin. Most of the major mainframe vendors used I/O coprocessors and channels, although the specifics differ. I’ll be honest that I still don’t understand how Channel I/O works on IBM, just because the descriptions confuse me: there are Channel Subsystems, Channels, Subchannels, Service Processors, Control Units, actual wires, and so on. Here’s my abstract design for my own Channel I/O so far:

  1. Compute Processor, main memory, etc. obviously
  2. I/O processor with RISC/CISC core, acceleration of I/O critical functions (eg endian conversion or checksums), DMA engine, onboard SRAM, and physical connection to various devices.
  3. Device interrupts go to I/O processor.
  4. Device drivers and maybe I/O related OS functions run on I/O processor.
  5. I/O processor directs DMA operations.
  6. I/O processor might update parts of memory that are I/O related (esp status)
  7. I/O processor has some way of communicating with compute processor.

That simple. Most of what I described could probably be integrated on one affordable FPGA, not to mention a SOC. IBM’s model is just too complicated for me, although it might be a necessity I don’t understand yet, Channel I/O being new to me. I read Burroughs’ model, though, and didn’t feel so overwhelmed. It even sounded a bit like my design. 🙂

I’m still open to new design points on the I/O system as it’s still on paper and exploratory. The key points are: computation on one processor, I/O management on another, isolation of memory spaces for these, and ability to leverage existing chip-up security approaches to protect code on both. The last thing is easy to do if the design is simple. And what we use to protect I/O processor might be different from main processor or it might reduce cost by reusing main processor. Many tradeoffs. Yet, I’m sure there’s a construction that has most advantages of IBM model with cost and simplicity of PC model. Maybe simpler due to fewer interrupt-related issues.

“it was readily apparent that becoming “I/O bound” was simple to arrange!”

Yes, I still run into that problem on some network devices and USB mass storage. A USB backup kills performance on some Linux desktops. If they had an I/O coprocessor, I’d probably not even notice it, so long as I wasn’t doing anything I/O-dependent. And I could always prevent the issue by copying my work to a RAM disk tied to the compute processor before starting the backup.

Chris July 1, 2014 1:11 PM

Hi all. Again: after I sent some ideas about how to avoid Big Brother,
I did get some feedback, but not from the one that matters.
So I repeat:

The legislation on how the data is collected shows where it’s heading.
(Maybe wrong English?)

This is again, as far as I know, possible to do with Privoxy. I don’t like Privoxy
for a number of reasons, since it only handles HTTP/1.0, but…

This is the idea:
The client wants to go to country xx01 through the proxy chain; Privoxy sees that
and sends the request to a new chain.

So what you do is build a chain for every country, for about as many countries as exist.

Then every country goes through a separate Tor chain whose END point is in that country.
This is something I have done now, and it works as a POC. I can’t see why it couldn’t be built into, for instance, JanusVM or other appliances. Personally I use Janus because it’s easy to modify compared to Tails.
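For what it’s worth, the per-country chains could be sketched as generated config: one Tor instance per country plus a matching Privoxy forward rule. ExitNodes/StrictNodes/SocksPort and forward-socks5 are real Tor and Privoxy directives, but the port scheme, data directories, and the `.cc-exit.example` tagging are invented for this sketch:

```python
# Sketch of "one chain per country": generate a torrc stanza per country,
# each with its own SOCKS port and an ExitNodes restriction, plus a
# Privoxy forward rule routing tagged requests to that instance.
COUNTRIES = ["de", "se", "us"]   # one Tor instance per country (example)
BASE_PORT = 9060                 # arbitrary starting port

def torrc_for(cc, port):
    return (f"SocksPort {port}\n"
            f"ExitNodes {{{cc}}}\n"
            f"StrictNodes 1\n"
            f"DataDirectory /var/lib/tor-{cc}\n")

def privoxy_rule(cc, port):
    # route requests tagged for this country to that Tor instance
    return f"forward-socks5 .{cc}-exit.example/ 127.0.0.1:{port} ."

configs = {cc: torrc_for(cc, BASE_PORT + i)
           for i, cc in enumerate(COUNTRIES)}
rules = [privoxy_rule(cc, BASE_PORT + i)
         for i, cc in enumerate(COUNTRIES)]
print(configs["de"].splitlines()[1])   # ExitNodes {de}
```

Note that pinning exits per country shrinks your anonymity set, so this is a routing-policy trick, not an anonymity improvement.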

Just frustrated about the lack of comments, since it’s important to do it this way.

//Chris
PS: This is a legislative solution to a problem
DS:

Benni July 1, 2014 1:46 PM

It seems not only Helmut Schmidt refused to read reports from intelligence agencies:

Obama advisor Podesta: “The president set up a process for more specific review and narrowing the targets of investigation. We’re not going to publish that list, but I think he has set up a process, quite frankly, because some of the disclosures as to who had been targeted were probably beyond the knowledge of anybody at a political level in government.”

SPIEGEL: How could that happen?

And by the way, the former chancellor Schmidt writes here that the German ambassador in Russia did not even know what the strange antennas on top of the German embassy were for during the Cold War:
http://www.zeit.de/2013/45/nsa-abhoeraffaere-gelassenheit

This is an old article revealing how the BND and the Office for the Protection of the Constitution placed bugs in the house of a German nuclear engineer:
http://www.spiegel.de/spiegel/print/d-40941938.html

And this is a biography of a Stasi SIGINT officer:
http://www.heise.de/tp/artikel/40/40043/1.html

One learns, for example, that during the 1970s even the West German BND made errors in implementing its cryptography, and the Stasi could read everything. The Stasi even dug holes in the ground at Pullach in order to tap the service’s communication lines. This did not reveal much, but the Stasi already had enough moles planted at high ranks in the service.

The Stasi regularly parked vans in front of buildings with computers, where they monitored the emissions from the computer screens. Telephone lines were also tapped, and when fiber came up, the Stasi monitored it with splitters. In 1989, the Stasi monitored 40,000 West German telephone lines: http://www.spiegel.de/spiegel/print/d-13689773.html

But the modern cryptography of the German ministries and the BND turned out to be uncrackable at the end of the GDR.

When the GDR finally broke down (http://www.heise.de/tp/artikel/40/40043/1.html), many former Stasi officers were taken on by the BND, though some were reluctant because of their own pride.

So yes, it seems we have former Stasi officers reading our emails at BND.

Chris July 1, 2014 1:58 PM

This of course will put a lot of pressure on Tor.

Whether or not anonymizing is legal, it most likely will be (and already has been)
the case that all Tor traffic is regarded as hostile.
Interesting times ahead; pay attention to all the new laws that are forthcoming.
Thank you EFF for your efforts.

So that’s not the point of this exercise.
Anyhow, read the law and if it’s not OK, change it; this is not a technical war.
Although I am very sure that it can be won technically as well, when you know the law it can be used against the adversaries.

//Chris

Chris July 1, 2014 2:13 PM

There has been very little information about how to build your own Tor network.
And if you understand the legislation as I described it, that renders this approach meaningless, since all Tor traffic will be regarded as hostile.

So what we need now is Tor 1, 2, 3, 4, 5, 6, 7, 8, and a gazillion of them.
So how can we build parallel, unknown Tor networks?

This is also one of the questions that I have never seen asked or answered.
It’s getting very important now.

Information about that is VERY important,
since I do like Tor but I don’t like being trapped.
It’s a no-go and it’s going to die.

//Chris

Chris July 1, 2014 2:44 PM

Hi, OK, the last one, but it comes back to legislation.
This makes me angry, and I don’t know how else to say it,
but this war needs to be won:

How is it that on this planet, where we have what,
a couple of billion people, most of whom have at least heard
about the internet, there are…
994 Tor exit routers!!!
OK, the problem here as well is politics and law.
It so happens that if you run an exit router you will go through hell.
Been there, done that; that needs to fuc!”#!”# change.
Give lists to lawyers that know the law, and then everyone, at least in
this group, sets up an exit router and fights.
People say “I can’t do anything” – well, here is your chance. Do it, fight, win,
and tell other people how to do it.

Ok that was about EXIT routers, no biggie 🙂
But here is a thing, and correct me if I am wrong, if I have a lot of tor traffic
from my endpoint it will render MY traffic less easy to spot = more anonimty
Then how is it possible that with all of us that know all that there is ONLY
hmm 5676 routers in the network, could it really be possible that there is
among the billions of people in this world that few routers in the network ?
Very strange… I meen that

http://jlve2y45zacpbz6s.onion/

Ok goodnight
+++
//Chris

Chris July 1, 2014 3:17 PM

Hmm, OK, I remembered one more thing, even though I haven't read about it.
But how is it exactly that
UPnP is not supported in Tor?
For Christ's sake, that would make it possible to turn every Tor user into a relay
automatically at install time, as a choice.
//Chris

Benni July 2, 2014 12:05 AM

OpenSSL has now officially admitted that they have:

https://www.openssl.org/about/roadmap.html

Bugtracker backlog
Incomplete/incorrect documentation
Inconsistent coding style
Lack of code review
No clear release plan
No clear platform strategy
No published security strategy

With point 1), the bugtracker backlog, the Openssl developers propose the following way forward:

“Reduce over time the existing RT backlog (Timescale: Ongoing). This may include the mass closure of very old tickets, such as those raised before the release of any currently supported version”

Yeah, mass closure of bugs instead of fixing them. Given that OpenSSL has had security vulnerabilities since version one (see this vulnerability: http://ccsinjection.lepidum.co.jp/blog/2014-06-05/CCS-Injection-en/index.html ), clearing the bug tracker by mass-closing tickets without fixing them seems like a great idea that can come only out of the minds of developers supported by the US Department of Defense and the Department of Homeland Security.

I also notice that OpenSSL got new developers recently.

But the old ones responsible for that mess are still marked as "active", and that is a problem. Someone who wrote some code generally has a certain reluctance to delete it, or may overestimate his abilities and be more likely to say "well, that's my code, therefore it is correct". An external code review should certainly take place, but that should be the third step. The first review must be internal, and it must be independent, for the reasons above. The new OpenSSL developers should independently read and understand the code written by the old OpenSSL developers, in order to think about potential flaws.

People like Andy Polyakov, who wrote this little rop hacking API in openssl should not contribute to the codebase any longer: http://freshbsd.org/commit/openbsd/f868fc6f39a2c45a6c2bab70addc92525d467904

Similarly, people who allowed badly written code to enter the codebase, like Stephen Henson, Ben Laurie or Mark J. Cox, and others who were responsible for that mess, should really step back now and remain inactive for a while, until OpenSSL has been cleared, flensed and fixed.

AlanS July 2, 2014 12:48 AM

The Privacy Board posted its 702 report. Haven't had a chance to read it yet, and there's not much commentary so far. Summaries are on Lawfare and Just Security. EFF didn't appear to like it.

Wael July 2, 2014 1:03 AM

@Benni,

Bugtracker backlog
Incomplete/incorrect documentation
Inconsistent coding style
Lack of code review
No clear release plan
No clear platform strategy
No published security strategy

How can anyone defend "open source" given the quoted problems? This subject was discussed in a related thread recently… The two main points are:

  • Open-source software verification inherently relies on trust as well! I take that back! It relies on the trust that someone verified it, simply because it's open source
  • You will not verify open source any more than you will verify closed source, because of the first point

If every peepingTom, Dickhead, and dirtyHarry (emphasis on the middle name) is allowed to mess with the code base, and no one is verifying the code (plus the problems listed above), then the outcome is just about what one would expect…

anonymous July 2, 2014 1:04 AM

New attack vector: keylogging by camera.
Smartphones obviously have cameras. These are now good enough to track your fingers' movements by looking at the reflection of the display in your eyes (worse if you wear glasses, or even sunglasses). This gives away your passwords to apps that have camera access.
The camera on the back can be used to take pictures of your fingertips, which can then be used to duplicate your fingerprints.
Both attacks have now been shown to work by researchers at the University of Berlin.
The paper will be presented at WOOT in San Diego.

Gerard van Vooren July 2, 2014 1:23 AM

@ Benni about OpenSSL

The original OpenSSL developers are probably aware of what's going on right now with LibreSSL and BoringSSL. If they don't learn from either of these projects then they really are cocky, but reading their roadmap, I think they have addressed most pain points. Whether they are capable of implementing it correctly is something we will have to find out.

To me this looks like LibreSSL-with-FIPS. Of course OpenSSL faces the same problem that LibreSSL has, and that's backward compatibility. In that respect BoringSSL looks much better, because they are willing to break the API.

Here is one gem (line 370-373) in OpenSSL I recently noticed and didn’t want to hold back 😉

Gerard van Vooren July 2, 2014 2:15 AM

Again about OpenSSL

Of course we are forgetting the two most important points:

1) TLS is still way too complicated and needs drastic simplification. With PFS, but for the rest: stream of bytes in -> encrypted stream of bytes out.
2) The language C is still crap (C++ is worse). It needs to be replaced with a safe and sane language that knows code is read much more often than it is written, and that has automatic memory management (or RAII) and concurrency.

Petrobras July 2, 2014 2:40 AM

@Gerard van Vooren: “is gonna be read much more than it is written down, has automatic memory management (or RAII), and concurrency.”

The one I know is ParaSail. Unfortunately, they are discontinuing the option to produce executables that do not need their virtual machine 🙁 You might ask them to think again about that on their mailing list.

CallMeLateForSupper July 2, 2014 8:40 AM

@Nick P
“I’ll be honest that I still don’t understand how Channel I/O works on IBM just because the descriptions confuse me.”

I hear that! You are in good company; it's a common complaint.

I found that it was helpful to tackle understanding basic channel/control-unit interactions first. Ignore the very existence of "subchannel"; ignore all exceptional conditions as well as the operations that produce them (the Halt I/O and Clear I/O CPU instructions come to mind). Don't even think of getting your head around, for example, "subchannel busy"[1] until you have "channel busy" and then "control-unit busy"[2] well in hand. Baby steps. Unfortunately, just as with many other complicated subjects, it's usually difficult or impossible for a newbie to separate the basic functions from all the rest.

The popular publication "System S/360 and S/370 Channel Interface OEMI" (at one time I could recite the pub. # by heart) is not – I say again NOT – a tutorial. Knowing and understanding everything in it merely qualifies one to undertake further, lengthy, frustrating study.

[1] “subchannel” is a concept, a logical – not a physical – thing.
[2] a control unit is not part of a channel, it is part of an I/O box (tape, printer, disk, etc.). It is the interface between a channel and that I/O box.

Nick P July 2, 2014 11:28 AM

@ Gerard, Petrobas

ParaSail is a prototype, so obviously out of the question. Ada and Active Oberon meet Gerard's requirements, though; both were used to write OSs. D is decent for people with a C++ background. Go is aiming at that. There are also Java subsets and runtimes for systems use (mainly embedded). BASIC dialects such as PureBASIC are still popular for being more readable and safe, while still fast (no good built-in concurrency, though). I just found SuperPascal. It's interesting from a concurrency perspective, but I'd not call it production-ready.

I'll note that my research on safe/secure OSs shows that some of Gerard's requirements might be handled by a core kernel or runtime, so they wouldn't be needed in the language itself. The GEC computers, for example, had a "Nucleus" with concurrency, exception handling, memory management, I/O, and more built in. The code on top, including the OS, just made calls to that. Then the language you use doesn't need that stuff built in.

@ CallMeLateforSupper

I’ll try that. I might also be able to find a local mainframe programmer that could explain it to me. We have a lot of big banks, warehouse firms, etc out here that almost certainly run mainframes at HQ. The admin might enjoy the rare moment that someone is actually interested in how their dinosaur works rather than laugh at them. 😉

“[2] a control unit is not part of a channel, it is part of an I/O box (tape, printer, disk, etc.). It is the interface between a channel and that I/O box.”

I couldn’t get my head around the need for a control unit given the I/O functionality on the main chip and the device. And now you say the CU is what the device has. That makes… SO much more sense. 🙂

Benni July 2, 2014 2:46 PM

A member of the parliamentary control commission that has oversight over the secret service BND said yesterday:

http://www.spiegel.de/panorama/justiz/koalition-will-konfiszierung-von-mafia-vermoegen-erleichtern-a-978412.html

"Confiscating a car is useful during criminal investigations, since this could modify the testimony in a criminal case" and "It would be effective if we had a reversal of the burden of proof, as in Italy, when it comes to organized crime."

Today the same member of parliament is wanted by the police for drug abuse; more precisely, he is suspected by the police of being a long-standing crystal meth addict…

Figureitout July 2, 2014 11:04 PM

First off, let me say thanks to Chris and Thoth for presenting their ideas and looking for feedback. You have to be persistent. This is outside my focus (way outside) currently, as I was stupid enough not to secure enough reliable backups before opening up, and didn't set up reliable internet APs, so I'm now operating from essentially all untrusted and infected computers… Can't give you anything now. Security won't make any advances without practical guides and contributions from all over the globe. You never know: there may be someone too scared to post who may get an idea from your posts.

Questions/Comments/Concerns on Power Supply Design for Analog Engineers
–Focusing on this for the time being because I want some kind of reasonable assurance that I'm feeding my computer healthy volts. Looking forward to getting to more computing and eventually coding/interfacing with pins, memory, and clocks, but I want to set this up first. I know I won't get my design right, but I will apply heavy shielding, ferrite, foam, fuses, and maybe an RNG or two with its own power supply to offset that. Please feel free to call me all kinds of idiot names if the ideas are so stupid that they should never have been spoken in the first place… The simplest power-supply filter typically consists of one capacitor charging on the jiggly insanity of AC ramped down by some heavy-duty coils (transformers), dissipating what it was made to dissipate; while beautiful in theory, modern designs have banks of capacitors, toroid cores, and increasingly complex (yet incredibly reliable) ICs to make up for all the problems facing us today. A few ideas:

1) First off, would a linear power supply or a switched-mode PS (SMPS) be more secure? Too vague a question without a design? Is the comparison the same today as it was in the past? The linear one would generate more heat and be less useful for on-the-fly development, but by creating that heat it would cloud up thermal imaging if I spread it out, and it does no "switching" that could conceivably be modulated in some way… An SMPS would potentially give off more voltage spikes, and in turn more potential "injections", which I want to avoid at all costs.

I have a few cheap DC-DC converters lying around and want to make use of them. I know there are "buck" and "boost" converters that step voltages down and up. Would it be a good idea to take, say, three of these and connect them in series: the first takes in 24 V and steps down to 3 V, the second steps back up to 24 V, and the last steps down to 3.3 or 5 V; with either just visible voltmeters in between them, or tiny Atmel chips (which would be fairly easy to program) checking these voltages and shorting out and shutting down the computer on any variation?
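The monitoring half of this idea is simple enough to sketch. Below is a minimal, hypothetical simulation (Python rather than AVR code) of the tolerance-check logic described above: sample each rail, and trip a shutdown if any reading drifts outside a band. The rail names and the 5% tolerance are made-up example values; on a real Atmel part the readings would come from the ADC.

```python
# Toy simulation of the rail-monitoring idea: a watchdog samples each
# supply rail and flags a shutdown if any reading leaves its tolerance
# band. Rail names and the +/-5% tolerance are illustrative only.

NOMINAL_RAILS = {"24V_in": 24.0, "3V_mid": 3.0, "5V_out": 5.0}
TOLERANCE = 0.05  # +/-5% of nominal

def rails_ok(readings, nominal=NOMINAL_RAILS, tol=TOLERANCE):
    """Return (ok, offenders); offenders lists rails outside tolerance."""
    offenders = []
    for name, expected in nominal.items():
        measured = readings.get(name, 0.0)
        if abs(measured - expected) > expected * tol:
            offenders.append((name, measured))
    return (not offenders, offenders)

# A healthy sample passes; an injected spike on the 5 V rail trips it.
print(rails_ok({"24V_in": 24.1, "3V_mid": 2.98, "5V_out": 5.02}))
print(rails_ok({"24V_in": 24.1, "3V_mid": 2.98, "5V_out": 5.9}))
```

On hardware the "offenders" branch would drive the crowbar/shutdown circuit rather than return a list, but the comparison logic is the whole of the firmware's job.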

2) I thought up a little circuit just today; I can't simulate it at the moment due to all my computer problems and busy-ness (waiting on server traffic for online simulation…). I want to prevent injections even when the computer is off. So I was thinking something along these lines: for example, 5 V coming in on a single wire hits a switch that, when closed, jumpers the wire and just lets power flow, and when open, switches to an extremely large potentiometer and a reverse-biased diode (or another nifty thing like two diodes facing each other, to block both negative and positive spikes), all grounded too. When the computer is turned off, I would crank all these potentiometers (ridiculously high ohms, too) to max values to absorb power. I was told that that's stupid and you should just do an open and closed circuit with the switch, but I want some extra assurance… I saw a blown-out switch once, which is unrealistic because it was from a frickin' lightning strike, but still, I wonder about that kind of intense injection and whether it allows a path for power to flow unimpeded…

3) A fairly OT question on ROMs (not when you consider how they're programmed…): as I was reading about "hysteresis" as it relates to ferromagnetic materials (hat tip Clive Robinson), a scary question popped up. You blow out specific fuses or antifuses in a matrix fashion to set your PROM. Starting off, all fuses read 1; once blown, they read 0. It is apparently still possible to blow out the remaining fuses after programming, either altering the PROM (perhaps ingeniously resulting in a nasty PROM virus) or just flat-out blowing all the fuses to zero out your PROM and in turn kill your computer for a while.

Has anyone done this? Is this another "fault injection" or some kind of power-related attack? I mostly play with flash memory these days, and ROMs aren't something you just mess around with, because once it's done, it's done…

Gerard van Vooren July 3, 2014 4:55 AM

@ Nick P

The GEC instruction sets were interesting. The problem, however, is that they're vendor-specific. It would be better to have a standard for this kind of instruction set, as with POSIX. Although I am not a fan of standards, I think it's the best we have.

The other alternative is to have the instruction sets built into the kernel for hardware abstraction and then indeed only use system calls. Interesting solution. In that case it would also be nice if the system-call API were somewhat standardized.

I agree that all the programming languages reinventing the wheel when it comes to memory management, concurrency, real time, etc., is not the way to go, but they needed to, because it just isn't available in the kernels or CPU instruction sets. The result, however, is a massive amount of overlapping code. Each compiled program has its own runtime, and that's both a waste and a can of vulnerability worms.

SuperPascal looks good. I like the syntax. The type system of Pascal is what always attracted me; the same with Ada. You just say what you want. No (OOP) workarounds. And lowercase keywords are also nice to look at. A major downside of Pascal is that it lacks modules; I don't know the state of that in SuperPascal. That said, I also wouldn't rule out Zonnon.

Of course, what we are talking about here is hypothetical. The systems-design world is dominated by C, and you need a major player, and that includes government stimulation / regulation, or maybe a new RMS, to change this status quo.

Tom July 3, 2014 5:16 AM

More news from Germany: State-owned news outlets NDR and WDR have been examining parts of XKEYSCORE's source code (it is not stated how they got those parts) and found specific IP addresses: one belonging to the Chaos Computer Club, and another belonging to a student's Tor exit node. [Sorry, German only; I can't seem to find an English version: http://daserste.ndr.de/panorama/archiv/2014/Quellcode-entschluesselt-Beweis-fuer-NSA-Spionage-in-Deutschland,nsa224.html%5D

This is relevant because the Federal Prosecutor General previously stated that there is no "initial suspicion" of foreign spying missions against German people (yes, despite all that's been leaked by now). That was the reasoning behind his decision not to start an official investigation into foreign spying activities.

This student's case might put some pressure on the Federal Prosecutor General and thus lead to an official investigation into NSA (and GCHQ) spying.

Benni July 3, 2014 5:19 AM

breaking news:

Users who search for Tor or Tails on the internet automatically get marked by NSA's XKeyscore as "extremists"; this comes out of an analysis of the source code!!! of XKeyscore!

Furthermore, the NSA monitors the "Directory Authorities" of Tor.
http://www.tagesschau.de/inland/nsa-xkeyscore-100.html

http://daserste.ndr.de/panorama/archiv/2014/Quellcode-entschluesselt-Beweis-fuer-NSA-Spionage-in-Deutschland,nsa224.html

http://www.heise.de/newsticker/meldung/XKeyscore-Quellcode-Tor-Nutzer-werden-von-der-NSA-als-Extremisten-markiert-und-ueberwacht-2248328.html

http://www.spiegel.de/netzwelt/netzpolitik/nsa-spaehte-tor-server-von-deutschem-student-mit-xkeyscore-aus-a-978914.html

If someone writes an email while connected to Tor, the NSA automatically analyzes the content of the mail.

Additionally, the NSA monitors the German hacker group Chaos Computer Club. These people, however, regularly assist the German government with reports on security questions; for example, the German parliament banned voting computers after an expert report by the Chaos Computer Club.

Clive Robinson July 3, 2014 6:26 AM

@ Figureitout,

With regard to power supplies, you need to think of them as a black box with two sets of bidirectional terminals. One set you connect via a transmission line back to the primary power source, the other via a shorter transmission line to the load. Further, you need to remember that most loads are not passive, and can and do push power back into the power supply.

The advantage of the old-fashioned transformer and smoothing-cap designs is the very, very low bandwidth between the terminals; it can often be fractions of a hertz, and is thus of limited use as a communications medium either forwards or in reverse. The downside of this low bandwidth is poor regulation with dynamic loads, usually requiring an active regulator and thus an increased voltage from the transformer. A further downside is the physical bulk, and the mass that goes with it, which can mean upwards of 25 g/W at full load rating. Much of this mass is from the transformer, with 80% of its mass being the laminated iron plates that form the core. Ferrite cores, as usually found in toroid transformers, offer not just a better power-to-weight ratio even at 50 Hz; they also radiate a great deal less than iron E-cores.

Switch-mode power supplies work by simple rectification of the mains voltage, then pulsing this into a small transformer or inductor at a high frequency. The higher the frequency, the smaller the inductive components need be; further, the smaller the filters need be, as they are dealing with hundreds if not hundreds of thousands of times the mains frequency (see series-resonant power supplies). Whilst the smaller physical form, lower mass and greater efficiency are all desirable features, from a security aspect they are considerably worse than traditional power supplies.

This degrading of security is because SMPSs have quite a high frequency response between the terminals, and worse, they generate waveforms at much higher frequencies with high spectral content, which radiate much more easily in the ELF through to VHF range. The result is that an eavesdropper can see the instantaneous power consumption of the load, which gives rise to the possibility of Power Analysis of the load (I've mentioned this before with regard to smart metering and its ability to tell which household appliances are in use or not, and thus the household activities of the occupants).

There are solutions to these problems but they do create others if care is not taken in how you go about things.

For home production using easily available parts, you can do the following. The first step is to isolate the mains via one of those "safety isolation" transformers many workmen use. You will additionally need to isolate the input and output earths from each other. You then use commercial cased mains filters on both sides to help keep high-frequency signals from coupling either inductively or capacitively across the transformer, and this also requires you to keep the input and output wiring well separated (as per TEMPEST guidelines). At this point you potentially have a "clean mains supply".

The next step is a source of argument amongst various people, which can reach "flame war" levels; I'm pro for various reasons, whilst others are anti for different reasons. The controversy is about the use of a UPS to provide extra benefit. Some UPSs are designed to be continuously on, to provide increased "brown-out" protection; the way some of them work acts like a big low-pass filter in the reverse direction, thus limiting Power Analysis greatly. It needs to be the right type of UPS, and you will need to check its characteristics to ensure it is doing what you want, not potentially making the problem worse. If you do use one, again remember to filter the outputs and, if possible, isolate the earths.

You should then be able to use an ordinary SMPS-type computer power supply to provide the outputs you require. However, you will need to do a couple of things, the most important being to put load resistors on the outputs (unless fitted internally); these keep the SMPS working within its designed characteristics, as well as reducing power-line impedance to reduce signal radiation.

You will then have to deal with your earthing and screening issues, and you may find you have to provide not just a low-impedance strap but also a series-resonant strap, as amateur radio operators do, to overcome the inherent inductance of the low-impedance strap.

Also remember the basic TEMPEST rule of keeping signal and power cables separated by at least a metre, and keeping them orthogonal (at right angles) to each other as much as possible, to reduce coupling.

Iain Moffat July 3, 2014 6:29 AM

@Figureitout: Linear regulation offers an adversary fewer remote-monitoring options than switch-mode conversion (because an adversary may be able to detect slight frequency modulation of the switching frequency by load-current variations, whereas linear DC regulators with good input filtering don't radiate, though they do directly transmit the load current back to the supply).

Most of the post 1970 (so EMP and TEMPEST-Aware) military PSUs I have seen circuits for are basically an input filter to prevent radiation, followed by EMP protection followed by a switch-mode converter with transformer output followed by DC series-pass transistor regulators for each required voltage – you may find the archive of military radio manuals at http://www.radionerds.com a useful source of examples. The key for a secure design without side channels is to ensure that activity on the output side does not directly modulate the current on the supply side.

Having said that, you can't actually do better than a battery that is charged off-line (so there is a mechanical double-pole changeover switch that connects the battery terminals to either the charger or the load, but never allows the load to be connected to the charger/supply directly), with any voltage conversion happening within the screened enclosure of the equipment being powered. This avoids radiation by, or signals being carried by, the power cable altogether. If the battery output is the highest voltage needed, then all the voltage conversion can be simple linear 78xx or similar regulators to avoid radiation, which is probably the simplest, safest way to get an electrically quiet supply. You should be able to manufacture an input filter easily enough, using toroids in series and capacitors in parallel with the DC supply to the regulator.
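To put a rough number on that input filter: the corner frequency of a single LC low-pass section is 1/(2π√(LC)), and anything well above the corner gets strongly attenuated. A quick worked example (the component values below are illustrative assumptions, not a recommendation):

```python
import math

def lc_cutoff_hz(l_henries, c_farads):
    """Corner frequency of one LC low-pass section: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Illustrative values only: a 100 uH toroid in series with a 100 uF
# capacitor across the DC rail gives a corner of roughly 1.6 kHz, so
# switching hash in the tens-of-kHz range and up is well suppressed.
print(f"{lc_cutoff_hz(100e-6, 100e-6):.0f} Hz")
```

Cascading two or three such sections, as the military designs mentioned above do, steepens the roll-off further.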

Hope this helps

Iain

Nick P July 3, 2014 11:52 AM

@ Iain

“Having said that you can’t actually do better than a battery that is charged off line (so there is a mechanical double-pole changeover switch that connects the battery terminals to charger or load but never allows the load to be connected to the charger/supply directly) with any voltage conversion happening within the screened enclosure of the equipment being powered. This avoids radiation by or signals being carried by the power cable altogether. ”

BOOM! Exactly what I suggested! It was the only thing my EMSEC-ignorant self could come up with. Power line leaking information? How about we get rid of the power line? Unfortunately, that simple style of thinking didn't help with the monitor. 😉

Iain Moffat July 3, 2014 5:01 PM

@Nick: The military computers I have seen inside of, e.g. http://www.thexmod.com/item_detail.asp?id=11991 , generally use an inverter to supply a small LCD or plasma display, all within the same screened enclosure – there are a number of transparent conductive materials that can be used to cover the screen without leaving a hole in the Faraday cage. Earlier examples that I know only from photos tended to have a switches-and-lights user interface with a printer for text output, or used a vector CRT with a long-storage phosphor. Either approach radiates each displayed character only once, so it is a much greater challenge to read remotely compared with a regularly refreshed CRT or LCD.

I suspect the modern "electronic paper" displays as used in the Kindle are in the same league as a printer or a vector CRT in this respect – in general, low power consumption and infrequent refresh mean low radiation. A small 2.7″ 264 x 176 pixel display with an SPI interface and onboard controller is about £30 here in the UK, e.g. http://repaper.org/doc/em027as012.html or Mouser part 924-EA-LCD-009, which uses a 3.3 V supply at 8 mA (so definitely compatible with off-line battery power) and would certainly be adequate for the UI of a secure basic computer, although not an engineering workstation.

Iain Moffat July 3, 2014 6:01 PM

Some time ago there was discussion on this forum about how difficult it would be to make an inline IDE (or compact flash card) encryptor. I came across this article on building an IDE interface for a Z80 8 bit system:

http://www.retroleum.co.uk/electronics-articles/an-8-bit-ide-interface/

It looks simple enough to be done (with the addition of encryption) in a small FPGA or a fast microcontroller – someone somewhere with the time and energy (i.e. not me) ought to have a go. I would hope to see something with IDE ports for the host and the drive, plus a serial port that can be cabled to the outside of the computer for key fill and diagnostics independent of the host. Perhaps also a “zero” button to erase the key.
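For anyone who wants to play with the structure in software before touching an FPGA, here is a deliberately toy sketch of the transparent sector-encryption idea: the key plus the sector number selects a keystream, and reads and writes XOR through it, so the host and drive sides never see each other's plaintext/ciphertext mismatch. The SHA-256 counter keystream is a stand-in (Python's standard library has no AES); a real device would use a respected cipher mode such as AES-XTS, and every name below is my own invention, not from the article.

```python
import hashlib

SECTOR_SIZE = 512  # bytes, the classic IDE sector size

def _keystream(key: bytes, sector_no: int, length: int) -> bytes:
    """Toy per-sector keystream: SHA-256 over (key, sector, counter).
    NOT cryptographically respectable; illustration of the structure only."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + sector_no.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_sector(key: bytes, sector_no: int, data: bytes) -> bytes:
    """XOR one sector with its keystream; sector_no binds ciphertext to place."""
    ks = _keystream(key, sector_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

decrypt_sector = encrypt_sector  # XOR is its own inverse

# Key fill would arrive over the serial port Iain describes.
key = b"key-filled-via-serial-port"
sector = b"secret disk data".ljust(SECTOR_SIZE, b"\x00")
stored = encrypt_sector(key, 42, sector)
assert decrypt_sector(key, 42, stored) == sector
```

The point of mixing the sector number in is that identical plaintext sectors produce different ciphertext at different disk locations; the "zero" button simply erases `key`.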

Nick P July 3, 2014 6:26 PM

@ Iain

Thanks for the info on monitors. It might come in handy. The IDE thing might be fun for the hobbyists here messing with 8- and 16-bit systems. I won’t use anything less than a 32-bit as there’s already open cores for them. Heck, the SPARC T1 and T2 are open too. And the third core on this page is quite useful if one’s building in IDE/ATA support.

Iain Moffat July 4, 2014 5:33 AM

@Nick: I used the 8-bit Z80 IDE link because it is the best simple documentation of a minimal IDE/PATA interface I have found. The IDE data path is natively 16 bits, so a 16- or 32-bit CPU is needed for efficient transfer between the host and disc ports, and I agree 32-bit would be desirable for easy implementation of a modern, reputable encryption algorithm. The challenge is to avoid the temptation to expand the trusted code base to fit the capacity of the 32-bit CPU! It really ought to be just the low-level I/O code, the crypto algorithm invoked during read and write operations, and a very basic CLI to allow key setting, self-tests, and disk status checks when off-line from the PC.

I don't think it could ever be a commercial project, as PATA/IDE is now obsolescent technology – the attraction is that it can be done with a few generic logic devices and a generic CPU programmed by the end user, using software small enough to read the source in a day, so it requires minimal trust. A commercial product or Kickstarter venture would need to use SATA or the SD-card interface to have a worthwhile future, and anyone attempting that will have to face the much greater complexity of SATA or SD, and the resulting implementation will be that much harder for anyone to audit and prove free of backdoors 🙁

Iain

Clive Robinson July 4, 2014 6:43 AM

@ Iain,

Thanks for the 8-bit IDE controller link, I'll add it to my filing cabinet (of which I now have many 😉

It takes me back many, many years, to a time when the SC in SCSI stood for "Shugart Compatible" and 8-inch 100 MByte drives were "high end", and I was involved with interfacing them to a body-scanner computer we had designed that was so "high end" in design it qualified at the time as a "super computer". The HD interface was difficult, as we needed to suck the data off eight or sixteen drives in parallel just to get a barely acceptable data rate. The error issue was a problem, which is why we developed (and patented) what would later be called RAID (but being a small company, a fat lot of good that turned out to be).

@ Nick P,

Have a closer look at that IDE controller: the eight-to-sixteen-bit interface is in part a simple "letter box" interface, which gives a one-word dual-ported cache-memory interface that, with a little thinking, could be expanded into a multiple-word or buffered letter box with the dual-ported RAM chips Iain mentioned the other day.

Such letter boxes are needed when the respective systems' interfaces are not compatible without "dog-n-traps leaping through burning hoops" type designs. The penalty of such systems is both speed and lock-out; however, sensible software and double interrupt-and-flag design should minimise these issues.
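The software shape of such a letter box is a one-word mailbox with a full/empty flag: the writer may only deposit when the box is empty, the reader may only collect when it is full, and each side "interrupts" the other when the flag changes, which is what bounds the lock-out mentioned above. A toy single-word version, assuming nothing beyond threads standing in for the two bus domains (real hardware would be latches plus interrupt lines):

```python
import threading

class LetterBox:
    """One-word mailbox between two incompatible bus domains.
    '_full' plays the role of the hardware flag / interrupt line."""
    def __init__(self):
        self._word = None
        self._full = False
        self._cv = threading.Condition()

    def deposit(self, word):
        with self._cv:
            while self._full:          # writer blocks while box is occupied
                self._cv.wait()
            self._word, self._full = word, True
            self._cv.notify_all()      # "interrupt" the reader side

    def collect(self):
        with self._cv:
            while not self._full:      # reader blocks until box is filled
                self._cv.wait()
            word, self._word, self._full = self._word, None, False
            self._cv.notify_all()      # "interrupt" the writer side
            return word

# One domain deposits two words; the other collects them in order.
box = LetterBox()
t = threading.Thread(target=lambda: [box.deposit(w) for w in (0x1234, 0xBEEF)])
t.start()
print(hex(box.collect()), hex(box.collect()))
t.join()
```

Widening this to a multi-word buffer (the dual-ported-RAM variant) just replaces the single `_word` slot with a queue, at the cost of the flag logic becoming head/tail pointers.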

When system buses are compatible, letter-box interfaces can be more conveniently developed using MMUs and DMA, and for security the MMUs should be controlled not by their respective CPUs but by a security hypervisor.

I briefly mentioned such systems in our C-v-P (as Wael likes to call it) discussions.

If you are designing segregated I/O systems where flexibility is a requirement – i.e. different CPU families – I would suggest you look at such letter-box designs over IOMMU solutions, as often you can design a "standard bus interface" solution with just PALs, such as a 22V10 on each CPU interface, and 74HC latches and OC buffers to solve the myriad of level-translation issues that arise these days.

Admittedly, as I'm a bit of a "cheapskate", I tend to use 100 MHz PIC parts in early prototypes, as it's usually a one-chip solution you can develop in a matter of a few minutes with "cut-n-paste" software development, once you've done the first one or two.

k9 July 4, 2014 8:02 AM

What kind of mischief could someone get up to if they could hack your phone and computer to make you think it was a day earlier than it really was?

Nick p July 4, 2014 11:49 AM

@ k9

Sounds like the start of a budget movie. My experience in this area was a long time ago, when one of my adversaries tried planting evidence on my computer to get me in heaps of trouble. I got officially investigated and interrogated before I knew anything about it. The key part of the evidence was the times on the files. They believed their network and systems were locked down, so the timestamps had to be valid. So I had to show that neither was true.

For the time part, I made a program that changed system time and another that changed metadata on files. I made both in Visual Basic 6 using simple search results, to show them that the least competent technical user could pull it off. I also breached their network in under a minute due to its configuration, popping up messages and altering files on various computers. The combination showed the opportunity, but they still needed a name and motive. So I then had to explain why that was infeasible, and even irrelevant, when discussing hackers.
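The original VB6 programs aren’t shown here, but the same point can be made with a modern few-liner (Python, purely illustrative): file timestamps are settable by any user with write access to the file, no special privileges required.

```python
import os
import tempfile
import time

# Create a throwaway file, then back-date its access and modification
# times by one day. os.utime needs no privileges beyond write access.
path = tempfile.NamedTemporaryFile(delete=False).name
yesterday = time.time() - 86400
os.utime(path, (yesterday, yesterday))
recorded = os.path.getmtime(path)   # the filesystem now reports yesterday
os.remove(path)
```

Note that only atime/mtime are settable this way; on most Unix filesystems ctime is not, which is one reason examiners shouldn’t trust mtime alone.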

In the end, they still didn’t buy it. My remaining time with that organization was short and I lost that reference. Taught me important lessons about perception vs reality in IT security, along with motivating me to deal with some issues. Anyway, modifying system time could destroy careers back in the day. I imagine similar stuff could happen today.

Figureitout July 4, 2014 4:45 PM

Clive Robinson
–Feeling rather generous as of late, eh? :p Thanks; some of your suggestions may be too much for me (and I’m always a little suspicious they might be adding a way in for you lol), but I’ll add your comments to my design notes. Companies, and I’m sure gov’ts, have entire teams of engineers dedicated to PSU design, figures…

Amazingly, just the other day while doing a little testing there was an incredible amount of Pk-Pk voltage ripple coming from a DC supply, then through another step-down converter, and even w/ a capacitor and inductor on the output there was still so much ripple… So I’m pretty sure this is going to be a continuing, frustrating subject for me. Anyway, to continue (I’m assuming a basic rectifier circuit off the transformer, BTW): I’m thinking a linear supply is the way to go for my purposes (EMSEC). Not to mention easier diagnosis of failing components; my dad was most likely able to find 2 faulty transistors using just a multimeter and a schematic (I’ll check some other time whether he was right, as it was someone else’s supply that they couldn’t troubleshoot). Also, my little variable resistor to a reverse-biased diode across the charged-up capacitors, for when you turn the power off, has a proper name: “bleeder resistors” (hopefully not heart-bleeding…lol), which serve the important safety function of discharging the caps. My idea though is for shorting switches throughout the computer; each time before turning on and after turning off, I’d have to close them all. The shorts would force fault injections to really get specific on the actual chip or memory banks.

The advantage of the old-fashioned transformer and smoothing cap designs is the very very very low bandwidth
–That is what I really like to hear. I’m ok w/ the weight too; I’m not building some flashy/sexy-looking machine, it’ll be ugly and semi-secure. So many times in electronics, and life in general, there are “special numbers”, ie eerie properties that can be extremely non-intuitive. So there are single- and double-section filters (the circuit is beautiful); is there any reason to get a little carried away and make like a 10-section filter, or is that something for me to test?

For potentially vibrating inductors, is a dollop of hot glue sufficient to shut them up? Also, would you keep or minimize the transistors (which seem to be gaining popularity) in the design?

worse they generate waveforms at much higher frequencies with high spectral content which radiate much more easily in the ELF through to VHF range.
–Too scary, hell no! My ARRL handbook agrees, I quote:

Switchmode circuits can also generate radio-frequency interference through VHF because of switching frequency harmonics and ringing induced by the rapid rise and fall times of voltages and current. In attempting to minimize the ON-OFF transition time, significant amounts of RF energy can be generated. To prevent RFI to sensitive receivers, careful bypassing, shielding, and filtering of both input and output circuits is required.

RE: UPS’s
–I’m split on that, since it’s likely I’ll find a way towards failure quicker than success, I’m wary. Saw a nice one at work, big inverter, just not sure if it will make things better or worse.

Also remember the basic TEMPEST rule of keeping signal and power cables separate by at least a metre, and keep them orthogonal (at right angles) to each other as much as possible to reduce coupling.
–Ah man, a METRE?! I’m not making a computer for giants… I’ll just shield, foam (a test the other day reduced g-force from ~125 g to ~3-4 g w/ a half-centimetre piece of foam), ferrite, and toroid-loop them. And twist them; at first I thought that was just for neatness, but it has an actual purpose (cancelling out EMI).

Iain Moffat
Linear regulation offers an adversary fewer remote monitoring options than switch mode conversion
–Well, I’m thinking that’s what I’m going w/. So long as I can filter BOTH ways and not make a rookie mistake…

RE: radionerds.com
–Site is chock-full of info, thanks. Proper link to it: http://www.radionerds.com/index.php/Main_Page

The key for a secure design without side channels is to ensure that activity on the output side does not directly modulate the current on the supply side.
–Hmm, ok…deep. Makes me think of some transistor switch involved though…

Having said that you can’t actually do better than a battery that is charged off line
–Assuming the battery doesn’t have some malicious circuit enclosed in it, yes. That’s another project w/ my Cassiopeia I hope to get done before school starts again (I’m not going to get done w/ the computer all the way, at least I don’t think I am) and I have some TI calcs for remote operation and tiny file transfer/storage. They aren’t shielded yet though…ugh.

RE: the xmod TEMPEST computer
–This is how I envision my computer somewhat. A briefcase like machine that folds open. Is the rolly-ball on the right a sort of “mouse”? Is the keyboard shielded and the wire to the board shielded too? On the back-side, do you know the purpose of that extruding metal w/ holes in it? Also, the top left, that mesh-screen, is that the heat-holes and is it also shielding?

RE: link to Z80 IDE interface
–Yes, I was planning on adding an IDE interface, actually, when I get there. I’ll definitely “have a go”; also I have some older CD-ROM drives w/ IDE connectors, to potentially add some CD capabilities, but I’m getting ahead of myself. There was also another computer enthusiast who made an IDE interface which I may use too. (News item dated June 16, 2014, for future archive readers.)

http://cpuville.com/news_and_issues.html

Hope this helps
–They pretty much always do. You can restrict yourself better than me and focus on good info, calm demeanor, nice advice. 73 de USA.

Iain Moffat July 5, 2014 3:42 AM

@Figureitout: Regarding the Tempest PC: the ball to the right of the keyboard is a trackball. The keyboard case is metal, and I assume the keyboard’s flexible cable is screened; I have never examined it in detail. The grid at the back looks like it should be a cooling air intake with a fine wire-mesh filter behind it, but there is no corresponding exhaust. The metal extrusions with holes don’t seem to have an electrical role; I think they are part of the frame. I bought two with the intent of repackaging a modern PC into them to make an electrically very quiet machine for use at amateur radio events, rather than as a security measure, but they ended up in storage when I moved house. The discs went in slots behind the door to the right of the screen, and there was an optical fibre network port on the back. They actually have a gel-cell lead-acid battery inside, which is one reason for the size, so they can use off-line charging too. Possibly that’s the reason for an air vent, in case of the battery releasing hydrogen. My ones had no discs in; I have seen another listed on eBay that booted into Windows NT4.

Wael July 5, 2014 3:45 AM

@Clive Robinson,

I briefly mentioned such systems in our C-v-P (as Wael likes to call it) discussions

I call ’em as I see ’em 🙂

Figureitout July 5, 2014 11:33 AM

Iain Moffat
–Cool! Thanks for sharing! And uh…oh you have 2 eh..? Hmm…looking to get rid of one? :p

JerryH July 6, 2014 9:53 PM

Charles Stross has a whole squid civilization at the bottom of a water-engulfed planet. The sister of the heroine chooses to live with them after getting a fortune of her mother’s illegally obtained wealth. Great book. You should read his definition of Bezos worms, another denizen of the depths and a shot at Jeff Bezos by someone in the writing biz.

Figureitout July 7, 2014 10:25 PM

RE: my ripple troubles
–Found the problem today: it was a goddamn cheap laptop switching power supply… no joke. Noise at 54kHz and 7-10MHz… Saying it so others can learn from it. I should’ve known, as the scope was literally sitting on top of it… But I got a chance to make use of the shielded room (muhahaha:) and it was a useful tool in the diagnosis. What really tipped me (us) off was still getting all this ripple off a DC battery; it should’ve been flatlined… Unplugged the culprit (after turning off lights, monitors, etc.) and got the results we should have been getting. Also, messing w/ the scope probes, you can pick up the noise off the scope’s own power supply… really and truly, the frickin’ scope probes are antennas in and of themselves.

Just be wary of noise sources if you’re doing some scope work; they will lead you to false conclusions (which we were able to rectify). Even a multimeter can induce noise and lead to slightly false readings if you don’t know what you’re doing.

