Friday Squid Blogging: Video of Live Giant Squid

Giant squid filmed swimming through a harbor in Japan:

Reports in Japanese say that the creature was filmed on December 24, seen by an underwater camera swimming near boat moorings. It was reportedly about 13 feet long and 3 feet around. Some on Twitter have suggested that the species may be Architeuthis, a deep-ocean dwelling creature that can grow up to 43 feet.

Some more news stories.

A few days later, a diver helped guide the squid back out to sea. More amazing video at that link.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

And Happy New Year, everyone.

Posted on January 1, 2016 at 12:29 PM

Comments

Clive Robinson January 1, 2016 5:03 PM

@ Nobody,

Perhaps the only person who didn’t see this coming was Dad.

🙂

That is the best demonstration of an “end run” attack against “end point security” I’ve ever seen.

Even the dumbest of C floor Execs should understand what a “security end run attack” is after looking at those four photos.

@ Bruce,

You should get those photos for your next book, they are certainly worth a thousand words on their own 😉

Power Through Data January 1, 2016 5:07 PM

Microsoft, Google and Facebook gained immense corporate wealth and political power by allowing the draconian CISA law to pass without so much as a whimper.
With all privacy restrictions and penalties removed, these deputized corporations are given license to exploit citizen privacy and diminish our founding-father freedoms. The law provides immunity from prosecution.

For example, they target elementary schools, building our children’s verified identity dossiers in the classroom by declaring themselves a ‘School Official’. Parents have no input, nor are they even made aware, that their child’s daily school records and correspondence are supervised by advertising giant Google.

The CISA will allow ambitious political appointees to further push the privacy envelope and circumvent judicial due process, using FBI enforcement as their weapon of choice. FBI director Hoover developed the original dossier system to seize control over the Washington political establishment for decades beginning circa 1950. The public never knew about the blackmail until long after Hoover died.

Power to Manipulate
The most important implied aspect of the new CISA bill is it gives immense power to private unregulated Big-Data collectors to skew elections and tailor the lawmaking process.
For example, a corporation’s confidential political action committees can voluntarily and selectively frame what ‘dirt’ they decide to release to the government through CISA channels. Even if it breaks the spirit of the law, this process is automatically classified and immune from Freedom of Information requests. Anyone identified as having a conscience will be asked to leave the room or be reassigned.
Law enforcement is already well versed in making up stories to hide sources of information. This technique of FBI-sponsored Parallel Construction already allows law enforcement to mislead the public, jury, prosecutors and judges. CISA goes far beyond cybersecurity, and permits law enforcement to use information it receives for investigations and prosecutions of a wide range of crimes involving any level of physical force, including domestic violence, fights at school, breaking and entering, auto theft, rape and robbery.

In the Pocket
Big-Data offers the rich and powerful a complete one-stop proprietary service, combining dossiers with tailored search-engine results and the framing of corporate-generated news and polling. The legions of targeted, malleable, spoon-fed smartphone users will then form their ‘own’ opinion.
Politicians and the 0.1% will quietly come begging to influence the next election. They will be told nothing is free. As a current example, the low-cost highly skilled immigrant program was just greatly expanded, benefiting Big Data, all at American engineers’ expense.

No Worries Mate She’ll be Right (Time to Worry)
Ironically the NSA cheerleaders in Congress are already in meltdown mode, learning that THEY TOO are under NSA surveillance with real-time reports being sent to the White House. These same lawmakers voted for their lobbyist-authored CISA law without even reading it. These morons also don’t realize that (under CISA) their email and phone records within the USA have also been forwarded to the FBI for foreign national security and subversion issues.
Like crazy mad scientists, the Beast they created has already turned. No one is safe or off-limits, especially Congressional Oversight Committees. Snowden also stated he could easily eavesdrop on the POTUS.

Clive Robinson January 1, 2016 5:34 PM

@ Bruce,

A paper you might want to read,

http://www.princeton.edu/~aylinc/papers/caliskan-islam_when.pdf

Long story short, a team at Princeton have found a way to generate usable “fingerprints” of a software author’s style from executable files, and thus identify the author’s work across apparently unrelated executable files.

This is not only bad news for malware writers, it’s very, very bad news for agencies such as the NSA and GCHQ, because it will enable those writing attack code for them to be identified from other non-secret work that carries their name or professional affiliation.

ianf January 1, 2016 6:57 PM

@ Clive,

you are doing yourself a disservice when you signal an intriguing academic paper merely by posting a pointer with an ominous-sounding “ISLAM_WHEN” in the URL, rather than its original, self-explanatory title:

When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries

by Aylin Caliskan-Islam, Fabian Yamaguchi, Edwin Dauber, Richard Harang, Konrad Rieck, Rachel Greenstadt, and Arvind Narayanan

Personally, I’ve been waiting for something like that to appear—though perhaps only for original source, not for decompiled, uncommented program code—ever since I first read about Donald Wayne Foster’s quantitative and qualitative forensic linguistics, which also are at the core of any serious (de)cryptography. I owe part of my career in IT to once logically decoding a huge, for years not maintained, but working, FORTRAN program (pre-FEM stress analysis, of which I know nada), during which and after a while, I could almost feel what mood the long-deceased programmer was in when he wrote each routine ;-)) In any event—interesting.

Doug Coulter January 1, 2016 7:47 PM

@ianf, @Clive

I wonder what this would have made of our code when we were writing MusiCad, WavEd, and plugins for CoolEdit.
I was at the time running a small, tight team of serious DSP coders trying to get the absolute maximum possible out of 486 and early pentium machines for audio DSP. We frequently looked at compiler output to determine how to write the best C (or C++ later on) to get it to generate the best code for the cpu family in question, as this was faster than putting in MASM in terms of programmer trip-time.

Yes, we looked at the Intel libraries that auto-optimized for each flavor for such things, but the terms of use were quite onerous and no, we didn’t want to have to donate our hardware to them either (one of the requirements, and this DSP, TMS320c31 board wasn’t cheap on our budget). We did do some CPU detection and alteration of some of the floating point stuff in particular, as cache, pipeline, and ordering issues were quite different inside the then-current crop of cpus.

Would that have identified us (merely) as extremely seriously good coders?
How much does the way things were broken down affect these metrics? There really is a very narrow range of “best way” to divide up streaming DSP work in multiple threads such that users can have realtime previews of special efx while the UI remains responsive to input on the controls, and the output is glitch-free.

So, can one programmer’s “perfect” code be told from another, or just that this guy (or gal or team) is really, really good?

Slightly OT..

Of course, one has to laugh at some of the things people attribute these days to “state level actors” – most of which my team could, and did, easily accomplish in an afternoon with not too above average toolsets. Reverse engineer a hidden debug bios in a PIC chip (which violated some patents, hence hidden and “prevented from being downloaded by their own tools”)…but of course not ours – we routinely wrote our own just to help debug what was going to be our real project, finding most of the manufacturers efforts somewhat anemic to say the least. It was always the first job on the project for something using a new embedded uP for a customer we did product design for.

These days…heck if you want to flash a bios, the code on a drive, whatever, not only can you get the tool to do so freely from the manufacturer, but at a minimum two samples of “working” code – what was in there already, and the update. It takes “state level actors” to figure that out? Really? Even then we could look at a batch of hex and know which of very few popular uP’s it was for – and there’s not a whole lot more basic instruction sets out there now…and of course, there are free decompilers for each readily available.
These guys pulled it off pretty easily and with a uP with instruction mapping so dense there was only one illegal instruction… http://www.bunniestudios.com/blog/?p=3554
I leave the implications for the broken (and by nature, not fixable) USB protocol to the imagination of the reader. Oh, wait, no need: http://hakshop.myshopify.com/products/usb-rubber-ducky-deluxe?variant=353378649

I guess people who can’t solder need things like this…
It’s a good thing most of us accomplished engineers have the “martial arts” effect going for us – we made all this cool stuff – we don’t have anything to prove, and by golly, we don’t want to break it – in my case it let me retire while young enough to enjoy it. Mine’s the white hat in the corner.

Thoth January 1, 2016 9:44 PM

@Clive Robinson, Nick P, Markus Ottela, Secure Tunneling et. al.
We know that spying, and especially the use of metadata and routes, has been one of the foundations of SIGINT/COMINT. @Clive Robinson has also mentioned using the broadcasting method of sending messages to hide the origin of sender/receiver. Trying to make a whole new protocol would allow SIGINT to pick up on the protocol’s signature and attempt traffic analysis.

The better method would be to use an existing and commonly relied on public protocol as a tunnel to hide the actual message.

Some suggested common tunneling protocols:
– Bittorrent
– DTLS (TLS over UDP)
– plain old HTTPS
– FTP/TFTP/SFTP
– SSH

Any other suggestions for a commonly used tunnel protocol (best if it has broadcast messaging support) to help establish secure messaging over a broadcast protocol?
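To make the “plain old HTTPS” option concrete, here is a minimal sketch (in Python, with a made-up drop-point URL and frame size): the real message is assumed to be encrypted and authenticated elsewhere, and is merely carried as the body of an unremarkable POST so that on-path observers see only routine web traffic. This is an illustration of the tunnelling idea, not a vetted design.

    # Minimal sketch: carry an already-encrypted payload inside an ordinary HTTPS POST.
    # The URL, header values and 4096-byte frame size are illustrative assumptions only.
    import os
    import urllib.request

    def send_covert(ciphertext, url="https://example.com/upload"):
        # Pad every request to a fixed size so the length itself leaks less metadata.
        frame = ciphertext.ljust(4096, b"\x00")
        req = urllib.request.Request(
            url,
            data=frame,
            headers={"Content-Type": "application/octet-stream",
                     "User-Agent": "Mozilla/5.0"},  # blend in with browser traffic
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # The payload must already be encrypted/authenticated before it reaches this layer.
    example_ciphertext = os.urandom(256)   # stand-in for a real encrypted message
    # send_covert(example_ciphertext)      # not called here: example.com will not accept it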

Nick P January 1, 2016 9:51 PM

@ Nobody

That was awesome. Excellent example of the out-of-the-box thinking security requires, whether beating it or enforcing it.

@ Clive

Thanks for that nice list, esp given the Haskell link. Author calls it a toy language but it has all the core principles and techniques. Needed one like that. Bookmarked. 🙂

ianf January 1, 2016 10:08 PM

@ Doug Coulter, your guess is as good as mine… I have yet to read the paper (not sure I will since it’d be way above my algorithmic head), but suspect that the decompiled code-to-be-fingerprinted has to be at least of a certain length, type and scope, and that the method is better at determining proximity between known and unknown-author examples, than at comparisons to a database of sample instances. Given (I keep hearing) that most professional utility programmers essentially keep refining the same loop time and again, I’d imagine that that ID-decompiler looks for things like median length, frequency of and degrees of nested loops and other base constructs, then evaluates all such “insights” in some “master footprint” to arrive at a hashed fingerprint to be compared to others of similar “nature.” I suspect it’d find it harder to fingerprint hard hand-optimized code such as yours, than such relying on known library compiled object patterns.

As an example, had this tool detected this kind of construct (no joke, though the syntax perhaps somewhat misremembered by me):

    IF A WHILE X THEN B UNTIL Y ELSE C ENDIF ELSE Z ENDWHILE

… I’d have recognized it as could-but-have-been-written-by (a, once) BBC Basic programmer; due to error in the interpreter, it allowed this kind of Mœbius-like code… with endless loops and now and then interesting bells and whistles.
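To make that speculation concrete, here is a minimal sketch of the kind of crude stylometric features guessed at above (construct frequencies, nesting depth, average line length), compared by distance so that an unknown sample is attributed to the nearest known author. The feature set is invented for illustration; it is not the method of the Caliskan-Islam et al. paper.

    # Illustrative sketch only: crude style features of the sort speculated about above.
    # NOT the technique from the Caliskan-Islam et al. paper.
    import re
    from collections import Counter

    KEYWORDS = ("if", "while", "for", "switch", "goto", "return")

    def style_features(source):
        lines = source.splitlines() or [""]
        counts = Counter(re.findall(r"\b(" + "|".join(KEYWORDS) + r")\b", source))
        depth = max_depth = 0
        for ch in source:                  # brace nesting as a proxy for loop nesting
            if ch == "{":
                depth += 1
                max_depth = max(max_depth, depth)
            elif ch == "}":
                depth = max(depth - 1, 0)
        feats = {k: counts[k] / len(lines) for k in KEYWORDS}
        feats["max_depth"] = max_depth
        feats["avg_line_len"] = sum(len(l) for l in lines) / len(lines)
        return feats

    def style_distance(a, b):
        # Smaller distance = more similar style; attribution picks the nearest known author.
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5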

ianf January 1, 2016 10:48 PM

@ Clive (cc: Nobody)

Even the dumbest of C floor Execs should understand what a “security end run attack” is after looking at those four photos.

You’re kidding, right? Or do you R.E.A.L.L.Y think that people, never mind dumbest C floor Execs, go around thinking in terms of identifying and labeling security end run attacks? If so—just what warped view of what it takes to become a C floor exec (and then your uppercase Exec) do you subscribe to?

BTW. it took me a while to understand that these were not physical car keys, but electronic fob-keys of (must be) quite newish Yankee car models… and that, given additional distance and steel plate to the ignition sensor, poorly designed ones to begin with.

Doug Coulter January 1, 2016 11:39 PM

@ianf
Yup, I’ve seen just what you describe myself – but as source code. I could surely tell which of my brilliant 2 compatriots (I was lucky in hiring!) wrote something, and a little of their mood by the source. Not as much by the final binary, but yes, some. Your point is totally valid. In retrospect, it’s obvious to me, anyway. One of those guys would finish before the vision statement could be fully articulated, the other polished endlessly and could never finish anything – you had to tell him a job needed 150 features, and take it away at 120. The former you had to take out and get drunk so as to complete describing the task before he finished, as no one likes doing it over, after all. But neither was run of the mill – both were at extremes. When we coded “together” as in “extreme programming” which book came out years later – I doubt you could tell who was at the keyboard at all, other than that it was “us” collectively. You could definitely tell our code from most of our customer’s – they hired us for a reason, after all, and we thought they were glad they did.

Perhaps I was making the wrong point, or looking at things in a less useful way.

I would guess that most of the use of programmer ID via binaries would be to figure out where some malware came from – most other code has some sort of attribution already, after all. And those guys, from what we saw (we captured, but didn’t “catch”, a lot of viruses from the dialup days forward, and analyzed them) – aren’t usually all that great at programming – one or two nifty tricks perhaps, and as you say, endless refinement of the same thing(s). IIRC, we could sort of tell them apart by the skill evidenced, and a bit by styles. Not sure we could attribute with any confidence, other than to say “this is probably not the same guy”.

That followed an interesting progression itself. At first, most of the malware we saw was with a free (Borland C or Pascal) compiler – which tagged itself in the executable, and guys too dumb to even strip out the debug symbols…gee, that was easy, we had debuggers and could even see their comments – source – and so on. But those were “kids playing”. Lulz.

Later, there was money in it. The code got a lot better, and we started to see patterns in the binaries we were familiar with, because MS Dev Studio’s compiler had these various quirks we were familiar with from our other work, what it would do with this or that construct in terms of creating a binary from it. It became obvious that with money in it, better programmers were getting involved. Still hardly anything more clever than return oriented programming, which was just starting to happen when I stopped watching – and which is now kind of passe if I understand things like address space layout randomization – but it was a slick trick at the time – and one we adopted for some legit code where we had the necessary control of the environment to make it a safe technique. We were always about efficiency – we wanted to produce code that created the reaction “I didn’t know my machine could do that!”.

I’m unaware of things that might have been state actors – and if what we know now is close to true, most of their work is not going to show up at your machine anyway for analysis – they’ve MITM’d whatever and that class of attack. One would think they’d prefer that over something that leaves tracks in your bios and so on. You would think they’d be more concerned about attribution than most.

The kind of trick linked by @nobody seems more where the gain at the margin is nowadays – the total end-run (not even a subtle side-channel), but I am no longer in the active biz, though I’m still doing things with computers. As a retired homesteader now, I’ve developed (perhaps even anticipated, been at this awhile) a network of things – like IOT, except nope, no internet involved, just LAN. Not even all ethernet, in fact. Sometimes hard wires are more reliable than hoping a computer or interface hasn’t crashed when a function is really important, or a failure means some danger.

I have no need to control the various functions of my (off grid solar, water, heat, garden, etc) stuff from “on the road”, and the IOT seems like a bunch of disasters waiting to happen, not even considering the crazy loss of privacy. One could get worried about things like persistent flashed bioses and suchlike – secure hardware used to be a given (or close – a Z80 didn’t have the gates to hide things in at the price) and it used to take a UV lamp or at least a jumper move or a parts change to compromise things like that – I’ve heard nickp mention his thoughts on it here a few times and generally agree that it’s darned hard these days to be sure (They call it hardware for a reason? I started out as a hardware guy.)

At some point I suppose paranoia is more or less useless – if the “bad guys” want you, here comes the black conveyance and you’re tactically outnumbered, and there you are – you are had. If they really wanted you – they wouldn’t have to spy on you to get evidence – I myself can edit audio/video/documents well enough that “god himself couldn’t tell it’s fake”. Wouldn’t even need Cardinal Richelieu’s six lines. I suppose the surveillance is mainly to help them figure out who to want to “get” in the first place.

I suppose this is where Bruce would chime in with some words about balance, risk and costs. I think I’m personally in a happy place there, but no two situations are identical. I’ve been familiar with the cost of false detect * probability of false detect = (ideally) cost of missed detect times probability of missed detect – and often been tasked with reducing the overlap, or what I call increasing the dynamic range – not merely setting a threshold at min cost, but improving the systems so that when those are set equal, cost is lower than before. In other words, rather than simply setting the perfect decision threshold, move the two distributions further apart for reduced overlap.
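To put rough numbers on that balance, here is a minimal sketch with made-up costs and score distributions: sweep a detection threshold, compute the expected cost of false alarms and of missed detections, and keep the threshold with the lowest total. Doug's further point still stands: the bigger win is pushing the two distributions apart so that any threshold costs less.

    # Minimal sketch of the decision-threshold trade-off described above, using
    # made-up numbers for the costs and the two score distributions.
    import random

    random.seed(0)
    benign  = [random.gauss(0.0, 1.0) for _ in range(10000)]  # scores of harmless events
    hostile = [random.gauss(2.5, 1.0) for _ in range(10000)]  # scores of real attacks

    COST_FALSE_ALARM   = 1.0    # hypothetical cost units
    COST_MISSED_DETECT = 20.0

    best = None
    for i in range(-300, 600):
        t = i / 100.0
        p_fa = sum(s > t for s in benign) / len(benign)       # false-alarm probability
        p_md = sum(s <= t for s in hostile) / len(hostile)    # missed-detection probability
        total = COST_FALSE_ALARM * p_fa + COST_MISSED_DETECT * p_md
        if best is None or total < best[0]:
            best = (total, t, p_fa, p_md)

    total, t, p_fa, p_md = best
    print("threshold=%.2f  P(FA)=%.3f  P(MD)=%.3f  expected cost=%.3f" % (t, p_fa, p_md, total))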

I do wonder what, in the end, attribution of malware really buys us, however. I can see LEO types drooling over it, but it seems like a whack a mole game to me, as soon as you get one – another pops up, it’s not hard to learn how to be at least an annoying amateur at malware these days. And since attribution wouldn’t be sure, it’s only one step of a few required to “take some loser out of the market” anyway. So, a fun concept to toy with, but…given the results of the “wars on X” maybe we should just declare a war on good code and systems, instead.

As it is, most of the stuff I see these days reminds me of the old saying – “if we built buildings the way we do systems, the first woodpecker to come along would destroy civilization”.

Wesley Parish January 2, 2016 2:28 AM

Interesting read:

I miss when people took time to be exposed to opinions other than their own, and bothered to read more than a paragraph or 140 characters. I miss the days when I could write something on my own blog, publish on my own domain, without taking an equal time to promote it on numerous social networks; when nobody cared about likes and reshares, and best time to post.

http://www.theguardian.com/technology/2015/dec/29/irans-blogfather-facebook-instagram-and-twitter-are-killing-the-web

Hossein Derakhshan (@h0d3r) is a Tehran-based author. He is currently working on an art project called Link-age to promote hyperlinks and the open web.

Clive Robinson January 2, 2016 3:27 AM

@ Thoth,

The main problem with existing protocols is they are very asymmetric in the data direction, thus they leak metadata quite badly.

The reason for the asymmetry is both historical and the way the internet is used resulting in “efficient” protocols.

Put simply the Internet was about transporting files efficiently, mainly from a server to client. Thus you have a very short pull request packet followed by a whole load of short acks from the client, with the server side providing mainly long data packets. On the comparatively rare occasions the file transfer was in the other direction, the short acks would come from the server. Thus without padding the metadata of packet length identified uploaders from downloaders and the number of packets would often fingerprint the files. Thus ringleaders / visionaries / rabble rousers / commanders could be easily identified by authorities, just by passively watching packets flow by.

Thus you need to look at more balanced protocols, which unfortunately are normally for interactive services such as phone calls or video conferencing.

These protocols however have their own characteristics that give away whether they are carrying real or fake audio/video.

Thus it would be better to start off with a “Token Ring over IP” type protocol, where a fixed number of packets goes from node to node around a ring.
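A minimal sketch of that constant-traffic idea (frame size and slot count are arbitrary illustration values): every node emits the same number of fixed-length frames per cycle, filling them with real ciphertext when it has something to say and with random padding otherwise, so an outside observer sees the same packet pattern either way.

    # Minimal sketch of the fixed-rate, fixed-length framing described above.
    # Sizes are arbitrary; real ciphertext is assumed to look indistinguishable from random.
    import os

    FRAME_SIZE = 1024    # bytes per frame
    SLOTS_PER_CYCLE = 8  # frames each node sends every cycle, no matter what

    def frames_for_cycle(outgoing):
        frames = []
        for slot in range(SLOTS_PER_CYCLE):
            if slot < len(outgoing):
                payload = outgoing[slot][:FRAME_SIZE]
                payload += os.urandom(FRAME_SIZE - len(payload))  # pad short messages
            else:
                payload = os.urandom(FRAME_SIZE)                  # pure cover traffic
            frames.append(payload)
        return frames

    # With or without real traffic, the on-the-wire pattern is identical:
    idle = frames_for_cycle([])
    busy = frames_for_cycle([b"already-encrypted message"])
    assert len(idle) == len(busy) and all(len(f) == FRAME_SIZE for f in idle + busy)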

Thoth January 2, 2016 4:53 AM

@Clive Robinson
It seems like TOIP is a rather old protocol. Would look into it soon. Is there a necessity for the exterior tunneling protocol to be symmetric during the initialization of the external tunnel? The tunnel’s purpose is simply to disguise the internal actual secure messages to bypass Deep Packet Inspection, and the more convincingly the external tunnel looks like a common encrypted protocol (e.g. HTTPS) the more likely it is to survive being inspected, whereas the internal actual secure messages would definitely need to be symmetric.

Clive Robinson January 2, 2016 6:21 AM

@ ianf,

It’ll be easier to deal with your points in reverse.

electronic fob-keys… and that, given additional distance and steel plate to the ignition sensor, poorly designed ones to begin with.

It depends on the spec, but as a general case the specs are designed for “good customer experience”, the same as credit cards etc. That is, their real purpose of “secure authenticated transactions” is subservient to the manufacturer’s “good name” and “customer support costs”.

Thus, as these electronic keys have to work from within jacket pockets on freezing days, through strong interference from shared ISM band usage, and with the customer’s desire to unlock from anything up to fifty meters away (think having loads of shopping on a rainy day: the customer comes out of the store and, whilst still under the shop awning, puts one handful of bags down, operates the key, then picks up the bags to rush quickly to the car to minimise soaking / getting hair wet / etc etc), the output power is quite high.

Marketing are looking at “customer experience”; do you think they would let you p155 on their parade with ludicrous stories about people stealing safes to steal a car?… Long answer short: not a snowball’s chance in Hell. But that is exactly what a “thinking hinky” thief will do, and now it’s known, so will many others. That as they say “Is just the way the cookie crumbles”.

You’re kidding, right? Or do you R.E.A.L.L.Y think that people, never mind dumbest C floor Execs, go around thinking in terms of identifying and labeling security end run attacks?

Well, as you yourself indicate, you had initial problems understanding what the security issue was.

One aspect that follows on from the Dunning-Kruger effect is that above-average practitioners in any field of endeavour tend to underestimate their abilities, and thus think others should easily understand the issues at hand, when in fact they really don’t, through lack of contact with the problem domain.

A lot of people outside of the US have no idea what an “end run” is; they tend to call it more appropriately an “own goal” or similar. Thus when we glibly talk of “end run attacks” we have a habit of thinking it’s “self explanatory”. However, when pushed for a clarifying explanation, many security professionals will pick a true but poor example such as “keyboard logger” or “shoulder surfing via covert CCTV” etc etc. Non domain experts don’t understand those words, they just think “jargon” or pick up on a word they do know like CCTV and get the wrong impression of what is being said.

The joy of this example is it’s “unexpectedly off the wall” and it’s actually quite funny. Which means it is going to get their attention and the true meaning of “end point security end run attack” will get through loud and clear.

As for “Dumb C level Execs”, you might not like it but it’s a variation on “Dumb Professor” or “not street wise”. The thing is, usually by the time you get to C level Exec you are fairly set in your ways, as your prime new-learning years (18-38) are behind you. In your own field you are smart and on the ball from experience, but in new fields of endeavour you lack the experience and the liveliness of mind to snap up new/novel ideas. Thus you are on the same starting blocks as Joe Sixpack, whose field of endeavour is couch surfing. Where you get to after that is based on how good you are at skill transfer from one field of endeavour to another; it’s why once upon a time Renaissance Man was highly prized.

CallMeLateFor Supper January 2, 2016 8:10 AM

Education theatre. And no doubt eye-wateringly expensive.

“[…] a cottage industry of counterterrorism training in recent years aimed at teaching people how to spot would-be terrorists before they attack.

“These behavioral indicators have become central to the U.S. counterterrorism prevention strategy, yet critics say they don’t work. ‘Quite simply, they rely on generalized correlations found in selectively chosen terrorists without using control groups to see how often the correlated behaviors identified occur in the non-terrorist population,’ Michael German, a former FBI agent who is currently a fellow at the Brennan Center for Justice at New York University School of Law, told The Intercept.”

https://theintercept.com/2015/12/31/prior-to-san-bernardino-attack-many-were-trained-to-spot-terrorists-none-did/

Thoth January 2, 2016 8:27 AM

@all
Snake-Oil Key (EveryKey). It seems the security snake-oil business is thriving.

What is wrong with EveryKey ?

1.) Just like every other security product claiming “Military Grade Encryption”, it almost never fails to fall into some category of weak encryption (below a 128-bit key or using ECB mode for symmetric ciphering). Same trick that CyberArk uses to promote their crapware.

What defines acceptable Military Grade Encryption to the US Military for at least Top Secret clearance is the use of at least a 256-bit key for symmetric crypto, a 4096-bit key for RSA or DH, and 384 bits and above for ECC crypto. GCM mode is also encouraged. Those are the NSA Suite B algorithms with NIST-recommended key lengths and modes for Top Secret level protection of data (a minimal sketch follows this list).

Even with the above algorithms, without a FIPS 140-2 Level 3 and above and also a CC EAL 4+ and above specification, it would not be acceptable in the Government sector (or even in the Banking sector with PCI-DSS and EMV governance).

2.) Bluetooth OTA updates ? Just like insecure phone updates OTA (Over-the-Air) ? Hmmm….

3.) Device usage without login into the EveryKey device. It is a very, very poor idea to simply use proximity to detect and unlock with the device around. If a user has his EveryKey taken from him before he can deactivate the device, his stolen EveryKey can be presented and uninstalled or logged in without his permission, or he can be coerced…
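For contrast with the vague “Military Grade” claim, here is a minimal sketch of the baseline point 1 actually asks for, AES with a 256-bit key in GCM (authenticated) mode, using the third-party Python ‘cryptography’ package. Certification (FIPS 140-2 / CC EAL) and secure key storage are of course separate problems entirely.

    # Minimal sketch: AES-256 in GCM mode, the baseline named in point 1 above.
    # Uses the third-party 'cryptography' package; key management is out of scope here.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # 96-bit nonce, must never repeat per key
    plaintext = b"example secret"
    associated = b"device-id:0001"              # authenticated but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, associated)
    assert aesgcm.decrypt(nonce, ciphertext, associated) == plaintext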

What could have been done to improved the EveryKey:
1.) Switch to an AES 256-bit key and get a FIPS 140-2 Level 2 and above and also a CC EAL 4+ and above certification before calling it Military Grade. Even with those certifications, that does not immediately make it Military Grade unless it has been certified by the US DOD and NSA, or some country’s military organisation, for use in an actual military context with Restricted or equivalent and above classification. They should stop the bad hype and just use AES encryption…

2.) Updates should be done via a physical port instead of OTA as it’s more secure. They did not mention if the firmware has been signed and encrypted for a secure update either.

3.) An on device screen and PIN pad would vastly improve the EveryKey’s security where a user would unlock the EveryKey via a PIN entry into an embedded PIN pad then connecting the EveryKey to the target device to unlock it.

4.) The idea of freezing an EveryKey device is partially flawed. It requires all the devices relying on EveryKey to be Internet connected over some secure channel (HTTPS) to listen for an emergency message from EveryKey’s servers on the status of the paired EveryKey device. If the EveryKey device had an onboard secure PIN pad and a screen connected to a Secure Element chip, requiring login before its use, this would avoid the need for every connected EveryKey device to have some kind of secure Internet access.

Both Plastc and Ledger Blue (links below) support an on-screen touchpad that allows PIN entry directly onto the card/device and have a CC EAL 5+ and above smartcard chip as the Secure Element, all packaged into a single platform, which is the most ideal form factor for a more secure device for protecting secrets.

Link:
https://everykey.com
https://plastc.com
https://www.ledgerwallet.com/products/9-ledger-blue

CallMeLateForSupper January 2, 2016 8:30 AM

Interesting blog post by Dan Luu. “How Completely Messed Up Practices Become Normal”

“The data are clear that humans are really bad at taking the time to do things that are well understood to incontrovertibly reduce the risk of rare but catastrophic events. We will rationalize that taking shortcuts is the right, reasonable thing to do. There’s a term for this: the normalization of deviance. It’s well studied in a number of other contexts including healthcare, aviation, mechanical engineering, aerospace engineering, and civil engineering, but we don’t see it discussed in the context of software. In fact, I’ve never seen the term used in the context of software.”
http://danluu.com/wat/

A current article in The Atlantic references “normalization of deviance”:
“What Was Volkswagen Thinking?”
http://www.theatlantic.com/magazine/archive/2016/01/what-was-volkswagen-thinking/419127/

Gerard van Vooren January 2, 2016 9:11 AM

Happy new year and best wishes to all!

@ Dirk,

It’s good to read “inside information” because quite frankly I am fed up with the standard US gov output and the news sites that use “unnamed officials” as sources. What’s interesting in the report is that it confirms what you claimed a long time ago: toppling Assad will lead to the IS flag in Damascus. Anyway, we will see where this proxy war leads. So far “bombing towards democracy” has been a fallacy and has resulted in extremism, which is also quite understandable.

Clive Robinson January 2, 2016 12:02 PM

Did the NSA Take out the BBC?

That is a question that is being asked in some -limited- quarters currently…

As some people might know the UK’s BBC has quite a large web presence, and last Thursday Morning (UK time) it was subjected to a very intense DDoS attack.

The whole attack was a little odd to start off with, and the BBC other than contacting one or two UK agencies has been very very tight lipped about it.

However, information leaked out that some of the originating IP addresses are those that have been suspected of involvement with alleged US government agency activities.

Well the story got a little more interesting, with UK news sources reporting,

    New World Hacking says it carried out the distributed denial of service attack [against the BBC] on Thursday morning

Apparently the US-based NWH, known to the US authorities for attacking the likes of ISIL, decided to “test its abilities” and used the BBC as the test target.

The US Government Executive have been talking up sending in drones etc on the perpetrators of Cyber-Warfare attacks against the US and the West. So at the very least the question that arises is “What are the US Government going to do against NWH for a very obvious, illegal and unjustifiable act of Cyber-Warfare that originated in the US by US citizens against a friendly Nation?”

What are the odds that it will be “squat diddly” or “token at best”?

Especially as some of those IP addresses are suspected of being part of the US Government… This could turn out to be very, very embarrassing for the US diplomatically; after all, the current US President has very publicly accused both China and North Korea of cyber-attacks by proxy through supposed NGOs within their borders…

What makes this any different now it’s inside US borders… Thus some are going to say it’s time the US Government showed strong leadership on this matter, in rounding up and severely punishing all involved…

Time to turn on the popcorn maker and drag up a few comfy chairs and cans of your favourite libation, this could get interesting…

r January 2, 2016 12:21 PM

@Clive,

Coming from the demo scene I’ve suspected that code was capable of fingering authors for a very, very long time. Don’t get me wrong: it’s interesting to see that it’s still mine-able through the compiler output of a higher level language – but these are habitual or educational awareness issues that can be mitigated through training, true awareness and technology.

I think lingual patterns would be harder to cover up than coding ones, design decisions or not, but I’m still interested in the total shock and awe of this advent publicly…

I’m not going to talk about how I think this can be mitigated; I took steps a couple years ago to head off this very suspicion. I’m originally from the x86 demo scene, and among assembler programmers this was assumed, I’d think.

IMNobody2 January 2, 2016 2:01 PM

@Power Through Data, RE: CISA and Congress.

Well said, every word.

There are no technical solutions available to stop invasion of our devices and collection of our every electronic keystroke, swipe, spoken word or image. All is lost. Literally.

It’s come down to a political problem. The corporate-military-government dictatorship has all the power, we have none. It’s up to the people to take it back.

Until then, it’s up to each individual to watch his or her own back. Even dumb-ass collaborating congressmen. But, they don’t really care about it. It’s all BS as usual. Frankly, most people don’t care either. It’s all so convenient and fun.

Oddly, the Snowden Revelations merely expedited our transition to an Orwellian mass surveillance world.

The end.

Grauhut January 2, 2016 2:04 PM

@BoppingAround: Time to market pressure on IoT devs will kill us all sooner or later! 🙂

From 32c3: https://www.youtube.com/watch?v=5gf6mFz1rPM

The older “the usual software base” gets, the wider it’s used because of ttm; the older/weaker the basis of our systems becomes, the harder it will hit us some day when the architecture crashes because some trasher decides it’s time for it to fall.

Imagine what Unabomber Kaczynski could do today or tomorrow.

Clive Robinson January 2, 2016 2:14 PM

@ r,

I’m not going to talk about how I think this can be mitigated…

I can understand that, as the information is quite hard won.

The thing those who have not tried to hide signals for real usually fall foul of, is the incorrect assumption that,

Signal + Noise = Noise

Because signals and noise have different characteristics. Signals have to be decoded, so have things like period by which they can be averaged. Real noise, having no period, thus drops by root n of the number of samples averaged. Faux noise does have a period and can be detected and correlated, and thus negated by a synthetic inverse.

Very much all obfuscation, where a signal has to be intelligible without a secure side channel (and code has to be, in order to run), has a variation of the averaging / inversion issue.
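A minimal numeric sketch of that averaging effect, with made-up amplitudes: the periodic component survives averaging while uncorrelated noise shrinks roughly as one over root n.

    # Minimal sketch: averaging n repetitions of a periodic signal buried in random
    # noise leaves the signal intact while the noise falls off roughly as 1/sqrt(n).
    import math
    import random

    random.seed(1)
    PERIOD = 64
    signal = [0.2 * math.sin(2 * math.pi * i / PERIOD) for i in range(PERIOD)]

    def noisy_capture():
        # one period of the signal plus unit-variance Gaussian noise
        return [s + random.gauss(0.0, 1.0) for s in signal]

    for n in (1, 100, 10000):
        avg = [0.0] * PERIOD
        for _ in range(n):                                     # average n aligned periods
            avg = [a + c / n for a, c in zip(avg, noisy_capture())]
        residual = (sum((a - s) ** 2 for a, s in zip(avg, signal)) / PERIOD) ** 0.5
        print("n=%6d  rms noise left ~ %.3f" % (n, residual))  # shrinks like 1/sqrt(n)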

For those that want to know more, a good place to start is Claude E. Shannon’s near fifty-page article in the Bell System Technical Journal.

Clive Robinson January 2, 2016 2:44 PM

@ Grauhut,

The older “the usual software base” gets, the wider it’s used because of ttm, the older/weaker the basis of our systems becomes…

Could you not just say “teardrop” 😉

For those not old enough to have seen the original have a look at,

https://en.m.wikipedia.org/wiki/Denial-of-service_attack

And scroll down to teardrop, it hit all versions of MS networked OS’s at the time, Linux and other versions of *nix, all of which shared early BSD network code, which had the bug in it. So now you know why some think the BSD logo looks like a “little devil” 😉

Wael January 2, 2016 6:32 PM

@Clive Robinson,

The thing those who have not tried to hide signals for real usually fall foul of, is the incorrect assumption that,

Signal + Noise = Noise

This valid statement will bite you in the rump, in the not so distant future 🙂

Grauhut January 2, 2016 7:10 PM

@Clive: Teardrop was kindergarten stuff, sql slammer was the real deal! One funckyng UDP packet, taking down even midrange switches… 😉

Charles Earl Bowles January 2, 2016 7:55 PM

“Sing a song of sixpence
A pocket full of rye
Four and twenty blackbirds
Baked in a pie

When the pie was opened
The birds began to sing
Was that not a tasty dish
To set before a king?”

Figureitout January 2, 2016 7:56 PM

Doug Coulter
–You mentioned DSP coding, random question if you have the time: ever done any kind of spread spectrum implementations (so RF stuff) or was it mostly audio stuff?

Clive Robinson
–Thanks, yeah looks good but I have compilers for lots of targets already. May do it eventually, whether I actually use it day-to-day is another story (if it has insane bugs generating f*cky binaries, can’t be easily ported and not even support full C say, bleh..not useful). More likely I’m going to modify someone else’s project that looks nice. Go ahead and give me a mean look of disapproval, I’ll write some nice risky “wheelin’ & dealin’ application code” for ya, complete w/ bugs that happen every full moon ;p

65535 January 2, 2016 9:14 PM

@ Nobody, Clive and others

“That is the best demonstration of an “end run” attack against “end point security” I’ve ever seen.”

It is. But, there was some misplaced trust or “social engineering” in the process. The “Dad” had to trust his kid not to take the safe, a semi-risky assumption. I’ll not comment further.

Associated with the above “end-run” is the “Paypal/Krebs account theft”. Although this may have been covered already, I’ll give it a go.

Basically, Brian Krebs had his Paypal account stolen using simple social engineering or confidence game – although Krebs did use two factor authentication.

“You’re sort of missing the point here. I had two-step authentication (PayPal security key fob) enabled, and the attacker got past that. I don’t know if PayPal simply didn’t require it when the password was reset, but the point is that two-factor is kind of useless when someone can just call in and reset your password verbally by answering a couple of out-of-wallet questions.”-Brian Krebs

http://krebsonsecurity.com/2015/12/2016-reality-lazy-authentication-still-the-norm/comment-page-3/#comment-397278

[And]

“The truth of the matter is that two-factor authentication only really works if the company is willing to write off customers who lose access to their 2nd factor…The whole point of having a system where you are required to produce that second factor is simply that you are required to produce it. If it can be bypassed by sweet-talking somebody with admin access, then the additional protection is non-existent.” – Otto

http://krebsonsecurity.com/2015/12/2016-reality-lazy-authentication-still-the-norm/comment-page-3/#comment-397290

[entire post]:

http://krebsonsecurity.com/2015/12/2016-reality-lazy-authentication-still-the-norm/

Next is a similar problem with the widespread transfer of private information from the NSA down to the local police. This will cause a huge amount of leakage via social engineering and other means… Snowden proved it.

“More immediately troublesome, last minute changes to OmniCISA eliminated a PCLOB review of the implementation of that new domestic cyber surveillance program, even though some form of that review had been included in all three bills that passed Congress. That measure may have always been planned, but given that it wasn’t in any underlying version of the bill, more likely dates to something that happened after CISA passed the Senate in October… it seems that the move to take PCLOB out of cybersecurity oversight accompanied increasingly urgent moves to take DHS out of privacy protection… it sure seems that, in addition to the effort to ensure that PCLOB didn’t look too closely at CIA’s efforts to spy on — or drone kill — Americans, Congress has also decided to thwart PCLOB and DHS’ efforts to put some limits on how much cybersecurity efforts impinge on US person privacy.”- emptywheel

https://www.emptywheel.net/2016/01/02/why-is-congress-undercutting-pclob/

Given the car key fob flub, the Krebs theft, and now the widespread dissemination of people’s data without oversight, I am having a hard time believing that “technology” can solve the current problems.

If technology can solve any of the above cases let me know.

L. W. Smiley January 2, 2016 10:57 PM

@Thoth Open Source Intelligence [32C3 CCC]. Have fun watching the Watcher.

Nice presentation. Having an interest in graph theory and how to use it to make sense of big data, create narratives, etc. Using the database at https://www.usaspending.gov/DownloadCenter/Pages/dataarchives.aspx I was able to find that OPM around 2012 was using a Juniper SRX3400 firewall, and someone pointed out that it runs Junos, not ScreenOS. Though I also discovered that Junos has two types of password: $1$ (MD5 hash) and $9$ (reversible obfuscation), the latter apparently cracked: https://forums.juniper.net/t5/Junos/Password-encryption-algorithm-in-Junos/td-p/96208 and http://blog.stoked-security.com/2011/06/juniper-9-equivalent-of-cisco-type-7.html

Anyway I noticed he mentioned that one guy’s resume included “prism process engineer”, which he guessed referred to PRISM, but googling back in 2013 turned up something of interest to me, the PRISM Model Checker, http://www.prismmodelchecker.org/ for studying:

discrete-time Markov chains (DTMCs)
continuous-time Markov chains (CTMCs)
Markov decision processes (MDPs)
probabilistic automata (PAs)
probabilistic timed automata (PTAs) 

Anyway I sent an email to info@transparencytoolkit.org requesting that they might add the https://www.usaspending.gov/DownloadCenter/Pages/dataarchives.aspx databases, since seeing where and for what money is being spent is always revealing, even if it doesn’t include black budget items.

Clive Robinson January 3, 2016 9:53 AM

Are you an Artist?

Or just overly sensitive or even have an ASD?

http://www.huffingtonpost.com/entry/artists-sensitive-creative_567f02dee4b0b958f6598764

Apparently the old “artists are conflicted” idea is partially true, but what causes the conflict is hyper-stimulation from the environment. Not quite seeing sounds or hearing colours, but certainly different from the bulk of the bell curve.

The article has a bunch of questions at the bottom; it would be interesting to find out how “star programmers” etc answer, because they can be extraordinarily creative, and many I know possess that outlook on life that sees the ordinary in everyday life but finds the funny or sardonic side you get in good stand-up comics.

Wael January 3, 2016 11:54 AM

@Clive Robinson,

The article has a bunch of questions at the bottom…

I’m an HSP according to the questions 🙂

“P” for person, not people.

Sid January 3, 2016 12:05 PM

@IMNobody2

I feel your pain brother. We have much to fear.

However, that sort of “All is lost. Literally.” talk is a cowardly, defeatist sentiment that only serves to discourage yourself/others and facilitate the growth of power of that corporate-military-government dictatorship you mentioned. And while there may be no technical solution available to stop a targeted attack against you personally, there are plenty of solutions available to make it much, much more difficult for the non-targeted individual to be surveilled. Feel free to keep complete-and-absolute-privacy as an ideal, but in your day-to-day digital life, don’t neglect to exercise your ability to create speed-bumps/blind-spots/obfuscation/mis-information/compartmentalization and any other activity that puts even the slightest damper on their immoral/criminal surveillance.

All is not lost.

The key is understanding the difference between relative and absolute change. We only discourage ourselves by seeking absolute solutions. Instead, we should be actively implementing those incremental/partial solutions we have available to us – to the degree one’s particular situation, resources, skillset, and circle of influence allows. Ask yourself, are you today doing – at least – the minimum? And how could you do more to help make the situation incrementally better? Please consider helping support the folks doing more than just bitching by fighting hard for us in the courts (e.g., EFF, ACLU, EPIC, Privacy International, etc.). And remember to “boycott” – to whatever degree you’re able – those “free” services, corporations, politicians, and products that fund this immoral/criminal surveillance.

Not just pretty words.

The individual alone may be powerless. But history has shown that many acting together can create a great force for change. It’s rarely the majority that effects change and much more often a passionate, vocal, active, and courageous minority that tips the scales toward an incrementally better world. This is neither the first nor the last time The People find themselves materially threatened by their plutocratic counterparts. This power struggle has been going on for millennia and will continue long after we’re gone. Our ancestors had their battles, these are ours. Take heart in knowing that if the situation was truly hopeless, all would have been lost long before we were born.

Stay frosty my friend.

Daniel January 3, 2016 1:17 PM

@Nick P

“out of the box”

poor choice of metaphors. That key was very much in the box.

Wael January 3, 2016 1:23 PM

@Daniel,

That key was very much in the box.

Pretty good!

@Nick P,

Tell Daniel the box was out of the bigger box (the house.) 🙂

Europe Takes the High-Road January 3, 2016 2:11 PM

There is great hope for many countries of the world. 109 countries are set to follow the EU human rights data protection laws. It’s comparable to the USA Constitution, but with the difference that it will be strictly enforced:

“The European Court of Justice is enforcing the Charter of Fundamental Rights – a document which in Europe has the equivalent legal status to the Constitution in the US – very rigorously. There have been a series of very important rulings this year.
The big takeaway from the most famous of those rulings – Schrems v Facebook – is that ‘essentially equivalent standards’ of data protection are a condition for processing data in Europe.
Protecting privacy is not about choosing which ads you see:
a) it means giving people a choice about whether their behavior online is secretly monitored
b) it means being open and honest to individuals about the data which is being collected and why
c) it means taking the principle of data minimization seriously in an era where anonymity is becoming an anachronism, and cybersecurity can never be completely assured – as a lot of companies in Silicon Valley are realising more and more.

Data protection has truly gone global. 109 countries now have data privacy laws – more than half of all the countries in the world. They are increasingly looking to Europe for models, and several are waiting for the GDPR before adopting their own new or revised rules.
This brings me to my third point.
The GDPR will, nevertheless, be a game changer. I can highlight just a few of the biggest changes.

  1. You, as companies, should only need one phone number to talk to the regulator in the EU. The European Data Protection Board will have its own legal personality and have binding powers. It will be up to us, as independent data protection authorities, to coordinate our positions on questions which affect more than one country in the EU.
  2. Data security will no longer be an ‘optional add-on’. Applying data protection by design in product development will be an obligation. From my discussions in the Bay Area in the autumn, this seems entirely in keeping with the trends in tech innovation, where venture capitalists as well as regulators are expecting companies to invest in solid, robust safeguards against compromising client data.
  3. There will be ‘big data’ protection – There should be more emphasis on controllers being accountable and taking responsibility for the decisions they take, on the basis of their legitimate business interests, in collecting and using personal information for compatible purposes. There should be less recourse to ‘box ticking’ where ordinary customers are expected to ‘consent’ to terms of data processing which they cannot be expected to understand.”

Snowden’s revelations were a HUGE success and as we see here are making the majority of the World a nicer place to live. However in isolated and cut-off China, North Korea and the USA it’s become much worse.

Clive Robinson January 3, 2016 2:44 PM

@ Europe Takes the High-Road,

Snowden’s revelations were a HUGE success and as we see here are making the majority of the World a nicer place to live. However in isolated and cut-off China, North Korea and the USA it’s become much worse.

You forgot the worst of the sinners against Privacy and the Personal Freedom that arises from it: the UK under Cameron and May. Both of whom appear to think Orwell’s 1984 is an introductory guide, and that the mental torture of Jeremy Bentham’s Panopticon should be the bedrock foundation on which to build the “Big Society”.

Hopefully Brexit will either not happen or it will be Eng-damed, where Scotland, Wales and NI will say “Get yea hence to endless purgatory” to the deluded English Conservatives so out of touch with reality, and boot England to a little below Zambia in the world standing.

However my main “new year wish” is that I want to see with the out going UN leader, a proper re-evaluation of the UN Security Council such that the current five permanent members have their veto ability removed.

Clive Robinson January 3, 2016 3:08 PM

@ Doug Coulter,

Long time no hear, I trust you are well, and you still have the beard (mine is getting rather more “badger” in it than I want 🙁 )

I guess your high energy physics experiments are taking up more time than they used to. Likewise I guess you’ve also found out how expensive it is as a hobby (a friend found out just how expensive fully deionized water is when you want to use it as the dielectric in high voltage transmission line caps and ended up making his own).

Yeah, my ASM style carries across all the micros I’ve worked with; worse, it carries across into my C code as well. It comes from having spent a number of years developing RTL for RISC cores and the microcode to turn them into something others find “usable” at the machine code level. I have a bad habit of using stacks and state-machine thinking, and generally regard objects as a decadence too far (like gold-plated caviar). Mind you, you should see the look on some people’s faces when I do write “objects” in ASM; to say it’s picturesque is an understatement, especially when they wibble about “encapsulation”…

Nick P January 3, 2016 3:11 PM

@ Daniel

Ha. The box was the faulty neural circuitry blocking good ideas from getting in or out of the dad’s head. 😉

Falsum Nomen January 3, 2016 4:57 PM

@ Clive
I self-identify as an HSP (highly sensitive person), and I’m always glad to see more exposure of Elaine Aron’s ideas. It can sometimes be not much fun living in a world made largely by and for people who are relatively insensitive to much of the environment that we all share. What is distressing to an HSP, may only be a mild irritant to those with lower sensory-processing-sensitivity (the technical term). So wider recognition of this apparently very common trait (15-20% of population) is always welcome to me. I optimistically look forward to the day it is explicitly considered in town-planning processes.

I wouldn’t be surprised if quite a few posters here are HSPs. Apart from the tendency to process information very thoroughly before acting, one of the associated traits is that HSPs don’t perform well when being watched, and in fact greatly dislike it.

I have wondered if this may go some way to explaining why the majority of the populace don’t seem to be very bothered about surveillance, while a minority find it very oppressive to their well-being.

And yes, I have been a graphic artist and I’m musical and a bit of a computer type, among other things. So the cap definitely fits.

r January 3, 2016 9:40 PM

@doug, clive, ianf.

(i didn’t originally see all you guys on this topic, we all responded to clive at once)

4 examples outside of malware where this could be applied – bitcoin, whonix, i2p and truecrypt.

while whonix itself may not be vulnerable to this form of attribution (does it include any independently developed libraries?) the others most certainly do and are.

if you’re in the public space of cryptography privately my heart is with all of you: stay safe.

@ianf,
i generally believe you’re right about the size of a footprint increasing the viable signal but don’t discredit things like unique spelling/punctuation or self developed tricks and technology to generate a very unique ‘short’ ‘spike’. these fingerprints are a function? of one’s footprint.

@clive,
even the types of noise you include in your signal could serve to amplify the signal.

@doug,
commercial developer reverse engineering private ip lol: that’s a grey area. 🙂

paranoia destroys ya January 4, 2016 4:18 AM

I noticed some odd behavior in a message caught on our in-home phone answering machine.
It was the automated menu and prompts heard for changing the greeting message on the telephone company’s voice mail system. “To listen to your recording, please press 1. I’m sorry, I didn’t hear your response.” (Sometimes its humorous hearing computers trying to have a conversation with no-one.)
Usually these are accessed by calling your own number from that phone.
Many of the robocalls we get with spoofed numbers display our own number.
Does anyone know if in theory whether a spoofed number can be used as a hack into accessing phone messages and other items stored by the telco?
Or maybe it was only the tail end of the prompt asking the caller to leave a message to call back.

Winter January 4, 2016 4:36 AM

@Clive
“You forgot the worst of the sinners against Privacy and the Personal Freedom that arises from it: the UK under Cameron and May. Both of whom appear to think Orwell’s 1984 is an introductory guide, and that the mental torture of Jeremy Bentham’s Panopticon should be the bedrock foundation on which to build the “Big Society”.”

Maybe we should start to think about what will happen in Poland now that their local “Tea Party” has taken absolute power?

The signs are all bad, very bad.

Winter January 4, 2016 9:23 AM

@BoppingAround
Imagine, religious nuts inside the Tea Party gain total control of both houses and the presidency and the supreme court. That is what happened in Poland, but this time it is Catholics.

Now, imagine you are an opposition politician or a journalist.

Nick P January 4, 2016 2:11 PM

@ Clive, Dirk

Linux is not ready for the desktop

Great write-up [by a Linux supporter] of the problems, with a long list of issues that need to be handled. I see it as a kind of critique and HOWTO for people looking to fix obstacles to adoption. The thing that bothers me is that some of these problems go back to the UNIX Hater’s Handbook. A little balance on that is provided by Eric Raymond’s summary and review here. And yet many still… aren’t… fixed…

L. W. Smiley January 4, 2016 3:27 PM

Well, the Baltimore City housing inspectors are planning to do random apartment inspections in my complex starting Thursday. I requested that I be present during the inspection if selected, though Camara v. Municipal Court (1967), https://supreme.justia.com/cases/federal/us/387/523/case.html, indicates that I could demand they produce a warrant. Anyway, the apartment complex customer service rep started raising his voice, insisting they have the right of entry at any time. Indeed, the lease states that the apartment complex or its representatives or agents have the right of entry at any time, but city government inspectors are not apartment complex employees nor their agents. Still working this out with customer service, being polite and nice. I have computers and electronics projects and expensive ceramics and glass out that I don’t want tampered with or broken. I just want to be present during the inspection. I taped a note to my door stating they do not have my permission to enter if I’m not home. Hope they respect this. End runs around the 4th Amendment… sneak and peeks, plus I roll my own tobacco cigarettes, and if my rolling machine or butts draw suspicion…

On a happier note of sorts, I managed to pull the battery connector of my iPod classic 160GB off the circuit board while trying to upgrade to a 500GB mSATA SSD and ZIF adapter board. The happy part is I managed to solder it back down to the circuit board successfully. The 5 terminals are each about a nanometer apart, and had about a nanogram of solder holding them down in the first place. I did create 2 solder bridges with my fine-tip soldering iron, but fixed those with desoldering wick, leaving just enough to keep them connected to the board, and at least it works and passes power diagnostics. Now if only I could get it into disk mode for iTunes to do its restore trick.

Those little 6″ flexible steel rulers from the hardware store work great for opening the iPod classic case; you need about 8 of them and some medium to heavy guitar picks.

Dirk Praet January 4, 2016 5:07 PM

@ L. W. Smiley

Well the Baltimore City Housing Inspectors are planning to do random apt. inspection in my complex starting Thurs. I requested that I be present during inspection if selected, though Camara v. Municipal Court (1967) … indicates that I could demand they produce a warrant.

I recommend you urgently contact your lawyer to send a formal complaint to the City of Baltimore, citing the above-referenced case. Replace the note taped to your door with a copy of this letter.

Thoth January 4, 2016 5:26 PM

@Nick P
The best Linux desktop would still be the Linux Mint variants and off-shoots of this project like PureOS. It is by far the most usable Linux desktop, and I have it running on my Linux PC. PureOS is a more security-focused variant of Linux Mint, with security features enabled by default, and is one of the default OSes shipped with the Purism open-source hardware laptops.

L. W. Smiley January 4, 2016 9:01 PM

@Dirk Praet

Trying to follow your advice. I emailed the letter below and will follow up tomorrow by regular post. I can’t afford a lawyer, and it’s sad that we can only assert our rights if we can afford one or one takes up our cause. We suffer a withering assault through creative or incorrect interpretations of contracts, leases, laws, and Supreme Court decisions, and no one is accountable for simple mistakes in the application of the law:

Stephen Kauffman January 4, 2016
xxxx xxxxxxx Dr
Apt x
xxxxxxx, MD 212xx

Marilyn J. Mosby, States Attorney for Baltimore City
120 East Baltimore Street, 9th Floor
Baltimore, MD 21202

Dear Ms. Mosby

Starting this Thursday, January 7th, 2016, Baltimore City housing inspectors are going to perform random apartment inspections at Wellington Gate Apartments in zip code 212xx. I don’t have a problem with these inspections as long as I am present if my apartment is chosen for inspection. However, if they insist on entering when I am not at home, then they will need to obtain a warrant per the Supreme Court case Camara v. Municipal Court, 387 U.S. 523 (1967): https://supreme.justia.com/cases/federal/us/387/523/case.html
Granted, my lease with Hendersen-Webb, the property manager, gives Wellington Gate, its employees, and agents the right of entry at any time, but this right does not extend to government employees, who cannot be construed as agents of the landlord, even if they are to be escorted by the landlord. Again, all I am asking is to be present during any such inspection, since I have expensive breakable items, computers, and electronics projects out that I don’t want tampered with or damaged. I have brought this to the attention of the landlord, who is pleading helplessness due to the random selection process. So I ask that arrangements be made so I can be present, or that a warrant to enter otherwise be obtained per Camara v. Municipal Court. I may be reached at 410-xxx-xxxx or on my cell at 443-xxx-xxxx. Thank you for your attention. I will send this letter by regular post to your office.

Respectfully,

Stephen Kauffman

cc
Mayor Stephanie Rawlings-Blake
Maryland Disability Law Center
Maryland Legal Aid

Who? January 5, 2016 6:04 AM

@ Figureitout

Google, HP, Oracle Join RISC-V
Great news on open hardware. Get some commercial people on it to make things happen. Still will be very difficult to verify and defend from subversion(unavoidable..attack much easier than defense). Would love to see a “Novena” style computer based around this chip.

Huh?

Google is not exactly a corporation I would trust. Not to mention that they despise their customers, their privacy, and their security.

HP? What HP? The HP that diverted a truck of servers acquired by a customer to the National Security Agency?

Oracle? The Oracle that bought and destroyed Sun Microsystems, and is so open source friendly?

No, this is not great news for open hardware. They have the resources required to make powerful RISC-V processors a reality, but they cannot be trusted.

Dirk Praet January 5, 2016 7:46 AM

@ Nick P

Re. Linux is not ready for the desktop

I sadly have to concur with quite a few of the issues raised on the page, and the situation is probably even worse for desktop *BSD OSes. Linux/BSD on the desktop to date remains an endeavour for computer-savvy folks who are not only willing and able to invest time and effort to make everything work, but just as much to accept certain limitations, depending on what your requirements are. It’s a trade-off everyone needs to make for themselves.

@ L. W. Smiley

I emailed the letter below and will follow tomorrow with regular post.

It would probably help if you could find a law student or a template letter to make it read more like legalese. I equally recommend giving them less leeway than you’re doing right now and threatening punitive damages in case of non-compliance. You may also wish to contact Robinson & Associates in Columbia or call the law offices of Christopher L. Peretti in Riverdale, MD at 301-875-3472. Both seem to have expertise in trespassing cases and offer free consultations.

Figureitout January 5, 2016 8:43 AM

Who?
They have the resources required to make powerful RISC-V processors a reality, but cannot be trusted.
–What do you think I care about? It’s supposed to be open source, would be great to catch any of these companies adding a backdoor too, screwing up open hardware. This doesn’t mean anything if you don’t know how to check it anyway (w/ tools that you think you could readily understand?–But they better tell the truth or chip designs would have dangerous holes).

Then there’d be people like me, who would have to just accept what people say (university researchers w/ some of the tools necessary being most trustworthy to check operation) and maybe have a reference design of a USB chip I could program, maybe more.

BoppingAround January 5, 2016 11:31 AM

[Off-topic, re: Linux] Nick P, Dirk,
Is there a similar list for Windows? Besides http://linuxfonts.narod.ru/why-windows-10-sucks.html which is not exactly what I’m looking for.

Judging from that article, I am probably rather lucky as I cannot relate to most of the problems the author describes. Now I’m curious how many bullets I dodged on Windows 🙂

Nick P January 5, 2016 12:19 PM

@ Who

re RISC-V

You can always feel free to review the open HDL, code, specs, and tests of the RISC-V Rocket processor to feel more comfortable. Then, synthesize your own FPGA netlist or ASIC from that. Include tricky functionality to catch certain swaps or weird attacks. Wire a micro version using TTLs like the Magic-1 did. Split it onto diverse FPGAs while accepting low MHz.

Any number of things for you to do if you worry about the companies pushing further development. The rest of us will just use the OSS deliverables to our advantage. 😉

@ Dirk Praet

Appreciate the review. The thing I like about it is that it’s pretty objective. It’s a list of very specific issues that can be tested, validated/rejected, improved on, compared, and so on. That it’s a huge list covering all sorts of things important for desktop use puts the final nails in the coffin of Linux as “ready for” the desktop.

Certain distros are quite usable and solve many problems. Some people’s use case (esp web browsing or email) just works with one setup with few problems after that. Yet, many have a different experience where there’s all kinds of strange, manual steps found with lots of Googling. If the shit works at all.

Case in point, I’m about to dig through forums on Mint to try to fix an error the upgrade brought in. Two actually. One is the screen flickering sometimes when I’m entering text, where just about no input works. Might come out of it, might not. Graphics driver? X Windows? Desktop? App? Who the hell knows. Another involves dragging a Save As window to see the title under it so I can type it into the box. Rather than just moving the dialog, this resizes the main window behind it and places it directly behind the dialog anywhere I move it. What… the… hell…?

Wish the problems showed themselves before I moved 10+GB of data onto it. So, next is either Fedora (for RedHat tech) or SUSE (your recommend) to see how they are. Unless I fix this “easy-to-use, desktop for grandmaws” with my tech savvy. 😉

@ BoppingAround

The author did include some Windows-specific items in the Linux page. Windows mostly just works, though. Very little on that page has a comparable situation in Windows. That’s really the point of the article. And by Windows, I mean Windows 7: the last, good one. The recent ones will probably lead to a proliferation of such lists. 🙂

Nick P January 5, 2016 12:31 PM

@ Clive

I posted a while back that Modula-2 might be getting a revision or revival. Recently, Cardelli posted an update to the Modula-3 compiler. So, both a major C and C++ alternative are getting some new life. Then, there’s the RISC-V announcement. Interesting times. 🙂

I’m also kicking myself over the Springer book situation. Idk if you remember them recently posting all kinds of Comp Sci books for free that previously cost a lot of money. Many were big time books and some were obscure goldmines. I was thinking I’d delay downloading it since it was a lot of material and I was busy. Recent articles indicate they closed it all back up with no explanation. Speculation includes a rogue employee posting it on the website. In any case, I missed out on legal copies of a lot of good stuff. (sighs) 🙁

KCNA January 5, 2016 1:12 PM

And now in other news…

Shutterfly is being sued for using face scanning technology on uploaded images.

The company had argued that Illinois’ Biometric Information Privacy Act (BIPA) allowed companies to scan images (as long as actual persons would not be scanned). However the U.S. District Judge Charles Ronald Norgle disagreed with Shutterfly’s assertion in this case.

Shutterfly Biometric Data Privacy Class Action Moves Forward
https://wolfandpravato.wordpress.com/2016/01/05/shutterfly-biometric-data-privacy-class-action-moves-forward/

Interestingly though Google+ (the social network), Bing, and Facebook also do similar scanning (and at least FB has been sued as well).

Thoth January 5, 2016 6:21 PM

@Curious
I think we have talked a lot regarding encryption chip backdoors on blog comments in the past and most of us suspected something long time ago.

In fact, most chips have to be viewed with skepticism, as almost every chip made is a black box. Even if there is documentation of its instruction set, how can you be sure the chip does exactly what it was designed to do, and that during manufacturing no one slipped a little something inside?

This is where you design for distributed computing and split your trust between multiple chips, which is @Clive Robinson’s Prison design. Given an Intel chip, would you trust it? It is hard to tell from just looking at documents and instruction sets when something in the actual chip circuitry might be doing something unexpected.

There are a few known and recommended ways to protect yourself from a leaking chip performing encryption, which we have discussed. The first one is never to trust its RNG solely, since a backdoored RNG (similar to the Juniper backdoor case) would make predicting keys much easier. It is better to get your keys from other random sources and then load the keys into the chip to do the encryption operation.

If you want to check for simple backdoors, you can do a deterministic encryption with a known key and watch its output, and if you want more sophistication, you can measure the output line for fluctuations in timing or power to detect side-channel leaking of backdoor information.
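
For illustration, a minimal sketch of that deterministic check in Python (the chip_encrypt call is a hypothetical stand-in for however you actually drive the chip under test; the reference values are the published FIPS-197 AES-128 example vector, and the software reference uses the cryptography library):

```python
# Known-answer test (KAT) sketch for a hardware AES engine.
# chip_encrypt() is hypothetical -- replace it with the real SPI/SDK call.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY       = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
PLAINTEXT = bytes.fromhex("00112233445566778899aabbccddeeff")
EXPECTED  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")  # FIPS-197 App. C.1

def software_aes_ecb(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def chip_encrypt(key: bytes, block: bytes) -> bytes:
    raise NotImplementedError("drive the chip under test here")

# Sanity-check the software reference against the published vector first:
assert software_aes_ecb(KEY, PLAINTEXT) == EXPECTED
# Then compare the chip; a mismatch means it is broken or not doing plain AES:
# assert chip_encrypt(KEY, PLAINTEXT) == EXPECTED
```

Note that a matching answer only shows the chip computes AES correctly; it says nothing about whether keys leak through timing or power, which is why the side-channel measurements are still needed.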

These inspection techniques do not have a 100% rate of catching backdoors in chips. The best method to catch any backdoor is to physically decap an IC chip and look at its physical circuitry, although most modern chips are so complex these days that it would take quite some effort to physically reverse engineer one.

Thomas_H January 5, 2016 9:56 PM

@moz:
Ok, thanks. As I figure it, if it was police brutality, it would have been all over the news anyway. Why anyone would bother his family is beyond me by the way (I certainly was not going to); they deserve some peace at this point.

@Clive Robinson:
The thing those who have not tried to hide signals for real usually fall foul of, is the incorrect assumption that,

Signal + Noise = Noise

Because signals and noise have different characteristics. Signals have to be decoded so have things like period by which they can be averaged. Real noise not having period thus drops by root n of the number of samples averaged. Faux noise does have a period and can be detected and correlated and thus negated by a synthetic inverse.
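
A quick numerical illustration of that root-n averaging effect (numpy; the signal shape and noise level are arbitrary assumptions):

```python
# Averaging n repetitions of a periodic signal buried in non-periodic noise:
# the noise floor drops by roughly sqrt(n) while the signal survives.
import numpy as np

rng = np.random.default_rng(0)
period, repeats = 100, 400
signal = 0.1 * np.sin(2 * np.pi * np.arange(period) / period)   # weak periodic signal
trace = np.tile(signal, repeats) + rng.normal(0.0, 1.0, period * repeats)

folded = trace.reshape(repeats, period).mean(axis=0)             # average on the period
print("noise std before:", round(float(trace.std()), 3))                   # ~1.0
print("noise std after :", round(float((folded - signal).std()), 3))       # ~1/sqrt(400) = 0.05
```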

Lemme have a try (not having read the paper). I would say that adding noise is either completely useless, or makes a coder more recognisable. In the case of real noise, it can rather easily be identified and therefore ignored. Faux noise has the problem you indicate, but also another, depending on how it was generated: 1) obfuscating code/comments by adding machine-generated random text or scrambling comments before compilation -> probably easy to find, 2) keyboard-mashing -> not only easy to find, but may also constitute an identifier for the author, e.g. because the sequences of text are not really random (favoured hand movements), limit themselves to part of the keyboard (might even indicate right- or lefthandedness, or presence of a numeric pad), 3) application of filters that change text to suggest the writer has a specific nationality -> might be able to ID the filter used, will not hide coder quirks.
I guess the attack surface can be reduced by simply stripping off comments before compilation (one less identifier, plus smaller file size). So what do we find in the code itself? Favoured methods, syntax, function names, really now it’s just the same as with any written text: style, origin-related quirks (recurring English typos that only occur with certain nationalities, etc.), inconsistencies related to increased proficiency in the task at hand (also known as “learning”) that may also give indications towards which kind of intellectual failings people have (dyslexia, dementia, ‘this work bores me’). Not much to be done about that, except some kind of universal dictionary system that automatically translates “favoured coding style” into “standard coding style” (needs to be local, no backdoors allowed). I think building something like that will be hard. Well, no, making a system that is perfect will be hard, if not impossible.
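
As a toy illustration of that comment-stripping idea (Python source here; the 2-tuple form puts untokenize into compatibility mode, which also normalises spacing a little and thereby blurs some formatting habits):

```python
# Strip comments from Python source before it leaves your machine,
# removing one of the stylistic identifiers discussed above.
import io
import tokenize

def strip_comments(source: str) -> str:
    toks = tokenize.generate_tokens(io.StringIO(source).readline)
    kept = [(t.type, t.string) for t in toks if t.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)

print(strip_comments("total = price * 1.21  # VAT, because I always forget\n"))
```
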
I’m in two minds whether a programming language that allows multiple paths to reach the same goal (e.g. Perl) is an advantage or not. On one hand, being able to use radically different coding styles allows for a special kind of obfuscation. On the other, humans are inconsistent and it will happen that they use one style when they ought to have used the other and will think “fuck it”, in turn allowing interested parties to link programs written in one style to others in other styles. That is ignoring the possibility of overarching patterns between a single programmer’s multiple programming styles.

Not sure if this issue is actually solvable. Actually, I think it can’t be solved. But it can be made more complex for nosy creeps.

65535 January 6, 2016 1:59 AM

Winblows

“I’m sure everyone who doesn’t have any problems with Win10 has read the 45 pages of terms and conditions, right? …the privacy policy? Start reading and weeping”- Windblows

Good point. The fine print is quite revealing. I’ll touch upon a few of them:

[Microsoft]

Name and contact data. We collect your first and last name, email address, postal address, phone number, and other similar contact data.
Credentials. We collect passwords, password hints, and similar security information used for authentication and account access.
Demographic data. We collect data about you such as your age, gender, country and preferred language.

Interests and favorites. We collect data about your interests and favorites, such as the teams you follow in a sports app, the stocks you track in a finance app, or the favorite cities you add to a weather app. In addition to those you explicitly provide, your interests and favorites may also be inferred or derived from other data we collect.

Payment data. We collect data necessary to process your payment if you make purchases, such as your payment instrument number (such as a credit card number), and the security code associated with your payment instrument.
Usage data. We collect data about how you interact with our services. This includes data, such as the features you use, the items you purchase, the web pages you visit, and the search terms you enter. This also includes data about your device and the network you use to connect to our services, including IP address, device identifiers (such as the IMEI number for phones), regional and language settings. It includes information about the operating systems and other software installed on your device, including product keys. And it includes data about the performance of the services and any problems you experience with them.

Contacts and relationships. We collect data about your contacts and relationships if you use a Microsoft service to manage contacts, or to communicate or interact with other people or organizations.
Location data. We collect data about your location, which can be either precise or imprecise. Precise location data can be Global Position System (GPS) data, as well as data identifying nearby cell towers and Wi-Fi hotspots, we collect when you enable location-based services or features. Imprecise location data includes, for example, a location derived from your IP address or data that indicates where you are located with less precision, such as at a city or postal code level.

Content. We collect content of your files and communications when necessary to provide you with the services you use. For example, if you receive an email using Outlook.com, we need to collect the content of that email in order to deliver it to your inbox, display it to you, enable you to reply to it, and store it for you until you choose to delete it. Examples of this data include: the content of your documents, photos, music or video you upload to a Microsoft service such as OneDrive, as well as the content of your communications sent or received using Microsoft services such Outlook.com or Skype, including the:
• subject line and body of an email,
• text or other content of an instant message,
• audio and video recording of a video message, and
• audio recording and transcript of a voice message you receive or a text message you dictate.
We also collect the content of messages you send to us, such as feedback and product reviews you write, or questions and information you provide for customer support. When you contact us, such as for customer support, phone conversations or chat sessions with our representatives may be monitored and recorded. If you enter our retail stores, your image may be captured by our security cameras.

In short, if you don’t opt out of the above you will have your text, audio, and video collected and data mined.

“We share your personal data with your consent or as necessary to complete any transaction or provide any service you have requested or authorized. For example, we share your content with third parties when you tell us to do so, such as when you send an email to a friend, share photos and documents on OneDrive, or link accounts with another service. When you provide payment data to make a purchase, we will share payment data with banks and other entities that process payment transactions or provide other financial services, and for fraud prevention and credit risk reduction.

In addition, we share personal data among Microsoft-controlled affiliates and subsidiaries. We also share personal data with vendors or agents working on our behalf

…we will access, disclose and preserve personal data, including your content (such as the content of your emails in Outlook.com, or files in private folders on OneDrive), when we have a good faith belief that doing so is necessary to:
1. comply with applicable law or respond to valid legal process, including from law enforcement or other government agencies;
Our Use of Cookies and Similar Technologies
Microsoft uses cookies and similar technologies for several purposes, including:
• Storing your Preferences and Settings. Settings that enable our services to operate correctly or that maintain your preferences over time may be stored on your device. For example, if you enter your city or postal code to get local news or weather information on a Microsoft site, we may store that data in a cookie so that you will see the relevant local information when you return to the site. If you opt out of interest-based advertising, we store your opt-out preference in a cookie on your device.
• Sign-in and Authentication. When you sign into a site using your personal Microsoft account, we store a unique ID number, and the time you signed in, in an encrypted cookie on your device. This cookie allows you to move from page to page within the site without having to sign in again on each page.
• Interest-Based Advertising. Microsoft uses cookies to collect data about your online activity and identify your interests so that we can provide advertising that is most relevant to you. You can opt out of receiving interest-based advertising from Microsoft as described in the Access and Control section of this privacy statement.
• Analytics. In order to provide our services, we use cookies and other identifiers to gather usage and performance data. For example, we use cookies to count the number of unique visitors to a web page or service and to develop other statistics about the operations of our services.
Our Use of Web Beacons and Analytics Services
Microsoft web pages may contain electronic images known as web beacons (also called single-pixel gifs) that we use to help deliver cookies on our sites…
In addition to placing web beacons on our own sites, we sometimes work with other companies to place our web beacons on their sites or in their advertisements… Finally, Microsoft services often contain web beacons or similar technologies from third-party analytics providers…

In addition to standard cookies and web beacons, our services can also use other similar technologies to store and read data files on your computer…
Local Shared Objects or “Flash cookies.” Web sites that use Adobe Flash technologies may use Local Shared Objects or “Flash cookies” to store data on your computer…

Silverlight Application Storage. Web sites or applications that use Microsoft Silverlight technology also have the ability to store data by using Silverlight Application Storage…

When you conduct a search, or use a feature of a Bing-powered experience that involves conducting a search or entering a command on your behalf, Microsoft will collect the search or command terms you provide, along with your IP address, location, the unique identifiers contained in our cookies, the time and date of your search, and your browser configuration. If you use Bing voice-enabled services, additionally your voice input and performance data associated with the speech functionality will be sent to Microsoft.

Retention and de-identification. We de-identify stored search queries by removing the entirety of the IP address after 6 months, and cookie IDs and other cross-session identifiers after 18 months.

Managing Search History. Bing’s Search History service provides an easy way to revisit the search terms you’ve entered and results you’ve clicked when using Bing search through your browser. You may clear your search history in Bing Settings. Clearing your history removes it from the Search History service and prevents that history from being displayed on the site, but does not delete information from our search logs…

You may access Bing-powered experiences when using other non-Microsoft services, such as those from Yahoo!. In order to provide these services, Bing receives data from these and other partners that may include date, time, IP address, a unique identifier and other search-related data. When you click on a search result or advertisement from a Bing search results page and go to the destination website, the destination website will receive the standard data your browser sends to every web site you visit – such as your IP address, browser type and language, and the URL of the site you came from (in this case, the Bing search results page). Because the URL of the Bing search results page contains the text of the search query you entered (which could include names, addresses, or other identifying information), the destination website will be able to determine the search term you entered…

Cortana is your personal assistant. Cortana works best when it can learn about you and your activities by using data from your device, your personal Microsoft account, third-party services and other Microsoft services. To enable Cortana to provide personalized experiences and relevant suggestions, Microsoft collects and uses various types of data, such as your device location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and who you interact with on your device. Cortana also learns about you by collecting data about how you use your device and other Microsoft services, such as your music, alarm settings, whether the lock screen is on, what you view and purchase, your browse and Bing search history…
Cortana regularly collects and uses your current location, location history, and other location signals (such as locations tagged on photos you upload to OneDrive)… Cortana accesses your messages…Cortana learns who is most important to you from your call, text message, and email history. This is used to keep track of people most relevant to you and your preferred methods of communication, flag important messages for you and improve other Cortana services such as speech recognition… To help Cortana better understand the way you speak and your voice commands, speech data is sent to Microsoft to build personalized speech models and improve speech recognition and user intent understanding…Cortana also allows you to connect to third-party services for additional personalized experiences based upon data from the third-party service. For example, choosing to sign into Facebook or LinkedIn within Cortana allows Microsoft to access certain Facebook or LinkedIn data… If you choose to send your full browsing history to Microsoft in Microsoft Edge [most people probably should not send their full browsing history to Microsoft… Although, both Apple and Google Chrome probably do record all browsing history – ed]…
To help you discover content that may interest you, Microsoft will collect data about what content you play, the length of play, and the rating you give it…. [Grove Music] To provide this information, Groove Music and Movies & TV send an information request to Microsoft containing standard device data, such as your device IP address, device software version, your regional and language settings, and an identifier for the content…

[Health apps]

Sharing Health Data.A key value of HealthVault is the ability you have to share your health data with people and services that can help you meet your health-related goals. By default, you are the custodian of any records you create. Custodians have the highest level of access to a health record. As a custodian, you can share data in a health record with another person by sending an e-mail invitation through HealthVault. You can specify what type of access they have (including custodian access), how long they have access, and whether they can modify the data in the record. When you grant someone access, that person can grant the same level of access to services (for example, someone with view-only access can grant a service view-only access)…

[OneDrive]

When you use OneDrive, we collect data about your usage of the service, as well as the content you store in order to provide, improve and protect the services. Examples include, indexing the contents of your OneDrive documents so that you can search for them later and using location information to enable you to search for photos based on where the photo was taken. We also collect device information…

[Outlook]

When you delete an email or item from a mailbox in the Outlook.com web service, the item generally goes into your Deleted Items folder where it remains for approximately 7 days unless you move it back to your inbox, you empty the folder, or the service empties the folder automatically, whichever comes first. When the Deleted Items folder is emptied, those emptied items remain in our system for up to 30 days before final deletion.

[Skype]

Your Skype profile includes your username, avatar, and any other data you choose to add to your profile or display to others. Depending on the profile settings, your Skype profile data is included in the search directory to enable other users of Skype (or products that interact with Skype, such as Skype for Business) to search for you and connect with you… Some Skype products are offered via a partner company’s service and/or supported through a partner company that uses your data subject to the terms of its own privacy policy. Microsoft may access, disclose and preserve your data (including your private content, such as the content of your instant messages, stored video messages, voicemails or file transfers)… qualified third-party websites (“marketing affiliates”) can receive payment for referring users to Skype.com if they complete certain actions, such as the purchase of Skype Credit. If you arrive on Skype.com from a marketing affiliate website, the marketing affiliate will set a cookie on your computer, which is used to qualify them for compensation… Skype applications use notification services available for your device to let you know of incoming calls, chats and other messages when you are not actively running or using the Skype application. For many devices, these services are provided by a third party. These third-party notification services receive information about the caller or sender of the message and portions of the message as part of providing the service and will use this information in accordance with their own terms and conditions and privacy policy. Microsoft is not responsible for the data collected by third-party notification services… In some versions of the Skype software that offer interest-based advertising, you may opt out of interest-based advertising displayed in the software by visiting the privacy options in the software or account settings menu. If you opt out, you will still receive advertisements based on your country of residence, language preference, and IP address location…Skype applications offer audio or IM translation features. When enabled, audio conversations are translated, converted to text and provided as a transcript. Voice and text data are used to provide and improve Microsoft speech recognition and translation services…

[Windows 10]

When you activate Windows, a specific product key is associated with the device on which your software is installed. The product key and data about the software and your device is sent to Microsoft to confirm your valid license to the software. This data may be sent again if there is a need to re-activate or validate your license. On phones running Windows, device and network identifiers, as well as device location at the time of the first power up of the device, are also sent to Microsoft for the purpose of…
Windows generates a unique advertising ID for each user on a device….
Microsoft collects and uses data about your speech, inking (handwriting), and typing on Windows devices to help improve and personalize our ability to correctly recognize your input…

Microsoft operates a location service that helps determine the precise geographic location of a specific Windows device. Depending on the capabilities of the device, location is determined using satellite global positioning service (GPS), detecting nearby cell towers and/or Wi-Fi access points and comparing that information against a database that Microsoft maintains of cell towers and Wi-Fi access points whose location is known, or deriving location from your IP address. When the location service is active on a Windows device, data about cell towers and Wi-Fi access points and their locations is collected by Microsoft and added to the location database after removing any data identifying the person or device from which it was collected. Microsoft may also share de-identified location data with third parties to provide and improve location and mapping services… Some Windows devices have a recording feature that allows you to capture audio and video clips of your activity on the device, including your communications with others.

Microsoft regularly collects basic information about your Windows device including usage data, app compatibility data, and network and connectivity information. This data is transmitted to Microsoft and stored with one or more unique identifiers that can help us recognize an individual user on an individual device and understand the device’s service issues and use patterns. The data we collect includes:
• Configuration data, including the manufacturer of your device, model, number of processors, display size and resolution, date, region and language settings, and other data about the capabilities of the device.
• The software (including drivers and firmware supplied by device manufacturers), installed on the device.
• Performance and reliability data, such as how quickly programs respond to input, how many problems you experience with an app or device, or how quickly information is sent or received over a network connection.
• App use data for apps that run on Windows (including Microsoft and third party apps), such as how frequently and for how long you use apps, which app features you use most often, how often you use Windows Help and Support, which services you use to sign into apps, and how many folders you typically create on your desktop.
• Network and connection data, such as the device’s IP address, number of network connections in use, and data about the networks you connect to, such as mobile networks, Bluetooth, and identifiers (BSSID and SSID), connection requirements and speed of Wi-Fi networks you connect to.
• Other hardware devices connected to the device.
Some diagnostic data is vital to the operation of Windows and cannot be turned off if you use Windows…

Microsoft Edge is Microsoft’s new web browser for Windows 10. Internet Explorer, Microsoft’s legacy browser, is also available in Windows 10. Whenever you use a web browser to access the Internet, data about your device (“standard device data”) is sent to the websites you visit and online services you use. Standard device data includes your device’s IP address, browser type and language, access times, and referring website addresses. This data might be logged on those websites’ web servers. Which data is logged and how that data is used depends on the privacy practices of the websites you visit…Microsoft browser information saved on your device will be synced across other devices when you sign in with your Microsoft account. This information can include your browsing history, favorites, saved website passwords, and reading list. For example, in Microsoft Edge, if you sync your reading list across devices, copies of the content you choose to save to your reading list will be sent to each synced device for later viewing.

[Wifi]

Wi-Fi Sense allows you to automatically connect to Wi-Fi networks around you to help you save cellular data and give you more connection options. If you turn it on, you will automatically connect to open Wi-Fi networks. You will also be able to exchange access to password-protected Wi-Fi networks with your contacts. Please note that not all open networks are secure…

[Windows apps]

If you allow the Camera app to use your location, location data is embedded in the photos you take with your device. Other descriptive data, such as camera model and the date that the picture was taken, is also embedded in photos and videos. If you choose to share a photo or video, any embedded data will be accessible to the people and services you share with. You can disable the Camera app’s access to your location by turning off all access to the location service in your device’s Settings menu or turning off the Camera app’s access to the location service… Your photos, videos, as well as screenshots, saved in your camera roll automatically upload to OneDrive… When you take photos embedded with your location, the Photos app can group your photos by time and location… The People app lets you see and interact with all your contacts in one place. When you add your Microsoft account to a Windows device, your contacts from your account will be automatically added to the People app. You can add other accounts to the People app, including your social networks (such as Facebook and Twitter) and email accounts. When you add an account, we tell you what data the People app can import or sync with the particular service and let you choose what you want to add. Other apps you install may also sync data to the People app, including providing additional details to existing contacts… When you sign in with a Microsoft account on your device, you can choose to back up your information, which will sync your SMS and MMS messages and store them in your Microsoft account. This allows you to retrieve the messages if you lose or change phones. After your initial device set-up, you can manage your messaging settings at any time. Turning off your SMS/MMS backup will not delete messages that have been previously backed up to your Microsoft account…

[Xbox]

Xbox Live includes communications features such as text-based messaging and online voice chat between players during gameplay. In order to help provide a safe gaming environment and enforce the Microsoft Code of Conduct, we will collect, review, and monitor a sample of these communications, including Xbox Live game chats and party chat communications in live-hosted multiplayer gameplay sessions offered through the services…

[Cookies]

Some of the cookies we commonly use are listed in the following chart. This list is not exhaustive, but it is intended to illustrate the main reasons we typically set cookies. If you visit one of our websites, the site may set some or all of the following cookies:

MUID: Identifies unique web browsers visiting Microsoft sites. It is used for advertising, site analytics and other operational purposes.

ANON: Contains the ANID, a unique identifier derived from your Microsoft account, which is used for advertising, personalization, and operational purposes. It is also used to preserve your choice to opt out of interest-based advertising from Microsoft, if you have chosen to associate the opt-out with your Microsoft account.

CC: Contains a country code as determined from your IP address.

RPSTAuth, MSNRPSAuth, KievRPSAuth: Helps to authenticate you when you sign in with your Microsoft account.

NAP: Contains an encrypted version of your country, postal code, age, gender, language and occupation, if known, based on your Microsoft account profile.

MH: Appears on co-branded sites where Microsoft is partnering with an advertiser. This cookie identifies the advertiser so the right ad is selected.

ACH01: Maintains information about which ads you clicked on and where you clicked on the ad.

TOptOut: Records your decision not to receive interest-based advertising delivered by Microsoft.

https://www.microsoft.com/en-us/privacystatement/default.aspx

[Privacy and Warranty]

We share your personal data with your consent or as necessary to complete any transaction or provide any service you have requested or authorized. We also share data with Microsoft-controlled affiliates and subsidiaries; with vendors working on our behalf; when required by law or to respond to legal process; to protect our customers; to protect lives; to maintain the security of our services; and to protect the rights or property of Microsoft…
• DISCLAIMER OF WARRANTY. The application is licensed “as-is”, “with all faults” and “as available.” The application publisher, on behalf of itself, Microsoft (if Microsoft isn’t the application publisher), wireless carriers over whose network the application is provided and each of our respective affiliates, vendors, agents and suppliers (“Covered Parties”), give no additional contractual warranties, guarantees or conditions in relation to the application. You have all mandatory warranties foreseen by law, but we grant no other warranties. Covered Parties exclude any implied mandatory warranties, including those of merchantability, fitness for a particular purpose and non-infringement.
• 11. LIMITATION ON REMEDIES AND DAMAGES.
o a. The application publisher shall not be liable for any user content or other third-party material, including links to third-party websites, and activities provided by users. Such content and activities are neither attributable to the application publisher nor do they represent the application publisher’s opinion.
o b. The application publisher shall only be liable if material obligations of these licence terms have been violated.
o c. The application publisher, its vicarious agents and/or its legal representatives shall not be liable for any unforeseeable damage and/or financial loss with respect to any indirect damage, including loss of profit, unless the application publisher, its vicarious agents and/or its legal representatives have at least acted with gross negligence or willful misconduct.
o d. Any statutory no-fault liability of application publisher, including, without limitation, liability under the product liability act and statutory liability for breach of warranty, shall remain unaffected by the limitation of liability. The same shall apply to liability of application publisher, its vicarious agents and/or its legal representative in the event of fraud or their negligence resulting in personal injury or death.
o e. No other contractual and legal claims besides those covered in subsections (i) to (iv) of this section 11 may result from these application license terms and/or the use of the application or services made available through the application.
The following products, apps and services are covered by the Microsoft Services Agreement, but may not be available in your market.
• Account.microsoft.com
• Advertising.microsoft.com
• Arrow Launcher
• Bing
• Bing Apps
• Bing Desktop
• Bing Dictionary
• Bing in the Classroom
• Bing Input
• Bing Maps
• Bing Navigation
• Bing Reader
• Bing Rewards
• Bing Search app
• Bing Toolbar
• Bing Torque
• Bing Translator
• Bing Webmaster
• Bing Wikipedia Browser
• Bing.com
• Bingplaces.com
• Citizen Next
• Cortana
• Default Homepage and New Tab Page on Microsoft Edge
• Device Health App
• Groove
• Groove Music Pass
• HealthVault
• Choice.microsoft.com
• Maps App
• Microsoft account
• Microsoft Family
• Microsoft Films & TV
• Microsoft Health
• Microsoft Translator
• Microsoft Wallpaper
• Microsoft XiaoIce
• MSN Dial Up
• MSN Explorer
• MSN Food & Drink
• MSN Health & Fitness
• MSN Money
• MSN News
• MSN Premium
• MSN Sports
• MSN Travel
• MSN Weather
• MSN.com
• Next Lock Screen
• Office 365 Consumer
• Office 365 Home
• Office 365 Personal
• Office 365 University
• Office Online
• Office Store
• Office Sway
• Office.com
• OneDrive
• OneDrive.com
• OneNote
• Onenote.com
• Outlook.com
• Picturesque Lock Screen
• Pix Lock
• Send
• Skype
• Skype in the Classroom
• Skype Manager
• Skype Qik
• Skype WiFi
• Skype.com
• Smart Search
• Snipp3t
• Spreadsheet Keyboard
• Store
• Sway.com
• Tossup
• Translator
• UrWeather
• Windows Live Mail
• Windows Live Writer
• Windows Movie Maker
• Windows Photo Gallery
• Windows Store
• Xbox and Windows Games published by Microsoft
• Xbox Live
• Xbox Live Gold
• Xbox Music
• Xbox Store

[in short you have no redress if you use Microsoft products]

https://www.microsoft.com/en-gb/servicesagreement/default.aspx

@ Dirk Praet

“[Any non-tech savvy individuals]Who probably hasn’t the foggiest idea what Bitlocker, TPM or telemetry services are anyway. Anyone falling into that class for all practical purposes has already been unwittingly assimilated into the Fenestric Matrix™ or MicroBorg™ Collective.” –Dirk

That’s not the point!

It is easy to fall into that category for a variety of reasons [Time, money, and technical training].

If you have to exchange information with said Non-tech-savvy-MicroBorg people – like your family and friends – your emails, texts, SMS, IP address, and possibly your social contact list will be intermixed with said “MicroBorg” individuals.

Worse, as in the “PayPal/Krebs” case, you could be maliciously linked to a terrorist and put on a list – which could cause you endless problems.

https://www.schneier.com/blog/archives/2016/01/friday_squid_bl_508.html#c6714309

In conclusion, as the tentacles of the NSA/Corporate Data Mining Machine multiply, the higher the chance you will be de-anonymized and drawn into the Data Mining grinder.

The Data Mining Grinding Machine includes all Corporations and entities who data mine, and all financial institutions who are involved in your Credit Score [Microsoft, Google or Alphabet, YouTube, Facebook, Twitter, Pinterest, PayPal, and all financial institutions who provide services to Credit Rating agencies] and so on. Those are simply facts.

Being technically savvy only goes so far. Once you are surrounded by an ocean of non-tech-savvy “MicroBorg” individuals, you will become a victim of the Data Mining disease as you exchange information with them.

Fixing the entire problem will require teaching people not to click the “OK” button at every step, political activism by the EFF, and/or a showdown between the US Supreme Court and the Executive Office of the President, which issues dubious executive orders. And, with some fortune, the EU de-linking its spy machine from the American spy machine. Some of this we will have little control over.

Excuse the grammar errors and other errors. This is off the cuff writing.

Thoth January 6, 2016 2:51 AM

@Nick P, Clive Robinson, JackPair fans et. al.
JackPair has managed to get their form factor and boards designed, and now they are left with the issue of improving voice quality. It seems they are trying hard to improve voice compression performance using the open-source Codec2 library, but apparently there is a higher error rate from their analog modem over GSM channels that affects voice quality in the 300-600 bps range.

They also looked into commercial closed-source codec offerings, which give better results, but the downside is that they carry export-control restrictions affecting Cuba, Iran, North Korea, Sudan, Syria, Armenia, Azerbaijan, Belarus, Burma, Cambodia, China, Georgia, Iraq, Kazakhstan, Kyrgyzstan, Laos, Libya, Macau, Moldova, Mongolia, Russia, Tajikistan, Turkmenistan, Ukraine, Uzbekistan and Vietnam.

Anyone who has suggestions for improving voice compression at 300-600 bps can try to guide them so that they can stick to an open-source voice compression technology.
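
For a sense of why that range is so punishing, here is a back-of-the-envelope bit budget (the 40 ms frame size is only an assumption for illustration; the actual codec framing may differ):

```python
# Bits available per speech frame at very low bit rates.
frame_ms = 40  # assumed frame length, for illustration only
for bps in (300, 600, 1200):
    bits = bps * frame_ms / 1000
    print(f"{bps:>4} bps -> {bits:4.0f} bits per {frame_ms} ms frame")
# At 12-24 bits per frame, every bit flipped by the analog modem over a GSM
# voice channel corrupts a large fraction of the frame, hence the quality hit.
```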

Link: https://www.kickstarter.com/projects/620001568/jackpair-safeguard-your-phone-conversation/posts/1448629

Gerard van Vooren January 6, 2016 2:53 AM

@ Skeptical, Dirk,

About “But the Turkish Government and political/legal system is far better than Assad’s regime.”

Assad himself went to college in the UK. He is a well educated man. Before the civil war he was seen as a respectable man who had the best intentions with Syria and a reliable partner for most countries.

What we have seen in the last couple of months in Turkey was a dictatorship in the making. Most dictators who have been elected behaved quite well until they saw that the elections didn’t go well, or that they couldn’t be re-elected anymore because of the law. At that point they start to become dictators. That is because they get nervous: they know that by then they have already done things that a successor will investigate. A dictatorial regime can work quite well until there are uprisings. Then you find out how brutal it becomes. The last elections in Turkey could also mean that this country ends up like Syria if things don’t work out the way they should.

There is a Dutch book about this subject and it is called “Het dictatorvirus”.

tyr January 6, 2016 3:52 AM

If you think Assad is a villain how about a fat kid
with a bad haircut and an H-Bomb (that’s a fusion
weapon using a Bethe cycle for those not up on the
differences ).

So far his rocketry won’t quite reach the east coast
of the USA but that’s just a technicality which can be
overcome by a little hard work.

If I was Billy Gates I’d be looking at real estate out
of range and nowhere near any big targets like those
who build defense stuff.

Clive Robinson January 6, 2016 5:40 AM

@ tyr,

So far his rocketry won’t quite reach the east coast
of USA but that’s just a techicality which can be
overcome by a little hard work.

As I’ve said before with nuclear, biological and chemical weapons, it’s not the WMD payload but the delivery mechanisms you need to worry about.

If a “fat boy” comes NOKing on your back door then you have a real problem. The Japanese, Taiwanese and Australians are in range, not just of rockets but planes, boats and submarines.

The thinking that says rockets have to have ground-track range is often misplaced. Provided you can get the velocity, then orbital mechanics may be your friend / enemy depending on where you are. But planes, whilst not fast, can carry considerable payloads; airliners especially can be used as guided missiles, which is something AQ demonstrated with 9/11 without a WMD payload. It would not be overly difficult to turn an old 747 into a flying bomb. Further, as became clear with the Malaysian aircraft, tracking of planes is not very reliable even today, and once effectively out of sight of land they can fly just about anywhere they please.

As was seen with “Piggybacking the Space Transport System” the load does not have to be within the body of the aircraft. As Virgin is demonstrating a plane can lift and deploy a short range rocket up into the edges of space. And SpaceX have shown that you don’t need the finances of a major state to get significant payloads up to three hundred miles into space for delivery to the ISS.

But there is also the use of submarines: the UK strategic deterrent is deployed from submarines, as are some US systems. Unlike rockets, the building of submarines can be kept much more secret, and just about everywhere on the globe is within easy reach of the sea, effectively unnoticed.

Unlike the aforementioned countries, the US does not have a potentially hostile superpower saber-rattling just a short distance over the horizon, flexing its muscles and getting ever closer by building artificial islands to make extraterritorial gains. Such things tend to sharpen the public’s mind when they are in your back garden. Worse still when there is an unknown wild child directly adjacent.

As can be seen from the Trump posturings, many of the citizens of the US mistakenly believe they can have an “isolationist policy” and thus not have to think about the consequences of US foreign policy. It might be interesting to see the result of the wake-up of a NOK knocking on the US back door, but I’m not sure there is a safe place to watch from…

ianf January 6, 2016 6:09 AM


    Were tyr “Billy Gates” he’d be looking at real estate out of range and nowhere near any big targets like those who build defense stuff.

Logical, but… show me a place in the continental U.S.A. where they do NOT build defense stuff, some of it never-deployed duds, but the bad boys need not know that, and it’d still be within the theoretical A-bomb fallout envelope. On the other hand, the Chilean Atacama desert looks inviting. Antarctica?

ADMINISTRIVIA @ Clive Robinson

As a sage of this forum, you get to write whatever you please, and then nobody questions the veracity or sanity of it. But could you at least please not use your own private abbreviations like “NOK” when it is DPRK you mean? Or using (formally correct, but uncommon) terms such as “Space Transport System” in lieu of “Space Shuttle”? Above everything else, it prevents later keyword discovery, because the expected keywords simply aren’t there. Thanks much in advance.

Clive Robinson January 6, 2016 9:19 AM

@ ianf,

With regards to North Korea, various parts of the IC refer to them as “the NKs” etc., with NK pronounced as Norks / Knocks / etc. This terminology has spread outwards and is becoming used in much wider circles.

My use here on this occasion was a gentle “word play” aimed at someone in whom I suspected it would raise a wry smile.

I would agree that in more general terms it can be problematical, as “No Official Cover” is another TLA that gets pronounced similarly (the Russians refer to them as “non-residents”, as “resident” is used for those officers formally in the Diplomatic “residency” or part of the “Mission”).

Thus in the UK you could hear “a NOC in the NK” sounding like “a knock in the knock”. Though the US drawl tends to make “norks” the preferred verbal contraction. Thus “a knock in the norks”, which to an English ear suggests a euphemism for “a kick in the groin” etc., which is most definitely not “the mutt’s nuts” if you are the recipient.

ianf January 6, 2016 9:49 AM

@ Clive, you’re too old a coder-trooper not to recognize the value of distinct, unique keywords/ labels/ identifiers (such as DPRK) over any bland, thus ambiguous (NK, NOK, etc) “alternatives”. So there’s no need to explain how your term is soooo deeply entrenched in IC from which it migrated elsewhere. Besides, you’re not writing for IC personnel—unless you are ;-))

Nick P January 6, 2016 12:11 PM

@ Bruce Schneier and all

The miTLS project is doing a formally specified and verified version of TLS in an ML-like language. They’ve described a series of attacks on TLS in their publications as a result of this work. The work is interesting in general, but I found this paper to be extra interesting. It applies the composition of state machines to implementing the TLS protocol. That’s the approach that was used in high-assurance systems (e.g. A1/EAL7) of the past. It results in cleanly handling a mix of components that turned messy and insecure in other implementations. Quite the argument for both formal verification and the state-machine approach for highly secure protocol implementations.
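
A toy sketch of that state-machine discipline (this is not miTLS code; the message names are a simplified TLS 1.2 server flight, and almost everything real TLS does is omitted):

```python
# Client-side handshake skeleton that only accepts messages in the expected
# order; anything out of order is rejected instead of being half-processed.
ALLOWED = {
    ("START",         "ServerHello"):       "WAIT_CERT",
    ("WAIT_CERT",     "Certificate"):       "WAIT_SKE",
    ("WAIT_SKE",      "ServerKeyExchange"): "WAIT_DONE",
    ("WAIT_DONE",     "ServerHelloDone"):   "WAIT_FINISHED",
    ("WAIT_FINISHED", "Finished"):          "CONNECTED",
}

class HandshakeError(Exception):
    pass

class ClientHandshake:
    def __init__(self) -> None:
        self.state = "START"

    def receive(self, msg_type: str) -> str:
        nxt = ALLOWED.get((self.state, msg_type))
        if nxt is None:
            raise HandshakeError(f"{msg_type!r} not allowed in state {self.state!r}")
        self.state = nxt
        return nxt

hs = ClientHandshake()
for msg in ("ServerHello", "Certificate", "ServerKeyExchange", "ServerHelloDone", "Finished"):
    hs.receive(msg)
print(hs.state)                            # CONNECTED

try:
    ClientHandshake().receive("Finished")  # skipping ahead must fail
except HandshakeError as err:
    print("rejected:", err)
```

The messy implementations the paper attacks are essentially ones where a transition table like this is implicit and incomplete, so unexpected messages fall through into code paths that were never meant to see them.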

Curious January 6, 2016 3:08 PM

“The Father of Online Anonymity Has a Plan to End the Crypto War”
“http://www.wired.com/2016/01/david-chaum-father-of-online-anonymity-plan-to-end-the-crypto-wars/”

“The mere mention of a “backdoor”—no matter how many padlocks, checks, and balances restrict it—is enough to send shivers down the spines of most of the crypto community. But Chaum’s approach represents a bold attempt to end the stalemate between staunch privacy advocates and officials like FBI director James Comey, CIA deputy director Michael Morrell and British Prime Minister David Cameron who have all opposed tech companies’ use of strong, end-to-end encryption.”

Curious January 6, 2016 3:34 PM

“Security Losses from Obsolete and Truncated Transcript Hashes” (Sloth)
http://www.mitls.org/pages/attacks/SLOTH

“Our main conclusion is that the continued use of MD5 and SHA1 in mainstream cryptographic protocols significantly reduces their security and, in some cases, leads to practical attacks on key protocol mechanisms. Furthermore, the use of truncated hashes and MACs for authenticating key exchange protocol transcripts is dangerous and should be avoided where possible.”

“Partly as a consequence of this work, the TLS working group has decided to remove RSA-MD5 signatures and truncated handshake hashes from TLS 1.3. We encourage TLS 1.2 implementations to disable MD5 signatures immediately and SHA1 signatures as soon as practical. We also advocate that tls-unique should no longer be used for channel binding in application-layer authentication protocols.”

BoppingAround January 6, 2016 5:18 PM

[re: Windows 7] Nick P,

Mostly. I did think about it over my morning coffee today, and recalled a fair
number of forum threads from when something would break in Windows. Whether it
was a video driver failure, the aforementioned rot (think of C: suddenly eating
up to 30 GB of space [A]), or the infamous '$PROGRAMME has stopped working'.

They did indeed put some work into the video output subsystem — as nagging as
those failure notifications were, it was still quite possible to operate and work.
Unless you fancied a game.

Reminding myself of all this, it looks like I have missed a whole lot of !!FUN!!
on Windows too.

7 was rather all right. I had some trouble with the sound though: it would
squeak occasionally when I started a program (anything, from an IM client to a
game). My printer also wouldn't work most of the time, but I am unsure who the
perpetrator was there: a badly-written driver or Windows itself.

—– If you are not interested in Win10, skip to the end of the post —–

Now for Windows 10, which I recently put into a VM. It was kind enough not to nag
me with requests to create a Microsoft account, I suppose because I disabled the
network adapter for that VM. But for everything else it's a proper can of worms.

The three screens with various ‘consumer experiences’ at the final stage of the
installation procedure reminded me both of police investigators and jealous wives.
A lot of crap in the Settings app which was somewhat unpleasant to navigate besides
being inconsistent — some links would send me to the Net, some to the ‘old’ Control
Panel, some to other apps. The Control Panel is now interlinked with the Settings app
too, to my grief.

There is a significant number of questionable surveillance-related options in the
Group Policy Editor, many more than in 8.1, and I am unsure if I located every
single one of them. It is also inconsistent: for some options you have to toggle
them on in order to turn them off; for others it is the opposite. It is also hard
to say how effective they are: I have disabled Telemetry via Group Policy, yet it
stays enabled in the Settings app. Which one is truthful?

Allegedly disabled Cortana remains in the Process List after reboot. Should have
called it Shodan instead.

Four desktops are nice. The laptop’s exhaust remained relatively cool during my tests.
I guess it’s a bit better optimised than 8.1 — the exhaust was hotter when I ran a VM
with it. About the only positive things about Win10 so far. It feels like a beta
release.

I have killed everything in the Windows Firewall and set it to deny everything so now
I’ll try to detect if it still talks to MS. The best way would be to route the traffic
through another machine and mirror it there but I don’t have one.


[A] I recall a thread about a similar problem on MS Answers and seeing some
'MVP' getting all pissy: 'why would a professional like you rummage in
thousands of files instead of buying a terabyte hard drive' for their precious
OS. Perhaps a solution, but not for those with smallish laptops limited by the
capacity of early SSDs, which is a crying shame.

Thoth January 6, 2016 6:43 PM

@Curious
re: The Father of Online Anonymity Has a Plan to End the Crypto War
It looks like this online anonymity wouldn't end any Crypto War. The Crypto War has existed since the beginning of time, ever since humans learned to communicate; since the days of the Caesar cipher or primitive lookup tables on huge sheets of tablets or paper. It would not end either, as it represents the struggle between the power of the Echelon and the power of the Peasants. If you think about cryptography, it is a representation of the advancement of human communication and thought to a point where ideas and speech need to be protected.

PrivaTegrity is yet another snake oil protocol. The reason is that it has a hole allowing decryption of communications! A secure protocol would not allow any sort of decryption or forging except by the legitimate parties who own the keys, but here is a protocol that waves its hands and says it has no backdoors, yet has a golden key in the form of requiring its nine administrators to gather together and decrypt someone's communications… What is the likelihood it would be misused, and what if this golden key protocol has a weakness just like any other golden key or backdoor protocol?

This protocol is going to be riddled with holes by virtue of having golden keys included.

Now, what is the likelihood you can coerce all 9 administrators, or snoop on them and in turn become the master administrator by controlling the 9 puppet administrators? It sounds a bit hard, given that the 9 administrators would be in 9 different physical and legal jurisdictions, but with the power and resources of a High Strength Attacker, and the fact that almost every commercial chip is a black box, how difficult is it really if spending resources on controlling just 9 administrators and their machines pays off with thousands of so-called "anonymous" conversations that one can control and read?

The Orwellian powers that be would be more than happy for this protocol to be introduced and James Comey, David Cameron, May, Obama, Mike Rogers et. al. would be so happy for such a tool to further their “Orwellian Utopia”.

Dirk Praet January 6, 2016 6:50 PM

@ 65535

Once, you are surrounded by an ocean of non-tech savvy “MicroBorg” individuals you will become a victim of the Data Mining disease as you exchange information with them.

You are unfortunately right. Which is why I try to exchange as little information as possible and have reverted to using an ordinary landline with a secret number as my main means of communication. My adolescent nieces think of me as a fossil and most of my non-IT friends just don't understand why I know their favorite technology toys better than they do but without actually using them.

tyr January 6, 2016 9:27 PM

@ianf

I’m thinking immediate blast radius. As a downwinder
I know far too much about how attempting to get far
enough away from nuclear weapons to avoid some effects
is pure futility.
I recall a young lady who said if a nuclear weapon went
off in the USA she was going to commit suicide. She
seemed to think it was an automatic death sentence if
on the same continent.
I also remember the guy who in avoiding world war two
found an island with no significance or value and
moved to Guadalcanal to escape the war.

Lets pick the nine administrators from the religions/
ideologies and require a full consensus from all. We
can start with the followers of Eris, a Randian
anarchist, and work down from there.

@Nick P.

Since laughter is good for you, I have been cured by
ugh. I can’t find the man page for yucc yet.

Thoth January 6, 2016 11:21 PM

@all, Blackphone et. al.
Silent Circle code allows access to the Blackphone modem. A pretty nasty bug that has the potential of exposing raw modem data to spying.

Blackphone's PrivatOS has been known to be vulnerable to almost all other known Android bugs, but they make up for it with the speed at which they push out bug fixes. This doesn't sound reassuring for a phone that prides itself on being "secure".

It is really overdue for Google to adopt a secure, capability-based microkernel architecture for Android underneath its insecure Linux kernel to add additional security. Something like a secure microkernel hosting a userspace Linux kernel that runs the Android OS, with the drivers loaded into separate microkernel segments and some kind of link from the Linux kernel in Android to those drivers, would be useful.

Link: http://www.theregister.co.uk/2016/01/06/silent_circles_blackphone_bug/

Nick P January 6, 2016 11:56 PM

@ Thoth

I saw that. I might send them my voice codec papers in case they haven't seen them. There's a good chance they're encumbered by patents or won't help, but they might. Thanks for the reminder.

@ r

I doubt it did. It’s a research project. Unlike most, they are making one practical discovery and improvement after another. I like that. Their hope is to figure out the hard details while providing what’s necessary to verify real-world implementations. That stuff will come later and end up on the lists.

@ tyr

Haha. Yeah, it was pretty good. Also showed the difference between the cathedrals and the bazaars quite well. Interestingly, Raymond hints at that himself when he criticizes it but qualifies that by saying the authors used some really better languages and platforms. As in, they were used to what good design and implementation looked like. Then, they had to use UNIX. Vitriol followed. 😉

@ BoppingAround

Yeah, the Windows systems always had their issues. Most of what you described was likely 3rd party stuff that got blamed on them. There were so many hardware and legacy app issues it was crazy. Then there were Microsoft's own on top of that. The list on Why Windows Sucks, excluding 10, is fairly small compared to Linux, and a decent amount of it has easy, 3rd party solutions.

As far as Windows 10 goes, yeah, it seems to be LEOs' and advertisers' dreams come true. Plus a clusterfuck in general, starting at Win8 with Win10 fixing that up a bit. My temporary recommendation, which I can't test being off Windows, is to use Windows 7 Embedded where possible, given it's Windows 7 with removable components and longer support. You should be able to both turn it into a desktop and trim it at the same time.

Not sure, as I haven't messed with one of those since the XP edition, I think. Worth someone trying. There's also the brute force method used by "tommy" who used to comment here: keep deleting files and trying apps until you figure out exactly how much sneaky and useless stuff you can delete while the system still runs. You need a backup-restore strategy there. He got a Windows XP box with MS Office, Firefox/NS, Sandboxie, an Internet security suite, and more down to a 650MB CD. Frigging crazy lol. I bet Win 7-10 could be similarly trimmed, and Win7 Embedded would make it even easier. It would probably turn off a lot of that surveillance shit just due to the context in which it's used.

Thoth January 7, 2016 2:01 AM

@Nick P
They are encumbered by patents for the voice compression technology. They have swapped to using the closed source commercial version, which is subject to export control for now, as the open source version doesn't cut it.

Curious January 7, 2016 3:56 AM

I am no professor, but reading about relying on the security of data carried by photons in optic cables, I can't help but wonder: if security relies on detecting possible eavesdropping when photons are sent through an optic cable, is it possible to copy the photon stream into a shunted stream that can be eavesdropped on in peace? Maybe at some point before the photons are sent down the optic cable? (E.g. during construction.)

Curious January 7, 2016 4:13 AM

Btw, I am reading on Slashdot, re. the “Bicycle” attack on TLS/HTTPS, that supposedly: “The new HTTPS Bicycle Attack can also be used retroactively on HTTPS traffic logged several years ago”.

Clive Robinson January 7, 2016 4:37 AM

@ Nick P,

Quite the argument for both formal verification and state machine approach for highly-secure, protocol implementations.

More like the first steps all “engineers” would take in designing something.

That is, all software development should move in this direction, irrespective of its intended purpose. Because, as should be becoming clear, a failure at any level opens a door through which others may reach in to find, change or remove what is inside, to their advantage and not the system owner's.

That is, though people think of "data silos", thereby suggesting containment and segregation, they tend to forget that a silo is just a bucket. And as most people know, the contents of a bucket are very susceptible to holes, not just via leakage but siphoning as well.

But when built into systems there are other effects buckets are prone to. One reason the Titanic sank was not the gash in its side, but failings in the design of its much vaunted watertight compartments that were supposed to make the Titanic "unsinkable". They were not watertight compartments as we would understand them today; they were in effect buckets with open tops and lots of holes in the sides. So when the water reached the top of one compartment it simply flowed into the next, not so much "compartments" as "baffles", and thus the Titanic was effectively doomed on the drawing board.

This is a problem most software and system architects fall foul of, by thinking the OS provides "siloed compartments" rather than "open buckets"… Thus data can leak or be siphoned from one data bucket to another and suffer spoilage in the process, much to an attacker's benefit.

With regards to "state machines", it's funny that they should be mentioned just a few hours after I admitted that it was something that made my coding style recognisable to a stylometry author identification attack… So "doing the right thing could be a personal security threat" 😉

Wael January 7, 2016 5:23 AM

@Clive Robinson, @Nick P,

So “doing the right thing could be a personal security threat” 😉

That has always been the case. If you convince others to do the right thing too, then you’ll reduce the personal risk, or maybe not 🙂

Clive Robinson January 7, 2016 5:28 AM

@ Curious, (Wael)

You asked if this was important,

    “How long is your password? HTTPS Bicycle attack reveals that and more”

As I was chatting with @Wael just a few days ago yes it’s important.

If you assume that a user is just using "dictionary words" then it's easy to see how the number of tries you have to make can be vastly reduced by knowing the word length. Even when you assume the user is adding digits, it still gives a reduction in search space, as short words can be eliminated from the try list.

But when it comes to "pass phrases" –as opposed to words– it can reduce the number of tries. For instance the "Horse, battery…" method, if the dictionary is known –which could be likely– can likewise be distinguished and word combinations eliminated.

Now think about what happens if you can also get "typing tempo" as well. Humans "type in words" and "frequent patterns". This can give the length of individual words in a pass phrase, which can give it away almost completely.

As I pointed out in my challenge to @Wael, "the cat sat on the mat" has a 333233 tempo. How many "common" or "well known" phrases have that tempo? One we know of, but I doubt there are more than four, thus the real pass phrase entropy is at best 2 bits, not the 4+(1.5×16) minimum of 28 bits you might be led to believe…

As with all "human remembering", pass phrases and words suffer from the failings of the human mind and body, which unfortunately leak just about every secret via biometric, time-based side channels.
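A toy illustration of the point (the candidate list is made up): knowing only the word-length "tempo" lets an attacker throw away every phrase that does not match before any guessing starts.

def tempo(phrase):
    # Word-length pattern, e.g. "the cat sat on the mat" -> "333233"
    return "".join(str(len(word)) for word in phrase.split())

candidates = [
    "the cat sat on the mat",
    "the dog lay on the rug",
    "correct horse battery staple",
    "to be or not to be",
]

target = tempo("the cat sat on the mat")          # "333233"
survivors = [p for p in candidates if tempo(p) == target]
print(target, survivors)
# Only the phrases with the 3-3-3-2-3-3 pattern survive, so the effective
# entropy is log2(len(survivors)) bits, nowhere near 28 bits.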

Thoth January 7, 2016 6:27 AM

@Curious
The problem is with stream-like ciphers (GCM mode and the like). You simply encrypt whatever there is and pipe it down the line. If you look at block ciphers in padded modes, they have to be padded with some padding scheme, and if you can make all the traffic equal length, as many of us have mentioned, it makes traffic analysis harder with the packets all looking the same length.

The quick fix would be to do a SHA-2 or SHA-3 hash of the password on the client side, to make all passwords appear equal length before sending them down the pipe. The unexpected upside is that you don't send passwords in the clear (even if HTTPS protected), and once the hashed password reaches the server side, you do your BCRYPT and/or SCRYPT on it to further stretch and "massage" the password.
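A rough sketch of that idea, assuming the third-party Python bcrypt package on the server side (function names and the work factor are illustrative only):

import hashlib
import bcrypt   # third-party package, used here purely as an example

def client_prehash(password):
    # Client side: every password becomes a fixed-length 64-hex-char string,
    # so record lengths no longer leak the password length, and the raw
    # password itself is never transmitted.
    return hashlib.sha256(password.encode("utf-8")).hexdigest().encode("ascii")

def server_store(prehash):
    # Server side: stretch the fixed-length value before storing it.
    return bcrypt.hashpw(prehash, bcrypt.gensalt(rounds=12))

def server_verify(prehash, stored):
    return bcrypt.checkpw(prehash, stored)

stored = server_store(client_prehash("hunter2"))
print(server_verify(client_prehash("hunter2"), stored))      # True
print(server_verify(client_prehash("wrong guess"), stored))  # False

Note that the pre-hash then becomes the credential the server actually sees, so it still has to be protected in transit and at rest exactly as a password would be.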

There are additional solutions that do end-to-end password encryption, which include a client side executable (e.g. JavaScript or the dreaded Java Applet) that comes with a server-side generated session secret with which you encrypt your password (using fixed length block ciphers and proper/sufficient padding), all done via the HTTPS tunnel.

Clive Robinson January 7, 2016 7:04 AM

@ Curious,

I can't help but wonder, if having security rely on the detection of possible eavesdropping when photons are sent through an optic cable, is it possible to copy the photon stream into a shunted stream that can be eavesdropped on in peace?

The answer is complicated, but the basic argument can be seen by thinking of sending marbles down a tube. That is, to examine an individual marble you have to first take it out of the tube, but the process of examining it is destructive in some way, thus it can not be put back after examination. And even if you could put it back, the process of examination takes time, thus the slot it came from would be both long gone and empty.

Whilst filling the slot can be solved, the question of fidelity arises: can you examine the marble in sufficient detail to produce a replacement that will pass muster at the intended recipient's receiver? The answer depends on who you talk to, but there is agreement that if things are set up correctly the answer is no. Which gives rise to the question of what "correctly" means in real terms.

And this is where the fun starts, because you are not limited to just examining the photons transmitted by the originator, or those received by the intended recipient.

If you look back on this blog you will see that over the years Quantum Key Distribution / Quantum Crypto has come up a number of times. Importantly with ways it may be possible to attack such systems.

Originally it was not possible to guarantee sending either single or truly independent photons. Thus there was a possibility of getting multiple photons from the originator, which would enable attacks to be made. Semiconductor techniques were therefore pushed to reduce or eliminate these problems.

However, other parts of the system are "physical devices" and thus suffer from all sorts of issues that their "theoretical models" do not. The question thus arises: can these physical issues be exploited? The simple answer is YES, they most definitely can be, and are likely to remain so.

The first side channel attack was clear before the first "proof of concept" experiment was carried out. It was noted that the polarizers being used were audibly noisy in a way that enabled an observer to know their state before the originator sent a photon.

But even if you can not hear the polarisers, their state can be detected in other ways. That is, due to internal reflection and mismatch, it is possible to send a short burst of photons at them and, from what comes back, determine their state. Importantly, this can be done in ways which the originator and intended recipient will find difficult to detect.

Then there is the issue of “transmission loss” that needs to be dealt with. Put simply if you put photons into one end of a fiber, not all of them will come out the other end. It is this that limits the range of any QKD system at any given signaling rate. As has been demonstrated it is possible for an attacker to use this issue to their advantage in a number of ways.

I could go on with other potential attacks, but personally I don't think QKD currently offers sufficient guarantees for me to trust it more than a high quality crypto algorithm on a properly designed system. Then there are the downsides of QKD: it really is range/rate limited to a degree that renders it of extremely limited use at best. Then there is the issue of it being "point-to-point" only, that is, you can neither range-extend it nor switch the photons to different receivers in a practical, usable way.

Wael January 7, 2016 8:46 AM

@Clive Robinson, @Curious,

But when it comes to “pass phrases” as –opposed to words…

Our fearless host doesn't believe in that pile of crap. He uses epic pass poems. Me? I use TPM-fortified, extra classy pass-limericks. And you, @Clive Robinson, use a world-class pass poem every now and then. We're ahead of the folks that drank the "pass-phrase" Kool-Aid.

Now think about what happens if you can also get “typing tempo” as well.

Tempo is, at least in music, a qualitative measure, not a quantitative one. In chess, tempo is the slightest advantage one can obtain; it's equivalent to a gained "move".

thus the real pass phrase entropy is at best 2bits, not the 4+(1.5×16) minimum of 28bits…

Entropy has many meanings[1]. Care to calculate the entropy of a strand of DNA composed of arranging the elements of the set {A, T, C, G} in a 3-billion character long string?

[1] What’s the entropy of the possible set of “entropy meanings”? You’ll find this entry in a cryptographic thesaurus book, in the twilight zone.

PS: links took about 3 minutes, previewing took 6 minutes, composing took about 4 minutes. And that's because I didn't get the "server in use" error 😉

BoppingAround January 7, 2016 9:39 AM

Dirk Praet,

my non-IT friends just don’t understand why I know their favorite technology
toys better than they do but without actually using them.

In my experience those aren’t the worst. The worst are those who work in IT
yet it seems as if they remain wilfully ignorant of certain technological
developments and topics (think data mining, data brokers, targeted adverts and
such). To let it slip out to them that you don't use Google is nigh on telling
an avidly religious person that you don't believe in god.

Nick P,
The clusterfuck did actually seem better in 8.1. I could at least trim
the 'Modern' stuff that is useless to me, the Control Panel was feature-complete,
and there were far fewer dubious Group Policy entries and other things of
questionable nature in the system. A relatively polished Windows 8, if you can
live with the full-screen Start Menu (there are several programs on the Net to
fix that too).

I’ll look into the Embedded version. Never touched the stuff before.

ianf January 7, 2016 10:54 AM

@ BoppingAround

[…] "A relatively polished Windows 8 if you can live with the full-screen Start Menu (there are several programs on the Net to fix that too)."

A follow-up question from someone who has never owned (only used) a Windows unit before: given a brand-new, still shrink-wrapped Windows 10 tablet (not MSFT Surface, a Lenovo something), is it possible to DOWNGRADE it to Win 8 as per your description? Steps involved—so that the tablet won’t “phone home” while it’s being “subtracted”?

ianf January 7, 2016 11:52 AM

    tyr recalls a young lady who said that if a nuclear weapon went off in the USA, she was going to commit suicide.

Get on the blower to this young lady THIS INSTANT, inform her she has a future in Hollywood, where they're always on the lookout for true-story heroïnes, ready to sacrifice themselves on the altar of their guiding idea, to be heralded later on the silver screen. Also Meg Ryan needs a rôle for a comeback.

[…] a guy who, to avoid WWII, found an island with no significance or value and moved to Guadalcanal to escape it.

There also was at least one British family that found life in the UK too hectic for their peace of mind, and moved to the real offshore backwater Falkland Islands just-in-time for the war of 1982.

    [That said, no less an authority on world health trends than statistics maven Hans Rosling recently said in another context, that those who want to GET MEGA-RICH (relatively) QUICK off their guaranteed-not-to-be-affected-by-rising-sea-levels BEACH PROPERTY, had better invest in a stretch of the naturally sloping Somalian coast: longest pristine beaches in the world, now still pirate country, but dirt cheap; because when Somalia gets its act together and becomes normal again, it will be the place for UNPOLLUTED TOURIST RESORTS for the well-to-do of ME, Saudi and India. So international hotel chains will come calling. Strange as it sounded, I believed him.]

Thoth January 7, 2016 5:10 PM

@Grahut

“Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.
One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.”

Now you know why there are nine admins in the super duper snake oil of the year (2016) called cMix? The Nine Rings for the fallen Men that would doom us mortals, and the One Ring (a.k.a. The Echelon) that would rule us all.

Hopefully some cryptographers like Matt Green or DJB will come along and break the cMix protocol, using their maths to prove why the Nine Rings are Sauron's (in the form of Annatar, the "gift giver") gift for destruction.

Clive Robinson January 7, 2016 5:55 PM

Spare a thought for Royal Navy Sub Lt David Balme, who has died aged 95.

He was responsible for boarding U-110 during WWII and thus obtaining an Enigma machine and, importantly, the settings book that was desperately needed by Bletchley Park to break into U-Boat traffic at a critical time of the war.

How many peoples lives were saved by his actions is unknown, but it can safely be said it was considerable.

You can read more at,

http://www.royalnavy.mod.uk/news-and-latest-activity/news/2016/january/06/160106-enigma-naval-hero

tyr January 7, 2016 6:05 PM

Here’s a nice problem from the future.

http://www.nature.com/articles/srep18827

Apparently quantum effects from reduced geometries are
not going to be quite as straightforward to guard
against as we believe now. Since new information is
coming in every day about things never seen before it
makes automating stuff that works without bizarre
effects quite difficult.

@ianf

So the real agenda against Somalia is a bunch of real
estate crooks? Sounds about right to me; that's what
got George Washington into the revolution business,
since the British crown wouldn't allow him to grab
Indian lands. Personally I find Skinnyland too far
from a reliable food supply to want a beach cottage
there.

I’m still waiting for the Rescue of Maiden Lynch
movie, a harrowing sexual drama of the mediaspin
folk in full array.

Diogenes had the way to locate the nine men needed:
just light them up with your lantern.

Clive Robinson January 7, 2016 6:18 PM

@ Curious, Nick P, Wael,

Speaking of passwords / phrases, this might be of interest,

https://hynek.me/articles/storing-passwords/

NOTE what it says about "other methods" that have been previously recommended, but now should be considered at best obsolete.

As I’ve indicated in the past, passwords are a significant liability and make servers storing them in any form a target, and thus an unneeded business liability.

Thus even though this method addresses some of the faults of older systems it does not stop the “server as a target” issue which other methods do solve…

Anura January 7, 2016 6:53 PM

@Clive Robinson

I’m not familiar with that algorithm, but glancing through I’m saddened that it is based on a hash function from the SHA-3 competition that didn’t win. I’d prefer if you could build it using a common hash function.

I read a bunch of papers a while back and composed a list of requirements for a password hash function to ensure resistance to every single known attack I came across; I'd be curious whether that algorithm follows it. Some things, like ensuring passwords are padded to a fixed length to avoid someone being able to distinguish between passwords above and below a cutoff length, are not common, but are doable anyway with existing hashes (though not enforced). Using encoding rules and a counter to ensure that the input to the function cannot be repeated unless ALL parameters and the round are the same is probably the most important rule, to prevent any possibility of collisions (which are theoretically possible in PBKDF2).

I did write an algorithm, but never published it – I decided against parallelism, as it just grows the implementation and can be done by just running it however many times in sequence and hashing the results. The memory is pseudo-randomly modified (in 64-bit chunks) with the location based on only the salt (using a fast XorShift-based RNG to minimize any possible performance gain from precomputation) but the contents are modified based on the password using a simple computation (XOR location contents to previous value, add next value from hash, rotate left by 31 bits, write to location) that ensures memory must be accessed sequentially.

Thoth January 7, 2016 6:55 PM

@Clive Robinson, Nick P, Curious, Wael, Password Security et. al.
There are two ways to secure a password in the current market (yes, note the word… market). One way is to hash the password and the other is to encrypt it (plus hash, if you will). The cheaper and most straightforward alternative is hashing the password, because the PHP library has a ton of password hashing schemes for you to choose from and most people are using PHP for their website.

Before I go on, a little disclaimer: not only do I deploy HSMs (in the usual PKI context), but some of the requests I get have to do with password encryption (aided by HSMs with an SEE environment), so you need to judge for yourself, although I have written my own hashing utility too.

Hashing passwords in a memory/CPU-hard way (spamming memory and CPU space/cycles) to slow down brute force is useful against both offline and online attacks, but if you think about it in terms of cost, offline attacks are cheaper because you get to scale and do the password cracking at your own pace, whereas with an online attack you may not be able to scale well and you have limited bandwidth (unless you control a botnet or two).

You can think of password hash brute forcing as looking up a table and comparing against (querying) an oracle. For the offline brute force, your oracle is a downloaded file containing the password hashes, and it's as good as comparing one book against another sitting next to you. If you think along the lines of an online brute force, it's as good as walking up to a security guard and presenting a cloned badge to gain entry. Whenever you fail to present the correct badge, you have to return again with a new badge.

Thus, password hashing is most useful if you are going to give the attacker as much delay as possible (online brute force protection), combining the delay of connectivity with the work of recreating a new hashed password (assuming you might want to add salt, pepper, sugar and spice to the password) for each query.

When you get into a scenario where you have hacked into a database containing the hashes, it is considered an offline attack where you can take your time to scale with much comfort. Chips are getting more advanced and specialized chips containing algorithm accelerators would make brute forcing offline much more comfortable (of course it can be done online too).

Password encryption comes into play where you have downloaded the database of password hashes but a properly implemented password encryption scheme is in use: you now have an additional task at hand, which is to recover the actual hashed passwords before you can brute force them. Without the actual hashed passwords your comparison would be inaccurate. Some password encryption schemes use a master key to encrypt a per-password key (KEK style), and this adds another layer of challenge, since in order to reproduce the environment and brute force the encrypted password hashes you need not only the database of encrypted, hashed passwords but also the correct encryption schemes with the correct keys and the correct hashing algorithms.
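A minimal sketch of that layering, assuming the third-party Python cryptography package; the master key is generated in-process here only for illustration, where the description above would have it held in an HSM or at least well away from the database:

import hashlib
import hmac
import os
from cryptography.fernet import Fernet

MASTER_KEY = Fernet.generate_key()   # illustration only; keep this outside the DB
box = Fernet(MASTER_KEY)

def store(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Only the encrypted hash goes into the database row.
    return salt, box.encrypt(digest)

def verify(password, salt, encrypted_hash):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(digest, box.decrypt(encrypted_hash))

salt, row = store("correct horse")
print(verify("correct horse", salt, row), verify("nope", salt, row))

A dump of the table alone then gives an attacker nothing to start brute forcing against; they also need the master key, or the machine that holds it.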

This leaves the problem of the encryption keys and execution, for which an HSM with a programmable Secure Execution Environment can be used as a querying oracle, but that is a rather expensive feat in itself (thus the low adoption, save for Corporate and Government environments).

The better form of identity protection currently available in the market is PKI, but the overhead of maintaining a PKI infrastructure (organisation CAs, Root CA, HSMs, PKI tokens…) or some form of biometric security is an overhead SMEs cannot afford. Setting up 2FA or MFA infrastructure (e.g. RSA or Vasco OTP) requires some investment and is not cheap (they are sold in batches of OTP crypto keys paired to tokens).

Identity is a hard business.

That's why the problem with passwords (due to their relatively low or zero cost of deployment) and their weaknesses is always around the corner.

ianf January 7, 2016 8:47 PM

@tyr (you lost me on the problems from the future, Maiden Lynch and Diogenes’ creative use of lanterns, but that’s OK, I don’t aspire to be all-knowing) […]

So the real agenda against Somalia is a bunch of real estate crooks?

I wouldn’t put it quite this way; the way I’d put it is unprintable in such a family-oriented medium as this, but, essentially, isn’t real estate the key to everything?

EXHIBIT A: “The Apprenticeship of Duddy Kravitz“ (story by Mordecai Richler) not-bad 1974 movie by Ted Kotcheff with a young Richard Dreyfuss.

EXHIBIT B: see the tail end of my response to Wael some time ago (+consider elaborating on what follows there ;-))

[real estate being…] what got George Washington into the revolution business since the British Crown wouldn’t allow him to grab indian lands.

I thought it was the breeding of mules, but fine, “acquisition of virgin lands” as well. Quite a Statesman Model to look up to—no wonder the pretender to the throne The Donald already possesses all the finesse of a mule driver [pace pro mule drivers, an honest trade].

Nick P January 7, 2016 9:26 PM

@ tyr

I believe it because it already isn't straightforward. Just look at this for 28nm, which is a few generations old. There are so many ways that physics tries to break the circuit that they're doing pattern recognition on the wires to spot the breaks. There are already about 2,000 design rule checks to do. They re-synthesize not just for performance or size but for yield due to breakages. It's so ridiculous and proprietary that it led me to speculate on the OSS model being proprietary tools to make the stuff with trace-based, OSS verification. Just because I can't see OSS EDA catching up to all that crap any time soon.

And then there’s the quantum, 7nm, optical, and all that stuff. More fun. 🙂

Wael January 7, 2016 11:06 PM

@Clive Robinson, @Anura, @Curious, …

Speaking of passwords / phrases, this might be of interest,

I’m not sure I get it. What’s to stop an adversary from using an array of independent (don’t share all resources) devices for the attack?

it does not stop the “server as a target” issue which other methods do solve…

I thought they acknowledged this towards the end of the PDF. What other methods are you referring to?

Gerard van Vooren January 8, 2016 6:20 AM

I am trying very hard to not feed the trolls. Maybe others could do that as well.

Clive Robinson January 8, 2016 8:37 AM

@ Gerard Van Vooren,

I am trying very hard to not feed the trolls. Maybe others could do that as well.

I shall treat that as friendly advice for another current thread 😉

Plus it will save my time for more important things, which my doctor would approve of, such as fresh air, exercise and keeping my blood pressure down closer to where it should be, oh, and making a veggie curry as Dirk suggested 😉

@ Wael,

Methods where neither the password nor a hash of it is kept on the server. Thus from a business point of view the server is not a target, and any loss can be shown to be at the client end, thus mitigating much risk for the business.

Whilst there is a system that does this, it’s a problem for which thinking up a solution is a constructive use of time.

BoppingAround January 8, 2016 9:12 AM

[re: Win10 tablet] ianf,
No idea, mate. OEM versions of Windows 10 Professional include 'downgrade rights' as per this page. However, I have not the slightest clue as to how that's supposed to work or whether it's possible in your case.

Anura January 8, 2016 3:02 PM

@Wael

“I’m not sure I get it. What’s to stop an adversary from using an array of independent (don’t share all resources) devices for the attack?”

Cost. It’s really as simple as that; GPUs provide cheap, parallel processing power, but share resources so provide lower gains with higher memory access. An array of ASICs with independent resources can get around this, but significantly adds to the cost.

The best model for password storage (obviously, simply using public key cryptography is ideal, but impractical in many cases) is to use a secret key stored in an HSM to make it unlikely that remote attackers will be able to recover the key, then combine with a slow password hashing function just in case your secret key is recovered anyway. Argon2 above provides a field for a secret key; I do in mine as well. Even if your key is stored in-code, it still protects your password database in the case of an application-level database leak such as a SQL Injection.
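A bare-bones sketch of that "secret key plus slow hash" layering (PBKDF2 stands in here for whatever slow hash is chosen, and the pepper would be fetched from the HSM or application config rather than generated inline):

import hashlib
import hmac
import os

PEPPER = os.urandom(32)   # illustration; in practice held in the HSM / app config

def hash_password(password, salt):
    # Key the password with the pepper first, so a database-only leak gives
    # the attacker nothing to brute force against...
    keyed = hmac.new(PEPPER, password.encode("utf-8"), hashlib.sha256).digest()
    # ...then stretch it, so even a recovered pepper still leaves slow guessing.
    return hashlib.pbkdf2_hmac("sha256", keyed, salt, 200_000)

salt = os.urandom(16)
stored = hash_password("hunter2", salt)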

Looking at the Argon2 spec, I have some issues with it (although I'm not going to call it insecure, and I don't want to suggest it's worse than PBKDF2, BCrypt, or Scrypt). What it does is use an indexing function to determine which blocks will be added to the hash; this can be a data-dependent or data-independent algorithm (the former opening the door to side channels). The data-independent indexing route uses the internal hash function as an RNG. Since hash functions are relatively slow, an attacker can see gains by precomputing the hash values and prefetching the next block while the hash function is executing for the current block.

This is why I chose in my design to use a XorShift-based RNG (it’s about as fast as it gets) and both read from and write to random locations in memory. I looked it up last night, since I didn’t have it stored in-head. My memory mixing algorithm is as follows:

  • let z be a 64-bit variable initialized to the binary representation of the fractional portion of pi (I might have used e, or phi)
  • let x be the memory cost multiplier (configurable, there’s also a computation cost multiplier – I recommend 1 for both by default)
  • let H be an array of h 64 bit blocks from the hash output (if not a multiple of 64, the last bytes are dropped)
  • let M be an array of n 8-byte memory blocks (initially populated by the hash of the password/salt) where n is a power of 2
for i = 0 to n*x
{
    r = XorShift128rm() mod n
    z += M[r]^r
    z ^= H[i mod h]
    M[r] = z
    z = rotl(z,31)
}

XorShift128rm is a specialized variant of XorShift, which uses multiplication and rotation on the output to correct some undesirable properties of the regular XorShift algorithm. It's seeded purely with the salt, and the output is assumed to be known to the attacker.

Anura January 8, 2016 3:47 PM

Continuing from my previous post: based on my testing, I chose to have both the memory size and the number of memory passes configurable. The idea is that GPUs don't have dedicated per-core caches, and CPUs do. Since you want your code to run as efficiently as possible on a CPU, but not scale well to a GPU, you want to set your memory usage equal in size to the CPU's L2 cache. There is a noticeable CPU performance hit right when you go from smaller than L1 to larger than L1, and a very large hit once you go beyond the L2 cache (or was that half the cache size?). I don't have my benchmarks handy, so I don't remember the numbers.

My i7 has a 64kB L1 cache size and a 256kB L2 cache size. This is what my GTX 970’s memory structure looks like:

http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Discloses-Full-Memory-Structure-and-Limitations-GTX-970

So there are 1664 cores sharing an L2 cache of 1792kB, but my design limits the L2 cache to containing the memory from 7 sequential calculations if you set the memory size equal to the (CPU's) L2 cache. Of course, if you have all those sequential processes sharing memory, there is also going to be a lot of latency, and there is a 96kB cache local to every SMM (128 cores per SMM) for which there is a good probability of a miss with a 256kB memory usage size.

Now, I don’t actually know GPU programming, and I should really be testing to determine the ratio of CPU Performance to GPU performance, but I think L2 cache is a safe bet. Of course, it’s been months since I tested this.

Thoth January 8, 2016 5:58 PM

@Anura
Would it be better to take a Bcrypt output and feed it into Scrypt, which would get you the best of both worlds of CPU and GPU hardness?

Where are you going to get an HSM to store your secret key, considering most of the known HSMs are expensive?

Anura January 8, 2016 6:35 PM

@Thoth

Bcrypt and Scrypt are both memory bound, so if you want to add CPU to it, you can probably use either with PBKDF2 – Scrypt, being configurable, can consume more memory, but BCrypt is a lot slower on GPUs for the same amount of memory (due to a flaw in Scrypt that allows for a time/memory tradeoff).

As for HSMs, yeah, I'm not suggesting everyone must use an HSM, but that's just the best case scenario. However, typically, the severity of the leak is roughly proportional to the resources of the organization that is storing the data. So, while I wouldn't expect someone running maybe two or three servers to use an HSM, I would expect that an organization that can afford 30-50 servers can afford a few HSMs (although I doubt most of them would consider security to be worth the cost).

Wael January 8, 2016 6:48 PM

@Anura,

Cost. It’s really as simple as that.

Not an optimal solution in my view.

An array of ASICs with independent resources can get around this, but significantly adds to the cost.

One time hit! Also bot-nets can be used. I don’t think a constant time delay factor is a formidable barrier. If the object is to protect a hash database, you can encrypt each row with a data encryption key that’s in turn encrypted with a well protected private key or another key encryption key (key exchange key.)

Even if your key is stored in-code, it still protects your password database in the case of an application-level database leak such as a SQL Injection.

Not necessarily! The application can then be fooled into extracting the "protected" rows one at a time from the database. Other controls will be needed. You have effectively shifted the problem from a "crypto" domain to an "access control" domain.

I haven’t looked at Argon2 yet. I’ll look at your design later on, when (or if) I have a chance.

Anura January 8, 2016 7:08 PM

@Wael

Not an optimal solution in my view.

Definitely not; the optimal solution is that everyone has a chip embedded in their brain that allows them to compute a shared secret and signature using 256-bit equivalent security, post-quantum algorithm. But we have to work with what we have, and if we are going to allow user-chosen passwords, and we want to provide them a reasonable level of security assuming they are going to choose weak passwords, we need a solution that’s going to be easy to implement and cheap enough to justify it to the business types.

Not necessarily! The application can then be fooled to extract the “protected” rows one at a time from the database. Other controls will be needed. You have effectively shifted the problem from a “crypto” domain to an “access control” domain.

I wasn’t clear enough on this point, but there is no protected data to extract; the secret key is hashed along with the salt and password (and other data) in each iteration of the hash function. So unless your application leaks the key, the hashed passwords in the database are completely useless.

I’ll look at your design later on, when (or if) I have a chance.

I really only posted a portion of the memory mixing function. It’s not enough to analyze the design as a whole. If you are really that curious, I could probably write up a list of requirements to justify my design as well as a semi-formal specification and python implementation.

Wael January 8, 2016 7:18 PM

@Anura,

So unless your application leaks the key, the hashed passwords in the database are completely useless.

Ok, a no-frills way to bind the data to an application, and transitively to the platform. I get it.

If you are really that curious, I could probably write up a list of requirements to justify my design as well as a semi-formal specification and python implementation.

I prefer a high level overview, if you have to share.

Anura January 8, 2016 8:53 PM

@Wael

Highest level I can get:

Parameters:

Hash Function, Output Length, Memory Length (power of 2), Memory Multiplier, CPU Multiplier, Iterations, Salt, Secret Key, Max Passphrase Length, Passphrase, Additional Data

Seed the XorShift RNG by hashing the salt and taking the leftmost 128 bits as two little-endian 64-bit integers, w and x. Set the most significant bit of w to 1 (XorShift can't have an all-zero state).

Concatenate all the parameters in the specified order, padding the passphrase with 0 to the max passphrase length, while encoding variable-length data by prepending the length (using the original, not padded, length for the passphrase). We will call this string "x".

Initialize a counter to 0

We use a hash-based RNG, R(seed, n) as follows

R_0 = HASH(seed)
R_1 = HASH(R_0 || 1 || seed)

R_[n-1] = HASH(R_[n-2] || (n-1) || seed)

Where || is the concatenate operator.

Allocate an array M of [memory length]/8, 8-byte integers and write to that by taking the leftmost [memory length] bytes of R(x, floor([memory length]/[hash block length])+1)

We will then initialize z by XORing all 64-bit integers in M together

This is the end of the initialization.

Now for each iteration, we will compute Hash(z | M | counter | x), then increment the counter, compute R_[cpu cost] using the above function, and use that as the value of H for the previously described memory mix function, using the corrected C code below:

for i = 0 to n*x
{
    r = XorShift128rm()
    z += M[r >> (64-n)]^r
    z ^= H[i mod h]
    M[r >> (64-n)] = z
    z = rotl(z,31)
}

XorShift128rm RNG is here:

http://pastebin.com/y7aukwZ6

After that, we will hash the contents one last time, take the left-most [output length] bytes of R(Hash(z | M | counter | x), floor([output length]/[hash length])+1) as the output of the KDF.

I’ll write up why I did everything the way I did this weekend. There are some sketchy looking Super Mutants I have to deal with tonight.

Anura January 8, 2016 9:39 PM

Slight correction:

Change this:

After that, we will hash the contents one last time, take the left-most [output length] bytes of R(Hash(z | M | counter | x), floor([output length]/[hash length])+1) as the output of the KDF.

To this

The KDF finishes by outputting the left-most [output length] bytes of R(Hash(z | M | counter | x) || x, floor([output length]/[hash length])+1)

Wael January 8, 2016 10:15 PM

@Anura,

What’s wrong with doing something like this:

SHA-2X iterations (SHA-2Y iterations(data | secret key) | SHA-2Z iterations (RN))? Save the random number (RN) and the key in the application. Isn’t that enough to slow down rainbow table calculations? X,Y,Z are large (100,000+) and confidential. Does Argon2 offer better characteristics?
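Read literally, that scheme is just nested iterated hashing; a sketch of it (the iteration counts and the exact concatenation order are assumptions on my part) would be:

import hashlib

def iterate(data, rounds):
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data

def derive(password, secret_key, rn, x=100_000, y=100_000, z=100_000):
    inner = iterate(password + secret_key, y)      # SHA-2^Y(data | secret key)
    return iterate(inner + iterate(rn, z), x)      # SHA-2^X( ... | SHA-2^Z(RN))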

Anura January 8, 2016 10:40 PM

@Wael

That's computationally equivalent to PBKDF2, for which you can get speedups of hundreds or thousands of times from a single GPU over a CPU (that is, if it takes 1 year to crack a password on a single CPU, it will take less than a day on a single GPU).

The relatively large memory usage with random memory access is key to making GPU cracking infeasible.

Wael January 8, 2016 10:50 PM

@Anura,

I thought so too. The proper description of the problem, then, is to develop a crypto hash function that resists parallelization. It's not just about a delay.

The relatively large memory usage with random memory access is key to making GPU cracking infeasible.

Has this been proven? Or is it just a conjecture? I haven't read the Argon2 spec, btw.

Anura January 9, 2016 12:56 AM

@Wael

It's been proven in practice, although I'm not sure it's been proven formally. It's well known that GPUs are efficient for computation due to massive parallelization (as many as a couple thousand cores on modern GPUs), but they do not have the per-core cache sizes that a CPU has. Given that memory is a lot more limiting than processing on modern hardware, forcing most of the cost into memory-bound operations limits the kind of gains you can get from a GPU.

The best way to increase the cost of memory is to force a lot of trips to main memory, and the best way to do that is to ensure that there will be a lot of cache misses – this means using a lot of memory, and randomly accessing memory so it can’t prefetch large blocks – the more cores you try to use, the more trips to main memory you make, until you can’t get more performance.

Keep in mind that you can never prevent massive parallelization, only increase the cost of it – if you want to run it on a single server, then someone who can afford a million of those servers can obviously run it a million times faster than you, although at that point a huge array of ASICs is your best bet.

Of course, the other option is to use a GPU to hash the passwords in the first place and write an algorithm that maxes out the parallelism of a GPU. It’s certainly possible to send PBKDF2 to a GPU and require it take 100ms to compute thousands of hashes simultaneously that then get hashed together. The problem with this approach is simply that most servers don’t have graphics cards.

Wael January 9, 2016 1:18 AM

@Anura,

Of course, the other option is to use a GPU to hash the passwords in the first place and write an algorithm that maxes out the parallelism of a GPU…

Makes sense. Fight fire with fire. I am aware of work in progress to parallelize crypto algorithms, which are currently serial in nature.

Anura January 9, 2016 2:29 AM

@Wael

AES-GCM mode is ridiculously easy to parallelize; it’s obvious for CTR mode, since each block in the keystream can be computed independently, but it’s also possible for the MAC due to the simple arithmetic. Essentially, the GHASH function (used for the MAC) does a calculation similar to this:

y = k(k(k(kx1+x2)+x3)+x4)

Where xn are the blocks and k is the GHASH key. This can be expanded to

y = k^4·x1 + k^3·x2 + k^2·x3 + k^1·x4

Each of those terms are computable in parallel – there’s all sorts of ways you can break that out.
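A numeric illustration of that expansion (ordinary integers modulo a prime stand in for the GF(2^128) arithmetic GHASH actually uses; the only point is that the serial Horner chain and the expanded sum agree, and each k^i·x_i term can be computed independently):

P = 2**61 - 1                 # toy modulus, not the GHASH field
k = 123456789
xs = [11, 22, 33, 44]         # x1..x4, the blocks being authenticated

# Serial (Horner) form: y = k(k(k(k*x1 + x2) + x3) + x4)
y_serial = 0
for x in xs:
    y_serial = (k * (y_serial + x)) % P

# Expanded form: each k^(n-i)*x_i term could run on its own core
n = len(xs)
terms = [pow(k, n - i, P) * x % P for i, x in enumerate(xs)]
y_parallel = sum(terms) % P

print(y_serial == y_parallel)   # True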

Of course, I don’t like GHASH for a variety of reasons (mainly, the linearity and weak keys), and designed my own fast algorithm that has the mode of operation, encryption, and MAC built in which are all fully parallelizable – but while I trust I have done enough research to be knowledgeable about how to design a hash-based KDF, I know for a fact that I don’t know what I’m doing when it comes to designing an actual block cipher/hash function.

I do, however, think the high level construction is sound. Essentially, you make a tweakable block cipher that takes two tweak values, and pass a counter for one tweak, and a nonce for the other. You encrypt each block with the same nonce but a different counter and you set aside the result of encrypting the data half-way. You then XOR the half-way encrypted data together, and compute the MAC by fully encrypting the XORd-together, half-encrypted data and passing the nonce and data length for the tweak values.

It should be secure as long as the cipher is secure, and as long as an oracle that encrypts the data with arbitrary tweaks and plaintexts results in output that is indistinguishable from truly random data and does not allow you to recover the key for any computation that costs less than a brute force. Only downside is that it requires you pad the input.

Wael January 9, 2016 3:03 AM

@Anura,

I know for a fact that I don’t know what I’m doing when it comes to designing an actual block cipher/hash function.

Publish it in an RFC, you have enough source code to start with. Maybe you can generate some test vectors and include them as well.

Curious January 9, 2016 5:43 AM

Somehow, I started wondering if the internet-of-things will find its way into pencils, ballpoint pens, any pen used for writing at all. 😐

I am not that technically inclined, but I can imagine that someone might want to try installing some kind of motion sensor into pens, overtly or covertly, and record the movements. On the other hand, I can't help but wonder if someone else must have come up with this idea already, years ago. 🙂

Clive Robinson January 9, 2016 7:03 AM

@ Curious,

I started wondering if the internet-of-things will find its way into pencils, ballpoint pens, any pen used for writing at all.

Long answer short “it will”.

Back in the 70’s and 80’s technology was developed to allow the easy input of graphical data. Back then it consisted of a grid of fine wires behind the surface of a drawing board, and a small coil in the tip of a stylus or cross hairs puck, which had a switch mechanism to activate a registering of a point. These “graphics tablet” devices were common in CAD systems of the time, even though they cost thousands of dollars.

For a project I even built my own system using "ribbon cable" to make the crossed wires, and later a thin double sided PCB. Although it worked –sort of– it was insufficiently "fine grained" and had reliability issues. I thus went to using a "plotting table" mechanism in reverse.

Move forward a third of a century and you now have "note taking" pens that have accelerometers, a microcontroller, lots of memory and a USB interface, all for less than 100 USD. They record the movement of the pen, which can then be reproduced on a computer screen, put in a graphics file, or fed into character/word recognition software. For such devices to join the IoT, all you need to do is replace the USB interface with some kind of radio interface that can communicate either directly or indirectly with a local network. Not an overly difficult task these days.

Thoth January 9, 2016 7:09 AM

@Curious
A literal smart pen that tells your nearest stationery shop you need a certain ink cartridge for your fountain pen when it is running out, and automatically bills you 🙂 . It's very possible.

@Anura, Wael
Attacks on passwords typically take place under two types of circumstances, namely offline attacks and online attacks. Offline attacks usually take place due to access to hashed passwords (i.e. hacked customer or login databases). The online attack is usually someone spamming a web form.

To prevent spamming of online login attempts, monitoring connection sessions and freezing out accounts, or introducing a delay on login attempts from the web server side, would be more than enough, coupled with a decent password hash like Bcrypt, Scrypt or PBKDF2 done on the web browser (not web server) side with the hash submitted to the web server. This slows down the web browser while freeing up resources on the web server to receive more connections. Locking accounts after too many bad password attempts is also a good defence against online spamming of login attempts.
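A minimal sketch of that server-side throttling (the thresholds and timings are arbitrary placeholders):

import time
from collections import defaultdict

FAILURES = defaultdict(list)     # account -> timestamps of recent failures
WINDOW, LOCK_AFTER = 900, 5      # 15-minute window, freeze after 5 failures

def allowed(account):
    now = time.time()
    recent = [t for t in FAILURES[account] if now - t < WINDOW]
    FAILURES[account] = recent
    if len(recent) >= LOCK_AFTER:
        return False                             # frozen until the window clears
    if recent:
        time.sleep(min(2 ** len(recent), 30))    # escalating delay per recent failure
    return True

def record_failure(account):
    FAILURES[account].append(time.time())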

The hard part is the offline brute forcing once the databases have been leaked. The best method to protect a leaked database is to render its information unusable via encryption, so that no one gets to know the password hashes (the raw material for brute forcing).

No matter how complex or strong the password hash algorithm is, the fact is someone is going to find a way around it to lessen the time to brute force the password hashes, including making chips capable of doing so. The current proliferation of high-speed, on-the-fly programmable FPGAs allows anyone to build specific circuits to speed up brute forcing, and FPGAs are getting cheaper and more easily available.

By using a password encryption on top of a hashed password, it provides additional protection against exposure of sensitive password hashes.

One way is to build a poor man’s Authentication Server (Password Encryption Server) with a highly stripped down kernel having only the necessary drivers and kernel code, or even a microkernel (seL4?). On top of the kernel you have the specialized cryptographic software, which responds with only a Boolean when fed a user-submitted password hash and an encrypted stored password hash.
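
A minimal sketch of that Boolean-only verification logic, assuming the encryption key lives only on the authentication box (the third-party Python "cryptography" package is used here purely for illustration):

    # Sketch of the "password encryption server": it holds the key, decrypts
    # the stored encrypted hash, compares it to the submitted hash, and only
    # ever returns True or False.
    import hmac
    from cryptography.fernet import Fernet, InvalidToken

    class HashVerifier:
        def __init__(self, key: bytes):
            self._fernet = Fernet(key)            # key never leaves this machine

        def seal(self, password_hash: bytes) -> bytes:
            # What the main application stores in its database.
            return self._fernet.encrypt(password_hash)

        def check(self, submitted_hash: bytes, sealed_hash: bytes) -> bool:
            try:
                stored = self._fernet.decrypt(sealed_hash)
            except InvalidToken:
                return False
            return hmac.compare_digest(submitted_hash, stored)

    # Usage: v = HashVerifier(Fernet.generate_key())
    #        db_value = v.seal(user_hash); later: v.check(login_hash, db_value)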

For basic network protection, you can delegate only a known port and hard code the network stack to close all the other ports (if you find network firewalls less assuring). If you want a faux “Data Diode” style setup, you can delegate one port to only receive requests and another port to only send responses, with a software network Guard to ensure that requests and responses go through the correct ports and flow in the correct direction.

The poor man’s Authentication Server should be deployed on a separate networked physical machine that is in the owner’s possession in some way.

Of course, all these are low assurance software programs that do not stand a chance against MSAs and HSAs, but for the LSAs, which are the common daily threat, they do work pretty decently and are a cheaper alternative to highly expensive programmable HSMs containing SEE environments to run business-sensitive password verification logic.

Whenever possible, it is advisable to promote the use of dedicated hardware security devices (e.g. HSMs, Smart Cards, PKI tokens, ARM TrustZone TEEs… etc…) to do PKI authentication and save the password/PIN or biometrics for logging into security devices which would protect against MSAs.

Figureitout January 9, 2016 1:39 PM

Nick P RE: miTLS
–Looks nice, they have an implementation, correct?

Know if it’s being deployed anywhere? FlexTLS was also apparently used to find some big attacks. Does F# have a lot of dependencies though?

Figureitout January 9, 2016 4:40 PM

Curious // Clive Robinson // Thoth RE: bugged pencil/pens
–If I were designing one, for a pencil I’d put the tiny RF transceiver SoC (w/ integrated accelerometer and any other needed peripherals) just underneath the eraser, though you’d likely see it when pulling out the eraser, and run an antenna along the lead. The performance would get worse as you sharpen it lol… And having an antenna getting in the way of writing would be way too annoying.

There are still too many problems (what about a battery? lol, can you say lawsuit if someone puts a battery in a pencil sharpener?). You’d obviously have to raise the price for such a product, so it would still be easily avoidable.

A pen would be a different story (not cost-wise) b/c you don’t have to grind it away to use it.

For the security freaks, having a “verified” (LOL this is retarded) mechanical pencil and just reload graphite in should be good if those scenarios ever actually happen. Think it’s safe to say there’s bigger problems to worry about…

Wireless Security: nRF24
–Nice post here, one of the very few pentesting blog posts I’ve found that digs into big holes in this protocol and what to tweak (I initially thought having pre-set payload lengths would be an obvious security choice, but for eavesdropping it may be harder to capture changing lengths). Nice to see that Travis Goodspeed needed essentially “hardcore” physical access to reprogram the chip w/ security-breaking parameters. A nice hack he found was forcing the address length to 2 bytes, instead of the 3–5 in the datasheet. I’m defaulting to the max 5 bytes of course, so 2^40 (about 1.1 trillion) possible addresses that would need to be bruteforced. The address length is probably one of the biggest security features that authenticate; I wish there was a way to offload this to something like a smartcard and increase that length considerably.
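
For a sense of scale, a quick back-of-envelope on that 5-byte address space (the guess rate below is a pure assumption):

    # 5 bytes of address = 40 bits; the probe rate is just an assumption to
    # show how impractical a blind sweep would be.
    address_space = 2 ** (8 * 5)                 # 1,099,511,627,776 addresses
    guesses_per_second = 10_000                  # assumed over-the-air rate
    years = address_space / guesses_per_second / (365 * 24 * 3600)
    print(f"{address_space:,} addresses, roughly {years:.1f} years to sweep")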

It’s relatively hard to eavesdrop on this chip because it doesn’t support “promiscuous” mode (capturing all data going back and forth), which also makes debugging harder. It’s already not recommended for new designs (sigh, always in this space!); the newer chips have stronger crypto integrated into the protocol, in hardware I think. I’m not under any illusion that the protocol is unhackable (remotely); there must be a big hole somewhere…

But I don’t believe this attack would work on how I’d configure the chip. They’d need to bruteforce the address space or mount some end-run attack. I can counter that w/ a check and a delay (even though I think the check is done automatically after loading in addresses, I’d have to think of a way to verify this), and “nope out” by flushing buffers, delaying for 5–10 seconds, and logging that in EEPROM lol.

http://yveaux.blogspot.com/2014/07/nrf24l01-sniffer-part-1.html

Thoth January 9, 2016 5:32 PM

@Figureitout
A good amount of smartcard chips have USB and/or single wire protocol pin interfaces which you can take advantage of.

Figureitout January 9, 2016 6:30 PM

Thoth
–Wouldn’t doubt it (haven’t used smartcards much from a development standpoint, just used them, so I don’t know how “they talk”), but I think a hardware change is needed for what I’m wanting (these addresses are loaded into the nRF24 chip). Unless I can load an obscenely huge ID number into something like a 32GB smartcard and then transmit that each time to check each activation and authenticate (that’s probably way too huge, would take forever; the max data rate is like 2 Mbps). Lots of the addresses used for these RF chips aren’t very big at all, like 4-digit numerical PINs; these address spaces would be bruteforce-cracked almost instantly if hooked up to the internet (or some easy remote access).

Kind of a stupid attack/scenario, but for something at my work: if someone just looks at the receiver and sees the paired address we display on screen (no RF hacking really necessary), then, using another program going into the serial port to load in an ID, you could rack up false activations and make a general ass of yourself… You need a lot of external info, programmers, toolchains (which you can get as a trial version for free, and guess what, just reset the system time back every day and use it for free indefinitely LOL) etc., so that attack is a real stretch and generally not worth the effort. But I bet the underlying protocol has some real weaknesses…

Figureitout January 9, 2016 6:41 PM

Thoth
–On the address thing, what I’m trying to say is that even if I was checking some huge number on an external memory device, right now if someone had the address they could still talk w/ the chip regardless, and maybe lock it up w/ a DoS attack or sprinkle false activations into the logs. I want a bigger address space there.

Thoth January 9, 2016 11:16 PM

@Figureitout
32GB Smart Cards would be a dream. The highest-end ones have 1.5MB of Flash and 25KB of RAM and are based on ARM’s SecurCore SC300 design. Why not offload data to a RAM or Flash chip and use the Smart Card as an encryptor? The logical protocol of a Smart Card is the APDU. You can read ISO-7816 and ISO-14443 for more information. I’m not sure how each type of Smart Card chip handles its addresses, and each is different in its architecture. Don’t forget these chips have a ton of NDAs on them and are simply blackboxes. If you don’t need tamper resistance, use other types of chips, but if you need tamper resistance, use the Smart Card chip simply as a security engine.
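
For a flavour of what that APDU layer looks like, here is a minimal command builder for a SELECT-by-AID per ISO 7816-4; the AID bytes are made up for illustration:

    # ISO 7816-4 command APDU layout: CLA, INS, P1, P2, Lc, data.
    def build_select_apdu(aid: bytes) -> bytes:
        cla, ins, p1, p2 = 0x00, 0xA4, 0x04, 0x00    # SELECT by DF name (AID)
        return bytes([cla, ins, p1, p2, len(aid)]) + aid

    apdu = build_select_apdu(bytes.fromhex("A000000003000000"))  # made-up AID
    print(apdu.hex().upper())                        # 00A4040008A000000003000000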

Figureitout January 10, 2016 12:26 AM

Thoth
–I thought 64GB cards have been on the market for a while: http://www.amazon.com/Professional-SanDisk-Nokia-Lumia-1520/dp/9876050885

A constant concern with those is something like that fake HDD that just had a big screw in it as a weight to make it “feel” like a mini-HDD, and actually had something like an 8 or 16GB USB flash drive in it that would just overwrite data. This one was 128MB: http://www.neowin.net/news/fake-chinese-500-gb-external-drive-is-one-clever-paperweight-literally

If someone is that much of a scumbag w/ no self-respect, it wouldn’t surprise me if they embedded some malware in there too (getting a “fresh” USB stick, for instance; it’s definitely known they can contain malware).

As far as the “encryptor” idea, that’s what I’m thinking of doing next. Pretty simple/straightforward project; the ATmega would be doing the crypto though. As I move forward, I’ll be using a lot of MCUs (mostly AVRs for now) to do separate tasks. If I need to sanitize a smartcard after a file transfer, I feel better doing it via an isolated MCU than via some program in Windows or Linux. I’d be really interested to see some persistent malware (that does nontrivial exfil and can take commands) hiding in MCUs.

Thoth January 10, 2016 4:35 AM

@Figureitout
I thought you meant 64GB Smart Cards. It should be termed a 64GB microSD card.

Once you have got your encryptor experiments going, do show us something interesting 🙂 .

Talking about Smart Cards, I need a reliable way to remove the tamper-evident epoxy, and then I will have full access to the IC circuit of a Smart Card chip I am attempting to play with. Most Smart Cards are pretty easy to decap by simply removing the tamper-evident epoxy, and then you have the circuit in front of you.

Figureitout January 10, 2016 3:48 PM

Thoth
–Ah ok. Yeah, I will (had some difficulties getting crypto to work etc.). Check out the latest squid page, I just finished version 1 of what I’ve been working on (kinda) over winter break. Hmm, if I see anything on decapping epoxy I’ll let you know (I think you’ve seen bunnie’s talk on them, probably the sickest hack I’ve seen on them yet: http://www.bunniestudios.com/blog/?p=3554 ).

ianf January 11, 2016 1:49 PM

Update to OT @ tyr,

last I wrote, in half-rhetorical fashion: “isn’t real estate essentially the key to everything?”

I just came across this revolting item that is nowhere near… rhetorical.

PROMINENT ISRAELI LEFTIES CAUGHT ENTRAPPING PALESTINIANS TRYING TO SELL THEM LAND

    A prestigious Israeli TV news magazine caught a famous civil rights activist bragging about handing Palestinians [wannabe land speculators] over to the PA’s secret police. The Palestinians will now likely face torture or death. (tabletmag.com)

For those who still don’t get it: strange bedfellows of the Palestinian-Israeli conflict in a nutshell. Over and out.

Fozi January 22, 2016 3:25 PM

German Heise news: Someone managed to “copy” EMV credit cards and go shopping with them.

The cards are loaded with a special app that identifies as a credit card, accepts any PIN and sends a more or less static response to the servers.

This works because the susceptible banks don’t bother to do the crypto required to check if the transaction is valid. Banks affected are “in the US, South America and Asia”.
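
A greatly simplified sketch of the check those banks are reportedly skipping: the issuer recomputes a MAC over the transaction data with the card’s key and compares it to the cryptogram it received. Real EMV uses 3DES session keys over a defined data format; HMAC-SHA256 here is only a stand-in to show the shape of the verification.

    import hmac, hashlib

    def issuer_accepts(card_key: bytes, transaction_data: bytes, cryptogram: bytes) -> bool:
        # Recompute the expected cryptogram and compare in constant time.
        expected = hmac.new(card_key, transaction_data, hashlib.sha256).digest()
        return hmac.compare_digest(expected, cryptogram)

    # A cloned card replaying a "more or less static response" fails this check,
    # because the transaction data (amount, counter, unpredictable number) changes.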

Source article on heise.de (German)
