Squids on the Economist Cover

Four squids on the cover of this week’s Economist represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.

Interestingly, these are the same four companies I’ve been listing as the new corporate threat to the Internet.

The first of three pillars propping up this outside threat is big data collectors, which, in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn’t make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, creditworthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.

Notice that Microsoft didn’t make the Economist’s cut either.

I gave that talk at the RSA Conference in February of this year. The link in the article is from another conference the week before, where I test-drove the talk.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 7, 2012 at 4:04 PM


RobertT December 7, 2012 6:37 PM

I’m curious what others think of Apple’s plans to bring some manufacturing back to the US.

My first thought was:
Is this move to address supply chain security issues with iPads, iPhones, etc.?

It is clear that iPhones are extremely popular with all sorts of groups, ranging from active-duty soldiers to politicians, which all makes me wonder if some TLA has suggested the US needs supply chain security.

It’ll be interesting to see which products are locally produced and who the buyer is.

Shawn Smith December 7, 2012 6:56 PM


I had the impression that the U.S. manufacturing was for Macs only. My guess is that since those are the highest-priced items Apple produces, they could more easily absorb the increased labor costs. And who’s going to care if their $2000 MacBook Pro suddenly costs $2500? (Numbers pulled straight from my nether regions.) Are they suddenly going to switch to a Windows 8 box, or worse yet, some GNU/Linux variant?

Clive Robinson December 7, 2012 8:56 PM

These global companies may appear to be the Four Horsemen of the Data Apocalypse [1][2], but are they the men, the horses, or just a vanguard for worse to come, as in the Hades that follows Death [4]?

The first and perhaps most important thing to note is that the attitude of these organisations to people’s data is not new; it is the result of an existing market for personal and other data. And arguably this market has existed ever since man started to trade.

As with any trade, the market cuts both ways: we tend to view it favourably or positively when it helps us get what we want, and unfavourably or negatively when it hinders us. But in the past thirty years its scope to do harm has vastly increased, and we now see that mere hindrance is just the tip of the iceberg compared to the very real harm data collection and analysis can do, not just to individuals but to entire groups of people.

Why have we allowed this to happen? Could we have prevented it? And can we stop it? These are the most obvious questions that arise. But it is understanding historically why the data market exists, and then why it has apparently mushroomed out of control, that are the important first steps in explaining the existence of these four horsemen of the data apocalypse.

Historically, minimal data was kept, because keeping it was a high-cost activity that was only undertaken as a necessary part of some other activity, such as bookkeeping or the ordering of stock. And historically it only occurred in larger businesses, to prevent the employees robbing the owners.

Detailed personal records only really started with taxation and law, where there was a liability or judgment involving identifying people outside of a small community.

It is only in more recent times that collecting personal data has spread into science, medicine and other professions. As this increase in personal data collection has occurred, so has our ability to treat information in an abstract way and thus make it amenable to the application of the tools of logic and mathematics.

But why has this occurred within the past few hundred years? Oddly, perhaps, some of these tools arose because of other business needs, such as in the trades of gambling and insurance, to find information within the information that differentiated between profit and bankruptcy.

Thus the primary driver of collecting data was, until very recently, being part of a business or legal process, not a business in its own right.

It was the industry of marketing which started the collection of data as a business in its own right. But that raises the question of why; that is, historically, why was there money to be made in collecting and analysing data?

A little-discussed point is that in our modern world of competing choice, more than 90% of new products placed on the market never succeed. And these products are actually just one or two of the hundreds of product ideas that are proposed and analysed. Thus at best only 1 in 1000 product ideas actually earns money in a competitive market. Anything that can improve the odds is going to be of value; thus the idea of asking people what they wanted, what they preferred and why became of significant interest, and market research was the result.
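As a quick back-of-envelope check of that 1-in-1000 figure (the 90% failure rate and the "one or two of hundreds" ratio are the comment’s own rough estimates, not hard data):

```python
# Rough arithmetic behind the comment's "1 in 1000" claim.
ideas_considered = 200      # "hundreds of product ideas" (assumed value)
ideas_launched = 2          # "just one or two" actually reach the market
launch_success_rate = 0.10  # "more than 90% ... never succeed"

p_idea_earns_money = (ideas_launched / ideas_considered) * launch_success_rate
print(round(p_idea_earns_money, 6))  # about 0.001, i.e. roughly 1 in 1000
```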

But although the data was useful in the general case of trends, it was not useful in the specific case, because what was recorded was not a direct personal identifier such as name and address but the more general “market class”. In the main this was to make the data more comprehensible to its human users.

However, in the area of government, personal data was recorded for taxation or census reasons, and the volumes of data required armies of individuals to process.

The scope has increased because the cost of collecting and storing data has fallen dramatically, and the cost of sifting it for valuable nuggets or gems of information has likewise fallen to near zero, in both cases due to the rapid progress and rapidly falling price-to-performance ratio of information systems.

It is thus the technology that is the tool in the hand of the horseman, and it is the horseman that sets the direction of his steed.

But it is important to notice something other than the near-zero cost: in recent times there has been a change of direction.

But it is with the hordes that follow where the real pain occurs; that is, it is those who take the raw clay of the information and mould further tools from it, which they use as the tools of their trade.

[1] In the Christian viewpoint the four horsemen of Conquest [3], War, Famine and Death [4] appear as a result of “the Lamb of God” (Jesus) opening the first four of the seven seals on the scroll in God’s right hand. And they are sent down upon the earth as harbingers of the Last Judgment.

[2] There are many views held by religious scholars and others as to the meanings of the Horsemen, most shaped by historical context. So viewing them as representing personal “Data” would just be an additional view in the light of our modern world.

[3] Some view the first (white horse) as Pestilence rather than Conquest; the origins of this appear to be some considerable time after the original Hebrew texts.

[4] The first three horsemen carry some kind of tool (bow, sword and scales); the fourth, Death, is followed by the legions of Hades.

Clive Robinson December 7, 2012 9:37 PM

Aghh, it’s done it again; a thousand curses on this smart phone…

Sorry, the above got mangled; I shall try to put it right.

The section with all the oooo’s should have read,

However, in the area of government, personal data was recorded for taxation or census reasons, and the volumes of data required armies of individuals to process. It was in the US Census Bureau that an individual (Herman Hollerith) had the idea of punching it up on cards that could be read and counted thousands of times faster by a machine. It was the automation and speed of this early information systems technology that made it possible to ask more searching questions, which gave rise to the first steps of the mushrooming of personal data collection.

The rest of my post follows the paragraph:

But it is with the hordes that follow where the real pain occurs; that is, it is those who take the raw clay of the information and mould further tools from it, which they use as the tools of their trade.

To continue,

Who is to blame for the recent change: the four horsemen, or the hordes that consume the data for their own ends?

I’m sure the four horsemen would say they are simply responding to the needs of the market. And likewise I’m sure the hordes would say they are simply using the data the four horsemen have provided. But this finger-pointing is a pointless circular argument; you need to view it in a hierarchical way, starting from the bottom.

It is those actually formulating the questions that cause people harm who have decided how the tool is to cut. They can only do this if the data supplied allows them to ask their questions about individuals. Thus those supplying the data have an implied duty of care to prevent individuals being identified.

But this is a problem: it is in fact very difficult to provide data that is sufficiently anonymous but still of use. Time and time again, supposedly anonymous data has been found to be anything but. Supposedly ethical researchers claim that they need more detailed data, and the data providers are more than happy to provide it for money.

We have seen this in the UK with the UK Government selling quite in-depth medical records to companies for research. Unfortunately, as has been demonstrated, it is relatively trivial to cross-compare data with other data sources and reveal the people beneath.
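The cross-comparison trick described above is easy to sketch. The records below are entirely invented; the point is only that a handful of quasi-identifiers (postcode, birth year, sex) can link a “de-identified” medical record back to a name in some other, identified data source:

```python
# Toy linkage attack: "anonymised" medical records are re-identified by
# joining on quasi-identifiers shared with an identified public register.
# All records here are invented.
anonymised_medical = [
    {"postcode": "SW1A", "birth_year": 1961, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "EC2M", "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]
public_register = [
    {"name": "A. Smith", "postcode": "SW1A", "birth_year": 1961, "sex": "F"},
    {"name": "B. Jones", "postcode": "EC2M", "birth_year": 1975, "sex": "M"},
]

QUASI_IDS = ("postcode", "birth_year", "sex")

def reidentify(medical, register):
    """Return (name, diagnosis) pairs where the quasi-identifiers match uniquely."""
    hits = []
    for rec in medical:
        key = tuple(rec[q] for q in QUASI_IDS)
        matches = [p for p in register
                   if tuple(p[q] for q in QUASI_IDS) == key]
        if len(matches) == 1:          # a unique match means re-identification
            hits.append((matches[0]["name"], rec["diagnosis"]))
    return hits

print(reidentify(anonymised_medical, public_register))
# [('A. Smith', 'asthma'), ('B. Jones', 'diabetes')]
```

Real linkage attacks (e.g. against supposedly anonymised hospital data) work the same way, just with larger tables and fuzzier matching.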

We must accept the fact that in our current political systems there is never going to be legislation to stop the data gathering or its sale. The best we can hope for is legislation that ensures people have ownership of their personal data, but even that is unlikely. Maybe we will get legislation to ensure effective anonymisation of data, but I’m not hopeful either that it’s possible or that the legislation could be implemented.

Clive Robinson December 7, 2012 10:02 PM

@ RobertT,

I’m curious what others think of Apple’s plans to bring some manufacturing back to the US?

Two immediate thoughts spring to mind,

The first is to avoid the costly legal battles and import-control tactics that have beset Apple in their war against Google et al.

The second is a slice of US Government and military budgets. It is no secret that ordinary workers and soldiers use Apple products in preference to the equipment they are supplied with, for the simple reasons of “it works”, “it’s usable” and “it’s the right size” (most of which is untrue for Mil/Gov equipment).

However, there is a significant problem with the Apple kit: compared to the issued Mil/Gov equipment, it’s not secure. Now we know from the “Obamaberry” that it is possible to get a level of security on mobile devices that the NSA considers OK for the US President to use, and it would appear to be mainly a software, not hardware, issue. Currently, of the mainstream OSs on mobile devices, Apple’s iOS is the one that is most likely to be made more secure. However, I can’t see the likes of the US Gov (i.e. the NSA) wanting the unprotected code and finished hardware to be in premises in China or other Far East nations such as Korea, where Chinese agents have been found to be carrying out espionage.

Of course, this still leaves open the question of the use of Far East silicon; currently Apple uses parts from a fairly major Korean manufacturer with whom they have had a few legal disputes. However, the siting of the new Apple plant in Texas, not far from other silicon fab plants, leaves open the possibility of them actually “rolling their own”, or at least using parts whose production the US Gov has approved of.

RobertT December 7, 2012 11:46 PM

@Clive R
” However the siting of the new Apple plant in Texas not far from other silicon Fab plants leaves open the possability of them actually “rolling their own”

Nice idea, BUT I don’t believe there is a single fab capable of sub-90nm volume production anywhere in Texas. New smart phone designs are targeted at below 28nm, so there is absolutely no way to retarget these designs for a 90nm fab.

It will be interesting if Apple starts competing in Moto’s traditional secure government radio market place. Obviously they will win the desirability game if they arrive with iPhone as a starting platform for a secure radio product.

jfw December 8, 2012 3:18 AM


wrt. Feudal Security – about 20 years ago this analogy was taken one step further in a discussion here: “Could we apply the security advantage a civil state has over a feudal one to computers?”

Has been a pet project since.

Turns out we can. Find the requirements we would recommend everybody demand from their IT at askemos.org. (There’s also a proof-of-concept implementation linked.)


Clive Robinson December 8, 2012 4:00 AM

@ RobertT,

It will be interesting if Apple starts competing in Moto’s traditional secure government radio market place. Obviously they will win the desirability game if they arrive with iPhone as a starting platform for a secure radio product

It won’t be just the “desirability game” they will win; it’s the level of integration of functions other than just “radio”. We have seen many police forces and other “security” organisations trying to cobble together systems involving the presentation of high-quality pictorial information, the sending back of close-to-live image feeds, etc., all based on what is in effect a small PC using IP and USB to communicate and talk to peripherals. The “integration game” that has given us our trendy smart phones is going to happen to secure radio systems for exactly the same reason it did to clunky mobile phones: “it’s what the customer wants” and in many cases “needs”.

A couple of years ago I saw a very early prototype system based on an HTC phone that had a built-in fingerprint reader. Somebody had hacked together a demonstration package to act as a criminal-records search system. When you questioned someone, the first thing you did was scan their fingerprint and send it back via the GSM network. The backend system would check for probable matches and push the photos back for on-the-spot ID checking. It was estimated that it would get outstanding-arrest and fine sort-out rates up to 70-80%, simply due to the nature of repeat offences (speeding drivers will always speed, drunk drivers will always drink, etc.).

The second advantage comes from the fact that the Arab Spring is opening up a whole new set of markets, and Apple is not seen the way Motorola has been in the Middle East. It will be interesting to see if Android inherits the anti-Motorola mantle in the Middle East now that Google has purchased that part of Motorola to fend off Apple in patent litigation.

I’ll tell you one thing for free: trying to wrap my head around the “wheels within wheels” of the smart phone litigation frenzy and how it affects regional thinking makes my head hurt 🙁

Blog Reader One December 8, 2012 4:20 AM

Freedom to Tinker talks about the situation where a vendor isn’t interested in dealing with a security issue, causing the process of “responsible disclosure” to fail.


Also from Freedom to Tinker is an entry about trade sanctions that restrict the export of certain IT hardware/software to repressive and/or hostile regimes. From this, there is the question of how to differentiate technologies that help dissidents and activists versus technologies that help to increase oppression and enemy power.


e December 8, 2012 12:50 PM

Posted this previously, but I presume it got moderated for quoting the original article.


I don’t have much sympathy for the victim here: I’ve had a visceral dislike of filtering software since, oh, middle school.

But, from a security standpoint, the story of CYBERsitter and its founder vs. one of China’s most prolific black-hat penetration teams offers a chance to ask the question — what should he have done?

For three years they waged a war of harassment and attrition against the guy’s firm, trying to bleed them into bankruptcy in order to end the lawsuit he’d filed against the Chinese government (and many Chinese companies). They all but succeeded.

The guy spent those years responding with band-aid homebrew solutions to every problem, and in the process came away personally worse for wear and a fair bit poorer, thanks to the $58,000-a-month drop in revenue the Chinese mischief caused.

Here are my suggestions to anyone finding themselves in his shoes. What are yours?

o) There’s always a point where you realize you won’t walk away from this the same as you came in. For the CYBERsitter founder, that seems to have been when he got a professional analysis done of some of their malware.

At this point, realize that taking a loss now, on your terms, is infinitely better than dying of a thousand papercuts on theirs.

o) It seems like he had a fairly small company, probably under 20 employees. So go around and do a survey of what everyone uses their computers for, what software they run, etc.

o) Reboot your servers — to Linux or BSD LiveCD/DVD images. Assuming you don’t have the expertise to put together custom/preconfigured ones, put all the configuration steps into a script that you can run on boot. If the server starts acting up again, reboot.

o) If migrating from your previous configuration takes more knowledge than you have time to learn, bring in a consulting firm.

o) Get a Linux support contract, not too long term, but the platinum-plated model with more bells and whistles than a carnival in a carillon.

o) If you or the support company can do custom LiveCDs, put some together with maximum practical hardening. No services listening, no Bluetooth or wireless support, boots to a user without sudo access, etc.

o) Call your staff together, and explain to them that the company is being electronically attacked by people who want to drive the company out of business. If they succeed, everyone will lose their jobs. You don’t intend to let them succeed. But fighting them off will take some big changes, and inconvenient ones. Dealing with them is the price the company must pay to survive. “We will fight them on the beaches, we will fight them on the landing grounds…”

o) Hand out pre-paid burner phones. “Forget the phones on your desk, use these.” PBXes are death, at least for the moment. For style points, get some “THIS PHONE IS TAPPED” stickers from 2600 and put them on people’s desk phones.

o) “Everyone, this is Linux. Here’s where you click to browse the web. Here’s where you click to edit documents. Here’s a few printed-off screenshots with “DOCUMENTS” and “WEB” labels hastily Photoshopped in large friendly letters. If you get confused, I photoshopped the number to call at the bottom.”

o) Hand out Linux LiveCDs, hard drive docking stations, and USB CD-drives to everyone. From now on any computer that touches the Internet runs on a LiveCD. “If you make a mistake and they get in your computer, just save your work and reboot. When you go home for the day, lock the hard drive in your desk.”

The hard drives should be formatted to Ext2/3/4 (or another open-source filesystem): this a) ensures nobody ‘backslides’ and boots up Windows, b) cuts the odds of a filesystem-level autorun vulnerability, and c) makes forensics easier if the hackers do find one.

o) Using your earlier survey, get the people most likely to need help in touch with support. Come up with field-expedient (read: “not pretty but it works”) ways to meet their computing needs.

o) People that NEED to run Windows applications get VirtualBox and a VirtualBox image, configured to have no network access (but with the add-ons so you can drag and drop files). If possible, set up the applications to write all data to a “shared folder” instead of the VirtualBox image, so if the Windows system gets messed with — just copy over a fresh version of the image and reboot it.

o) Step back, take a deep breath. Immediate disasters (hopefully, mostly) averted. Now go hire some forensics geeks, pen-testers, security experts, and TSCM sweepers — and spend a few weeks of intense conference-room sessions re-engineering the company’s network.
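One of the items in the list above, putting every server configuration step into a script run at boot, might look roughly like this sketch. All device names, addresses, ports and services below are invented placeholders, and the function defaults to printing the plan rather than executing anything:

```python
import subprocess

# Boot-time configuration for a LiveCD server: everything the read-only
# live image can't bake in is reapplied by this script on every reboot,
# so a compromised machine is "cleaned" by power-cycling it.
# All paths, device names and addresses are placeholders.
STEPS = [
    ["mount", "/dev/sdb1", "/srv/data"],            # persistent data only
    ["sysctl", "-w", "net.ipv4.ip_forward=0"],
    ["iptables", "-P", "INPUT", "DROP"],            # default-deny inbound
    ["iptables", "-A", "INPUT", "-s", "10.0.0.0/24", "-p", "tcp",
     "--dport", "443", "-j", "ACCEPT"],             # allow LAN HTTPS only
    ["systemctl", "start", "nginx"],
]

def apply_steps(steps, dry_run=True):
    """Print (dry run) or execute each configuration command, in order."""
    executed = []
    for cmd in steps:
        line = " ".join(cmd)
        executed.append(line)
        if dry_run:
            print("would run:", line)
        else:
            subprocess.run(cmd, check=True)  # stop at the first failure
    return executed

apply_steps(STEPS)  # dry run: shows the plan without touching the system
```

Pass `dry_run=False` (as root, on the live system) to actually apply the steps; keeping the script in version control gives you a reviewable record of the server’s entire mutable state.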

Figureitout December 8, 2012 5:42 PM

I’m curious what others think of Apple’s plans to bring some manufacturing back to the US.
–Smart PR move, associating suicide nets w/ that iThing isn’t an image they want. Cook came out saying they have responsibility to create jobs, but not certain jobs.

The best we can hope for is legislation that ensures people have ownership of their personal data
@Clive Robinson
–I certainly do not want to place my hope in a legislature. So if that’s the only option your wisdom sees, yikes. Maybe we should stop caring about privacy.

kashmarek December 8, 2012 6:04 PM

@Clive R

To better understand the marketing aspect of this entire ordeal, view the 4 part BBC series “Century of the Self”.

Nick P December 8, 2012 11:24 PM


I went back to check on those NSF-sponsored projects that started this year. One already has an interesting deliverable: a formally verified web browser. The verification targets implementation more than an abstract model. How much it proves is subject to debate, but it’s already usable.


They use modern components in the design by imitating Google Chrome’s strategy. Chrome itself was inspired by the OP Secure Web Browser, which was written in Java and used some verification technology. The DARPABrowser is another project from past secure browser research.

Naveed December 9, 2012 3:13 AM

I smell an attack on US corporates. The point is that all four of them are US-based companies, the benefit goes to their competitors, and the loser will be the US itself. The enemies will try (maybe they were trying, or are still trying) to fan the flames, i.e. give multi-million (maybe multi-billion) projects to these four companies. They may have used old techniques, like giving one company a project to build, say project xyz, and another project to another company, say project abc. Only the enemy knows that projects xyz and abc have a clash of interests: they are designed in such a way that, in the big picture, each company will eat the other company’s profit. A company whose objective is profit may be unable to assess it, and a national secret agency won’t identify the long-term damage. I think Schneier is right to point out this clash, i.e. one company eating another company’s profit.

Who is behind it is the real question. I am not pointing my finger at another country or nation; rather, the US and its corporates need to open their eyes and understand what is happening behind the scenes. A formula to understand who is behind it is simple: a country that is publicly not present in those projects is the culprit country… That country is no doubt a superpower with clever people (geeks) and lots of available funds… You know the answers (probably a country with veto power or a G8 nation)…

If one nation (or a group of nations) can build something like Flame, Duqu and Stuxnet because of their expertise, then another nation can build something like this (a lesson to be careful about next time).

The key is that each company must identify the area of its expertise. If Apple is good at manufacturing computers and phones and can develop software for them, then it should remain within those areas. If Google is good at application and system software for computers and mobiles, then it should remain within those areas. If Amazon is good at selling consumable items and Facebook is good at social networks, then they must remain within those areas.

I think that the US needs a regulatory body here: a national organisation which is solely responsible for monitoring the services offered by a corporate and its products. The key is to avoid the clash of interests and fundamentalism within US corporates. That organisation would act as an advisory board.

I leave the discussion to some other day, but I think people like Clive and Schneier understand what I mean… They will do the rest of the job, i.e. informing people about it…

All the Best.

Clive Robinson December 9, 2012 5:58 AM

@ Figureitout,

–I certainly do not want to place my hope in a legislature. So if that’s the only option your wisdom sees, yikes. Maybe we should stop caring about privacy

Or take personal responsibility for our privacy and stop giving personal information, in its various forms, away.

However, it’s not easy, because according to forensic theory, wherever we go we leave a little piece of ourselves and we take a little piece of each place with us (a rewording of Dr. Edmond Locard’s Exchange Principle: “Every contact leaves a trace.” [1]). However, as we know, things wear with time (entropy), so if I leave a fingerprint on a glass or get dirt on my shoes, with time these traces will become so small as to not be measurable [1] in the general “noise”.

Thus a forensic scene needs to be fresh to exploit Locard’s Principle, and a clever thief will know that stealing something “out of sight” will not attract immediate attention, so time and contacts made by other persons will obliterate any recognisable traces linkable to them. (It’s been found that the MO of some female “cat burglars” is to take things that won’t be quickly missed, because the chances are the theft will never get reported as a crime, and even if it is, other people will have obliterated their traces.)

This idea actually applies to money, not just in its use for transactions but also in “stealing the King’s gold” by shaving fractions of the gold off coins. One of Sir Isaac Newton’s inventions still very much in use (along with the cat flap 😉) is the milled edge on coins, to make the theft more obvious.

However, for most practical purposes, cash used in transactions is even today not really traceable after a couple of transactions, because coins don’t have serial numbers, and even though bank notes do, people generally do not record them. But although the cash itself may not be traceable, transactions are, and have been for many thousands of years. It’s only in more recent times, though, that transactions have been tied to individuals in some way.

In times past, people working for other people realised that if they took a little here and a little there, they could get away with it. In sole-trader businesses they diddled the customer (look up “miller’s thumb”), which is why in time we got certified weights and measures and random inspections. In larger businesses, however, it was the employees stealing from the owner, or one partner stealing from another.

So ledgers of various forms were introduced, in which all sales and purchases etc. were recorded (the earliest known are on clay tablets); this was in effect the first use of information to stop crime. However, the more clever crooks realised that they could make false entries in the ledger and carry on their crimes. So in Italy, at some point prior to 1494 (when Pacioli published a description of the system in Venice), somebody came up with the idea of double-entry bookkeeping, where two different and unrelated employees filled out two ledgers that were then reconciled in a third; if they did not balance, then somebody was making incorrect entries for some reason. Even at this point in time cash transactions were recorded, but usually without recording by whom (apart from in the separate sales and purchase ledgers).
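The reconciliation idea behind double-entry bookkeeping can be sketched in a few lines; the clerks, transaction ids and amounts below are invented for illustration:

```python
# Toy double-entry check: two clerks keep independent ledgers of the same
# transactions, and reconciliation flags any entry on which they disagree.
sales_clerk = {"txn-001": 120, "txn-002": 75, "txn-003": 200}
cash_clerk  = {"txn-001": 120, "txn-002": 60, "txn-003": 200}  # 15 skimmed

def reconcile(ledger_a, ledger_b):
    """Return the transaction ids whose amounts differ between the ledgers."""
    ids = set(ledger_a) | set(ledger_b)
    return sorted(t for t in ids if ledger_a.get(t) != ledger_b.get(t))

print(reconcile(sales_clerk, cash_clerk))  # ['txn-002']
```

The fraud only survives if both clerks make the same false entry, which is exactly the collusion requirement the historical system was designed to impose.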

However, the smartest of the crooks worked out other ways to diddle their employers. As most people are aware, as a general rule the more you buy of something, the less you pay per item. What is less well known is that large organisations get discounts on all items they purchase. So a clever crook would sell for cash without discount to an individual, but then register the sale in the ledger at the discounted rate and pocket the difference. To stop this, business owners had the actual purchaser’s name recorded in the ledger for discounted sales. Likewise, goods with guarantees above a certain value had the purchaser’s name and other details recorded.

However, even with the details recorded, it was very infrequent in the past that such information became known to those in authority. But those times have changed, and some sales (of guns, vehicles, televisions, etc.) now have to be recorded and sent to the appropriate authorities.

Thus in times past, before CCTV and credit cards and easy audio recording, privacy was in effect a given: you simply had to make an agreement to meet someone and walk away from any place of potential concealment or covert observation to have privacy. When the founding fathers came up with their ideas, the loss of this sort of privacy did not even occur to them.

So to regain that sort of privacy, you have in effect to take yourself back in time and live your life that way, in a place where there is no CCTV, where tendering cash where you are not known is considered normal, and where you avoid any method by which your words might be recorded and thus converted to information.

But there are two problems, firstly is it possible to live like that? Yes but it’s not easy these days. The second is will those who claim to represent society let you live like that? And the answer to that is increasingly no.

As an individual, I don’t “use plastic”, only cash, and I don’t have any loyalty cards etc.; as I’m not allowed to drive any longer (for medical reasons), I use public transport, which was once fairly anonymous but has sadly ceased to be so in recent times due to the proliferation of CCTV. However, I’m forced, for various “social inclusion” reasons, to use various communications technologies, none of which can be made truly anonymous and remain useful for more than a very short while. So the level of privacy depends on what I disclose over them (to whom is irrelevant in these days of mass surveillance; you should assume it’s been recorded as a given [2]).

Further, we are being encouraged in one way or another to use electronic communications (email, fax, etc.) for what would once have been a hand-written note or typed letter. Sadly, few of us realise just how much extra information gets included in the file formats of printed documents. We send these files with their hidden information and in so doing leak information that may be of use to harm us. Thus my advice to people of “Paper, paper, NEVER data” when submitting documents (if they insist on data, then print the document out, take it to another machine, scan it into a standard graphics file format and send them that instead).
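The “hidden information in file formats” point is easy to demonstrate: a .docx file, for instance, is just a zip archive whose docProps/core.xml records author names, machine identifiers and timestamps that survive long after the visible text is sanitised. The sketch below builds a minimal stand-in archive (the names in it are invented) rather than using a real Word file:

```python
import io
import zipfile

# A minimal .docx-style archive: visible body text plus the metadata part
# that word processors fill in automatically. The creator and machine
# names here are invented placeholders.
core_xml = (
    "<cp:coreProperties "
    "xmlns:cp='http://schemas.openxmlformats.org/package/2006/"
    "metadata/core-properties' "
    "xmlns:dc='http://purl.org/dc/elements/1.1/'>"
    "<dc:creator>j.smith</dc:creator>"
    "<cp:lastModifiedBy>legal-dept-pc-07</cp:lastModifiedBy>"
    "</cp:coreProperties>"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", "<w:document>public text</w:document>")
    zf.writestr("docProps/core.xml", core_xml)

# The "recipient" never opens a word processor; they read the archive directly:
with zipfile.ZipFile(buf) as zf:
    leaked = zf.read("docProps/core.xml").decode()

print("creator leaked:", "j.smith" in leaked)  # True
```

Printing, scanning and sending a flat image, as suggested above, discards this metadata layer entirely (at the cost of the scanner adding artefacts of its own).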

So we have to be guarded not only in what we say but also in how we divulge information in other ways. Maintaining your privacy is very much down to you as an individual, and it’s not as easy to do as it is to say; in fact it is actually very hard to do.

A while ago a bunch of journalists and editors at a well-known UK newspaper had reason to try to communicate anonymously. You would have thought that they, of all people, would have been very much aware of the issues involved and thus could have made it work. No, they couldn’t: they lacked the discipline to make it work, and one of them acknowledged this in an article they wrote (I did post a link to it a while ago on this blog, marked for Nick P’s attention, but can’t find it currently).

So the choice appears to fall very much to us as individuals: do we want privacy, and accept the consequences of trying to achieve it, or do we just give up and hope that we will get legislation to protect us from ourselves?

Let’s put it this way: I’m not holding my breath on legislation; the last lot of real privacy legislation you had in the US came after a revolution and the attendant wars and loss of blood, limbs and life, as well as economic downturn.

And even if you did get legislation it would only affect one jurisdiction; modern global companies can choose which jurisdiction the data ends up in, as we have seen in Europe. Europe has data protection legislation that, whilst lacking, is way better than in the US or other jurisdictions. The US Government was supposedly guaranteeing these rights with “safe harbour” agreements. Well, international companies simply “outsourced” the data collection and storage to another jurisdiction where there were no agreements (one of the above four horsemen was apparently caught out doing this some time ago).

So if you want privacy, start planning now and behave appropriately 100% of the time, and don’t sway or be coerced from this position in any way. And good luck, because I think you will need it.

[1] Whilst the Locard Exchange Principle is true for physical objects, it also suffers from a form of entropy [2]. That is, further contact dilutes the trace and time decays it in various ways, which is an important consideration for those wishing not to leave a usable trace.

[2] Due to the near zero cost of duplication and reliable storage, non-physical information does not suffer from this form of entropy. The storage does however suffer from the effects of time, but with reliable storage technology not usefully in time scales meaningful to humans (i.e. 100 years or less).

OzJuggler December 9, 2012 10:05 AM

Sad story from last week, when two radio DJs in Australia called up the hospital in Britain treating Princess Catherine and impersonated the voices of Prince Charles and Lady Camilla to successfully obtain information about the prospective mother and baby.


According to AAP: “Nurse Saldanha had answered the phone and transferred the call to a colleague, who went on to give sensitive information to the pair about the duchess.”

Sadly, Nurse Saldanha was later found dead in her apartment allegedly from suicide.

The chairman of Southern Cross Austereo declared “the outcome was unforeseeable and very regrettable”.

There are numerous social balance issues which arise out of this case, but adverse reactions like this are so rare that it may not be worth instituting costly countermeasures proportional to this one case. Even so I’m sure you Schneierphiles can see some information security lessons to be learned here.

Perhaps as a start: no matter how expensive the sender authentication step may be, it should not be significantly more expensive for some claimed identities than others. Otherwise the Authenticator may skip authentication or use cheap heuristics when encountering expensive identities that can be forged by an attacker skilled in the heuristic.
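That principle can be made concrete with a toy sketch (all names and phone numbers below are invented): instead of judging the inbound call, the receptionist always performs the same cheap, uniform check of ringing back the number on file for the claimed identity, so a “royal” caller gets no shortcut.

```python
# Hypothetical directory of numbers on file; every entry is illustrative.
DIRECTORY = {
    "ward_nurse": "+44-20-7000-0001",
    "prince_of_wales": "+44-20-7000-0002",
}

def authenticate(claimed_identity: str, callback, directory=DIRECTORY) -> bool:
    """Identical procedure for every caller: never trust the inbound call.

    Hang up and ring the number on file for the claimed identity. The cost
    of the check does not depend on how grand the claimed identity is.
    """
    number = directory.get(claimed_identity)
    if number is None:
        return False          # unknown identity: fail closed, no heuristics
    return callback(number)   # reached the real person at the number on file?

# Simulate a world where only the palace switchboard answers its own number.
reached = lambda number: number == "+44-20-7000-0002"

print(authenticate("prince_of_wales", reached))  # True, via the number on file
print(authenticate("unknown_caller", reached))   # False
```

The design choice is that the expensive-looking identity triggers exactly the same procedure as any other, which removes the attacker’s incentive to impersonate someone “too important to question”.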

rocky December 9, 2012 3:59 PM

U.N. report reveals secret law enforcement techniques

“Point 201: Mentions a new covert communications technique using software defined high frequency radio receivers routed through the computer creating no logs, using no central server and extremely difficult for law enforcement to intercept.”


cred: http://www.hacker10.com/other-computing/u-n-report-reveals-secret-law-enforcement-techniques/

Jeff December 9, 2012 7:37 PM

@Clive: “Aghh it’s done it again a thousand curses on this smart phone…” And your smart phone’s autocorrect is frequently doing “it’s” part, not there but elsewhere. 🙂

kashmarek December 10, 2012 6:41 AM

Automobile Black Boxes and GPS Flaws

Of course, we knew this was going to happen (found on Slashdot)…because it was created by the close-minded military (or some such). The same happened to that DARPA-funded thing which evolved into the internet. Full of holes…like the spelling error (‘unamed’, which should be ‘unmanned’).

I originally discovered this problem (for myself) several years ago in my former day job, when a dealership in Chile was moved 12 feet by an earthquake and could no longer be found or VALIDATED as the specific dealership based on its location. While this was a case of unintended consequences of natural geological change, I expect other such results are in the offing.

This kind of flaw should make network tracked automotive black boxes worthless (could take hundreds of billions of dollars to correct)…


My personal observations on the above:

They wanted to have all cars fully equipped with communications by peddling internet, radio, in-car TV, Wi-Fi and such, BEFORE requiring the black boxes, which, when it happens, WILL be connected to the communications network as well. Who needs stop light cameras and speed cameras? Just have the onboard recorder email the ticket directly to your bank account for the funds transfer. No court proceedings, no appeals; when your recorder has hit the legal limit, shut the car down, order the impound truck, revoke your license. Of course, you will never get to see your own recorder data. License plate readers will become so passé (you won’t even need license plates). When you enter the car with your electronic key, the recorder will know who is driving (the car won’t start if there is a problem with your license or your age or your physical condition based on your last medical update).

However, when the cars become fully automatic, the manufacturers will have a new battle, to eliminate their culpability in legal matters when all that s**t fails.

With the GPS flaws, that failure is already here.

Researchers Find Crippling Flaws In Global GPS
Posted by samzenpus on Sunday December 09, @08:59PM
from the where-in-the-world dept.

mask.of.sanity writes

“Researchers have developed attacks capable of crippling Global Positioning System infrastructure critical to the navigation of a host of military and civilian technologies including planes, ships and unamed drones. The novel remote attacks can be made against consumer and professional-grade receivers using $2500 worth of custom-built equipment. Researchers from Carnegie Mellon University and Coherent Navigation detailed the attacks in a paper. (pdf)”

Clive Robinson December 10, 2012 8:47 AM

@ Jeff,

And your smart phone’s autocorrect is frequently doing “it’s” part, not there but elsewhere.

It would be unfair of me to blame the phone for that, because autocorrect is not enabled (as it was driving me mad), so by and large typos etc. are down to me. (I do sometimes use the spelling it offers up, but it’s still down to me 🙁

That said, the phone is now well over a couple of years old and the keyboard is beginning to crack up (literally) and give “doubles for singles” on key presses.

However the worst thing I find is that a quick “proof read” does not show anything amiss to my jaded old eye, yet the second it’s posted I generally spot something cringeworthy.

I gather I’m not alone in this, as others have suggested an “edit feature” in the past.

Clive Robinson December 10, 2012 9:44 AM

@ Nick P,

This might be of interest,


It appears to affect all tabbed browsers, so it’s quite a general as opposed to specific attack, which means very much less work for the attacker and as such means it’s going to be easily packaged for script kiddies etc. to use.

Although the author is right that username/password on opening a channel is very poor authentication, the current proposals for change are nearly all “user” not “role” specific. We really need “role specific” in the modern world, as in essence, whilst we might be a single physical body, we individually have many unrelated roles (employee, bank account holder, credit card holder, member of a club, parent, etc.).

Nick P December 10, 2012 10:30 AM

@ Clive Robinson

Excellent catch! My page actually changed using his technique at the beginning of the video. It was creepy.

I confirmed it works on Chrome, too. The old advice of making sure the domain matches (and SSL is on) before doing anything sensitive still works. The problem, as author noted, is that users aren’t that paranoid b/c they believe in tab immutability. The good news is that we can still be semi-safe using whitelisted sites and by blacklisting domain operators doing this.
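The “check the domain and SSL before typing anything” habit amounts to a simple invariant, sketched below in Python purely as an illustration (the URLs and hostnames are made up): a long-open tab is only trustworthy for credentials if the scheme is still https and the hostname still matches exactly, since a tabnabbed page can imitate a login form but not the address bar.

```python
from urllib.parse import urlsplit

def safe_to_enter_credentials(current_url: str, expected_host: str) -> bool:
    """Re-check the address bar before typing a password into an old tab:
    the scheme must be https and the hostname must match exactly."""
    parts = urlsplit(current_url)
    return parts.scheme == "https" and parts.hostname == expected_host

# A tab left open may have quietly redrawn itself as a lookalike login page,
# but it cannot forge the address bar.
print(safe_to_enter_credentials("https://mail.example.com/login",
                                "mail.example.com"))   # True
print(safe_to_enter_credentials("http://mail.example.com/login",
                                "mail.example.com"))   # False: no TLS
print(safe_to_enter_credentials("https://mail.example.com.evil.net/login",
                                "mail.example.com"))   # False: lookalike host
```

The exact-match comparison matters: suffix or substring checks would pass the lookalike `mail.example.com.evil.net` host in the last case.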

I’m sending it to Krebs.

Figureitout December 10, 2012 12:37 PM

Stallman shelling out some hate for Ubuntu. Personally, very disappointing (if true); and a black eye for trust in the free software movement.

Clive Robinson December 10, 2012 1:43 PM

@ Figureitout,

Personally, very disappointing (if true); and a black eye for trust in the free software movement

I haven’t loaded up the newer Ubuntus in quite some time (I think 9.04 was the last); for various reasons it does not offer what I need or for that matter want.

I have been told that there are unexpected “call home” packets by someone I know who, for reasons to do with having forty or fifty (lab) machines, does not want Ubuntu trying to pull large updates from the site across the Internet but to use a local server instead. Their router blocks and logs packets from all machines in the lab to various sites to keep traffic down, and stuff has been showing up in the logs.

So yes, it looks like Ubuntu is guilty of calling home, but my friend did not detail exactly what (so it could be this or something else), just that it was “ETing”.

For Linux I out of habit for the past few years tend to use Debian, and one or two of the “tiny Linux distros” derived from or compatible with it, for doing certain types of (embedded) development, and RH/CentOS for people I have to support. I used to use SUSE but it’s got so many spare tires these days it looks like the “Michelin Man on Gross Out Duty”.

I’m looking at shifting over to one of the BSDs with an appropriate VM to run guest OSes in for doing development, and it appears to have some advantages for embedded designs for “known hardware” as opposed to “the latest hardware”.

All that said, it’s not just the ET potential that’s getting people scratchy about the more commodity GNU/*nix derivatives. It appears that much of the “open source dream” is turning slowly but surely into a nightmare for several reasons.


In part it’s very reminiscent of DLL-Hell we used to see on Windows (VBrunXXX to go ;-).

To be honest my view is “static linking” on binaries, and just the bare minimum to make the standard tools work (C lib). It’s not as though “memory” is the issue it once was, the one that caused the world and his dog of bedroom coders to have their Own-libs on top of other Strange-libs on top of GAK-libs, which might at some point touch down on one of the standard and stable libs, where to get things to work you had to get the exact same set of Weird-libs one or more of the developers used at some point…

Clive Robinson December 11, 2012 1:38 AM

@ Nick P,

I’m sending it to Krebs

I suspect that he will post it up, it is one of the things his site specializes in.

Something else that may be of interest to you,

As many people know, Auditors and Ratings Agencies provide very, very expensive services to investors and the like. In essence they are capitalizing on their presumed extensive knowledge and expertise.

However there has always been a sting in the tail: in return for taking a large chunk of cash for this supposed expertise, you have to in return “not rely on it”. That is, if you use the reports they issue, and you have paid large sums for them, you agree that the reports are actually worthless if their expertise is flawed and their reports wrong and you lose even larger sums of money based on them…

Well Australian Courts are now saying they cannot have their cake and eat it and not expect to suffer the effects when it goes wrong.

This potentially means that the rug has been pulled out from under the Auditors’ and Ratings Agencies’ feet, in that they cannot sell their products without incurring liability for the content, because they claim expertise and knowledge and should thus be both duly diligent in their actions and liable if they are found wanting in that respect.

It will be interesting to see where this ends up, because in effect the courts are telling them to clean up their act, accept the responsibility of doing an acceptable job, stand by what they say, and pay up if they get it wrong.

At the very least I would expect the actual reports to become even more devoid of information, whilst the disclaimers become so numerous that by the time you have read them the report will be out of date.


Figureitout December 11, 2012 2:55 AM

However there has always been a sting in the tail, in return for taking a large chunk of cash for this supposed expertise, you have to in return “not rely on it”
@Clive Robinson
–“Past results are not indicative of future success.” Why do people let their money be managed by people they don’t know, who really aren’t as intelligent as they make themselves seem? My father was able to outperform his financial adviser over multiple quarters (while holding a separate day job), which made me ask him, “Why are you paying this guy?” My brother is able to make decent profits with ≤ $1000. I don’t like to gamble and never had much interest in learning what the financial industry has to offer, as I believe electronics/power (properly controlled) will advance humanity more.

kashmarek December 11, 2012 6:00 AM

In TabNabbing, you gotta love this statement at the end:

“The Fix

This kind of attack once again shows how important our work is on the Firefox Account Manager to keep our users safe. User names and passwords are not a secure method of doing authentication; it’s time for the browser to take a more active role in being your smart user agent; one that knows who you are and keeps your identity, information, and credentials safe.”

In that last line: when you give ALL of your identity, information and credentials to some online tool, that is like giving away the crown jewels. It appears to be a form of phishing attack. You have to be your OWN smart user agent and keep that information to yourself (not in your browser, because if it is in your browser, it can be stolen).

vasiliy pupkin December 11, 2012 9:50 AM

Example of Microsoft’s attitude towards privacy: as I recall, their web-based e-mail sign-in screen had the “Remember Me” box checked by default. You need to opt out, not opt in.
Google’s policy is to provide information on an account to LE by warrant, but Microsoft is ready to do that without such protection of the account holder.
Amazon by default declared that all personal information you provided just to complete a particular transaction becomes the property of Amazon, and you have nothing to do with that.
All businesses are profit oriented, but I would label Microsoft as LE/government friendly, Google as customer friendly (only in comparison with Microsoft), and Amazon as predatory, based on their corporate attitudes towards privacy.

Nick P December 11, 2012 11:50 AM

@ Clive Robinson

Krebs wrote me back on the tabnabbing. It was first posted in 2010. He mentioned that I should look at the news reference in the article.


(Rolls eyes)

Re Financial Cryptography article

That IS interesting. The experts being held accountable for their expertise? Unthinkable! The equivalents over here might be housing code rules that make sense, SOX audits that prove something, or a reasonable EULA. I still think we can do a software or service liability scheme where at least the basics must be in place. They cost little to nothing. I’m sure inexpensive local contractors would spring up everywhere (and we already got certs waiting for them).

Clive Robinson December 12, 2012 11:17 AM

@ Nick P,

[Brian] Krebs wrote me back on the tabnabbing.

Hmm, there are no dates on Aza’s blog pages or where people post comments, and I found the page whilst searching for some information on a different type of attack on browser tabbed windows.

Oh, I don’t know if you know, but Aza’s father Jef Raskin was the designer/father of the Apple Mac ( http://en.m.wikipedia.org/wiki/Jef_Raskin )

OK, so as an attack it’s a little old, but as you found (as I did) it still works. Still, no harm done, so as the Aussies say, “No worries”.

So perhaps something a bit more up to date and certainly quite a bit more creepy: “hashtag retargeting” for raising advertising revenue,


I can see this quite easily becoming a new attack vector.

Speaking of new attack vectors: Near Field Communication (NFC), which is used in payment cards such as those for credit/debit/travel etc. A big chunk of their security to stop “value being misappropriated” rests on the inability of people to get inside the chips on the cards to find the encryption keys etc. (though in the case of the MiFare cards used for Oyster and other travel cards, researchers have got at the keys).

Now ask yourself what sort of security you get if, instead of using a bespoke tamper-resistant chip, you had an app running on a mobile phone which had an NFC RF interface in it?

Well, I’m guessing we will find out fairly soon. Some phones (from RIM etc.) have NFC built in, but most don’t. Broadcom however are planning on changing this in a big way with a new phone chip they have developed and announced,


Clive Robinson December 12, 2012 12:24 PM

@ Nick P,

This might amuse 😉


But sometimes a hacker gets paid to do their stuff and they get called a pen-tester; however many aren’t even close to being what a hacker should be,


Then of course some people let the side down yet again (MS and a quite worrying IE6-10 vulnerability),


scorn December 13, 2012 9:27 PM

CIA Head: We Will Spy On Americans Through Electrical Appliances


“Items of interest will be located, identified, monitored, and remotely controlled through technologies such as radio-frequency identification, sensor networks, tiny embedded servers, and energy harvesters – all connected to the next-generation internet using abundant, low-cost, and high-power computing,” Petraeus said.

Clive Robinson December 14, 2012 3:57 AM

@ Nick P,

Whilst on the ACLU site I noticed this little gem and I thought it might be of interest,


Basically, a guy called Milo has built a DIY drone with a gun mounted on it and has tested it out on static and (consenting) live targets.

Now the gun he used is a paintball gun, so it has a shorter range than a conventional hand gun and is unlikely to inflict injuries. But importantly it’s about the same size and weight…

So if somebody can DIY build an armed drone you have to ask how long before somebody goes the extra step and uses a real gun…

Oh, by the way, the thing about these small drones is that, unlike light aircraft and bigger, they are very difficult if not impossible to shoot down with current anti-aircraft missiles (the drone is too small, too slow, and does not generate enough heat or radar reflection).

Nick P December 14, 2012 12:04 PM

@ Clive Robinson

Re ACLU article on data mining

Yeah, it’s the same information more or less.

Re drone with gun

Already done with a live gun. Paintball guy is late to the party. And this was on a heli. 😉


Re previous comment about pentesters

“There are so many errors that JSLint gives up on this code at 39%.”

LOL. That actually takes work to do.

The piece about stealth hacking was nice. The white hats are too clumsy for their own good. Of course, I see little value in pen testing as it’s done in most organizations. Two gurus agreed in the past.

“There are two reasons why you might want to conduct a penetration test. One, you want to know whether a certain vulnerability is present because you’re going to fix it if it is. And two, you need a big, scary report to persuade your boss to spend more money. If neither is true, I’m going to save you a lot of money by giving you this free penetration test: You’re vulnerable. Now, go do something useful about it.” (Schneier)

“I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.” (Schneier)

Marcus Ranum isn’t so nice about it:

“The only favorable or useful outcome of a pen test is the worst one: the pen testers walk in and demonstrate, conclusively, that system security is horrible. Then you’ve got a 50/50 chance you’ll end up with a mandate to fix it, or nothing will happen. Because here’s the sad fact: organizations with sucky security already know it, and it is not going to be improved a great deal by having an outsider show up and point that out….”

“So what’s the realistic alternative to pen testing? It’s obvious: have a good security design, and then verify that it is in place and working correctly… The pen testing approach is to look at your network as a great big unknown, from which you try to derive clues using ping sweeps and port scans. I’ve got bad news for you, Dear Reader, if your network is so uncontrolled that the only way you can figure out what’s on it is by scanning, then your badness-o-meter is probably pegged on “sucks” already. All you are going to find is large, uncontrolled tracts of TCP/IP swamp-land, great unknowns populated with backdoor wireless access-points, keylogger-infected laptops, and wide-open hosts.”

I promote “security assessments” that try to improve the situation. The black box testing isn’t like traditional pentesting. You can figure most of that stuff out with an analysis of the network, apps and configuration. The black box testing is to check/generate the model of network behavior, find deviations, and determine what an uninformed attacker has to work with. I find more important data just looking at the configuration, the apps used and the design. And the network won’t go down doing it.

Nick P December 14, 2012 2:38 PM


I was looking at the 2012 ACSAC papers. They had some good ones. Here’s a few highlights for readers.

BetterAuth Web Authentication

Seems better than SRP. I know Clive is a fan of SRP so I’d like to hear his thoughts on this one. Easy deployment is one of its key advantages: browsers can support it natively in the future, but with JS in interim.
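BetterAuth and SRP are full protocols, but the core idea both share, proving knowledge of the password without ever transmitting it, can be sketched in a few lines of stdlib Python. To be clear, this is NOT either protocol, just a bare HMAC challenge-response illustration; a real design also needs salted verifier storage, mutual authentication, and protection against offline guessing.

```python
import hashlib
import hmac
import os

def respond(password: bytes, challenge: bytes) -> bytes:
    """Client proves knowledge of the password without transmitting it."""
    return hmac.new(password, challenge, hashlib.sha256).digest()

def verify(stored_password: bytes, challenge: bytes, response: bytes) -> bool:
    """Server recomputes the expected response; constant-time compare."""
    expected = hmac.new(stored_password, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)   # fresh random challenge per login defeats replay
resp = respond(b"correct horse", challenge)

print(verify(b"correct horse", challenge, resp))   # True
print(verify(b"wrong password", challenge, resp))  # False
```

The point of the fresh per-login challenge is that an eavesdropper who records one exchange cannot replay it, since the next login uses a different challenge.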

JSand – Client-side sandboxing of third party JS w/out browser extensions

Pretty much what the title suggests. Combine with technologies like Native Client and RockSalt. We may not be able to build a totally secure web platform; however, we can combine pieces of different projects to tackle most of the issues. Most practical approach.

Separation Virtual Machine Monitors

My favorite. Long time readers know I advocate high assurance software and pushed separation kernels in the past. I also critiqued entities like Green Hills and NSA, saying an EAL6+/7 SK running on x86 processors, with their known level of quality, has no valid security claim. Well, govt seems to have finally agreed and some of their people are trying to determine exactly how secure a COTS VMM can be made. And still be practical.

The good news is the Xenon project pretty much did it. They refactored Xen and modified the design. They have stronger MAC enforcement with less code. They put various protection measures in place, yet their codebase is less complex and just as fast. They maintain compatibility with Xen management tools (I think) and complex guest OS’s. They run real production software to make sure of it. They’re further breaking down dom0 and simplifying the code. All in all, they’re doing great work and I see something like this having impact in commercial circles. (QubesOS would benefit greatly from this, assurance-wise.)

Clive Robinson December 14, 2012 7:27 PM

@ Nick P,

A trip over to the UK’s Camb Labs website might provide you with some extra reading 😉



The work Robert Watson and colleagues are doing interests me because they recognise the “gulf” I occasionally go on about between software and hardware security. For various reasons I don’t agree with some of the methods (I find them inefficient and potentially problematic), but I find the aims and objectives make for good alternative viewpoints.

A second thing to look at is,


As we know, Authentication (AuthN) is something the industry tends to talk about a lot, but the conversation almost always goes around in circles and frequently we are left with the lowest common disaster: “passwords”. AuthN is a fundamental issue for ICT system use, which of necessity is the precursor to Authorization (AuthZ).
Why we almost always end up with “passwords” or “tokens”, or the worst of the lot, biometrics, is an interesting side issue in itself, especially as all of them are known to have failed not just once but repeatedly, no matter what we appear to try.

That is, we appear to be skating around the edge of a fundamental problem that nobody really wants to address. Well, Joseph Bonneau has dared to speak its name, and it’s “machine learning”: contrary to what we want, AuthN is not a “black or white” binary problem; it’s much harder and almost fuzzy in nature, especially since our “access methods” are much, much broader than they once were.

The problem with machine learning and individuals however is it has some fairly significant privacy issues.
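The fuzzy, non-binary view of AuthN can be sketched as a risk score rather than a pass/fail gate. In the toy Python below, the signal names, weights, and thresholds are all invented for illustration (a real system would learn them from data, which is exactly where the privacy issues creep in):

```python
# Illustrative soft signals and weights; not from any real deployment.
WEIGHTS = {
    "password_ok": 0.5,
    "known_device": 0.2,
    "usual_location": 0.2,
    "typing_pattern_match": 0.1,
}

def auth_score(signals: dict) -> float:
    """Combine soft signals into one confidence score in [0, 1]."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def decide(signals: dict, threshold: float = 0.7) -> str:
    """Output a graded decision, not a binary pass/fail."""
    score = auth_score(signals)
    if score >= threshold:
        return "allow"
    if score >= 0.5:
        return "step-up"   # "seems authentic so far": ask for a second factor
    return "deny"

print(decide({"password_ok": True, "known_device": True}))    # allow
print(decide({"password_ok": True}))                          # step-up
print(decide({"known_device": True, "usual_location": True})) # deny
```

The middle “step-up” band is the part binary systems cannot express: the session is neither trusted nor rejected, merely suspicious enough to demand more evidence.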

Nick P December 14, 2012 8:13 PM

@ Clive Robinson

Re Authentication

It was a good perspective, esp. machine learning comparison. I’ll add another problem with machine learning: it’s unreliable. I think we tend to think in binary terms because we WANT a binary RESULT: pass or fail. The result “seems authentic so far” is one many people would rather not even think about, much less design their infrastructure around. Now, many are getting a reality check.

I agree with you, though. I think the authentication mechanisms people use just suck. Certain authentication strategies work with very high probability for certain use cases, then there’s the rest which go mainstream. 😉 I think problems with authentication are simply evidence that modern systems need a Trusted Path, including over the network, more now than ever. There are already research prototypes for putting a trusted path from user to apps on an untrusted OS. There’s remote attestation, better authentication, etc. on untrusted OS’s, too. Someone just needs to put them together the right way and make tools that automate the drudgery of it so the average developer can use it.

re CFP: Runtime Environments, Systems, Layering and Virtualized Environments

The name alone gives me a reaction: I love it! The funny part is that we figured out many important security techniques in those categories in the 1970’s-1990’s. Just gets forgotten. Then remembered again. Then evolved (or devolved).

RESOLVE 2012 was interesting. There were some good papers. The CHERI processor project, TrustedBSD and Capsicum were all mentioned in papers there. I can’t recall the others right off the top of my head. Can’t wait to read the papers for that year. (Or maybe I should write one.)
