Friday Squid Blogging: Argentina Attempts a Squid Blockade against the Falkland Islands

Yet another story that combines squid and security.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on January 13, 2012 at 4:19 PM • 45 Comments

Comments

Daniel January 13, 2012 5:50 PM

Manufacturers have developed a machine that can decode a person’s entire genome for less than $1,000 and within 24 hours.

http://abcnews.go.com/Technology/wireStory/company-announces-low-cost-dna-decoding-machine-15332169

It’s probable that handheld devices that can sample and decode within a few hours will appear on the market within the next five years.

While the article places emphasis on the medical aspects of the breakthrough, it’s worth contemplating how these rapid advances will change security when every LEA and even rent-a-cops will have one in their possession. Will eye scanning or facial recognition even be necessary? Is there a science to faking a genetic test?

MikeA January 13, 2012 6:54 PM

Is there a science to faking a genetic test?

At least two police-procedural TV shows have covered this topic. The first step is to be born a chimera….

More seriously, the real upcoming issue with DNA evidence is going to be how simple it is to plant it, given access to the ever-growing sample library. This contrasts badly with how hard it is to convince a jury of CSI watchers to pay attention to actual science from expert witnesses (assuming non-bent ones can be found).

Maximal scariness comes from “designer plagues” targeted on genetic commonality among members of an unpopular ethnic group.

A blog reader January 13, 2012 9:37 PM

At Freedom to Tinker, James Grimmelmann conversed with Professor Jonathan Zittrain about “gated community” app stores, including issues of security (i.e. sandboxing combined with code review) and control (i.e. remote removal of software from a user’s system.) Among other things, Zittrain mentioned a security idea wherein a computer system would have two virtual machines: a “green” one for important data and a “red” one which would be for questionable software and which could be easily reverted to a safe state. The “green” virtual machine (and possibly also the “red” virtual machine) would not require third-party approval for software.

Among the various factors that influence official policies, one might question whether concerns over “turf,” or avoiding adverse outside attention or publicity, may have an effect. David Ross, who was formerly on a school board, mentioned a case in Saugus, California where a junior high student was suspended for taking a bag of marijuana home to his parent after receiving the bag from a scared “friend.” Even though the parent had notified the Los Angeles County Sheriff’s Department, school policy required that the marijuana be turned over to a school staff person. A more recent issue is whether New York City transit bus drivers are prohibited by policy from calling 911 on the job (it may be that drivers are supposed to notify a dispatcher via radio instead). Among the concerns expressed was that the policy might make it harder to obtain information from witnesses in the event of a vehicle collision where persons were injured, for example.

A blog reader January 13, 2012 9:59 PM

As hardware devices go, Arstechnica has mentioned a drive enclosure for laptops that allows a micro-SATA hard drive to be used as a removable and automatically encrypted storage device. For the key, a USB dongle is used, with the option of also requiring the user to enter a password.

Oz January 13, 2012 11:46 PM

The EFF appears to have a new SSL cert. I had to look twice when it appeared that they had a new owner, making it look as though they were hacked:

See the certificate details here:
https://imgur.com/LkdYH

The EFF appears to be running copyright-watch.org, though, so I guess it’s just an anomaly. I mention this because I use SSL everywhere, so that warning came up when I visited this blog and I suspect I’m not the only one who saw it.

Nick P January 13, 2012 11:54 PM

@ A blog reader

“The encryption module is available in 128-bit electronic codebook (ECB) and 256-bit ECB or cipher-block chaining (CBC) versions. ”

It includes ECB modes? Are you kidding me? The fact that ECB leaks information invalidates the very properties the encryption is trying to create. It should be CBC, counter mode, or XTS. It’s also certified only to the lowest FIPS 140-2 level, which requires no tamper resistance. I’ll pass.
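
(To make the ECB point concrete, here is a minimal Python sketch using the `cryptography` package; the key and plaintext are made up for the demonstration. Identical plaintext blocks come out as identical ciphertext blocks under ECB, which is exactly the structure leak being objected to.)

```python
# Why ECB mode leaks: identical 16-byte plaintext blocks encrypt to identical
# ciphertext blocks, so patterns in the data survive encryption. CBC with a
# random IV does not have this problem.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                    # 256-bit AES key, made up for the demo
plaintext = b"ATTACK AT DAWN!!" * 4     # four identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb = encrypt(modes.ECB())
cbc = encrypt(modes.CBC(os.urandom(16)))

split = lambda ct: [ct[i:i + 16].hex() for i in range(0, len(ct), 16)]
print("ECB:", split(ecb))               # four identical ciphertext blocks
print("CBC:", split(cbc))               # four different ciphertext blocks
```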

Clive Robinson January 14, 2012 2:43 AM

@ A blog reader,

A more recent issue is whether New York City transit bus drivers are prohibited by policy from calling 911

That is probably the case, but it is probably for health and safety reasons, and is thus another example of the law of unintended consequences…

I don’t know about NYC transit, but in most other places drivers are banned from using or even having mobile phones in their cab/driving area.

If you think back, Bruce covered one such occurrence where not only were phones prohibited, but passengers were encouraged to report any usage they saw. And to stop passengers confusing phones with radios, the radios were put in bright orange cases…

Clive Robinson January 14, 2012 4:10 AM

@ Daniel,

Will eye scanning or facial recognition even be necessary? Is there a science to faking a genetic test?

The simple answer to your two questions is yes and yes.

With regard to iris and facial recognition, they can be done now relatively reliably with a smartphone app communicating with a back-end online DB that does the identification.

A current article on facial recognition that also mentions the improvement rate given by NIST is,

http://money.cnn.com/2012/01/13/technology/face_recognition/index.htm?source=cnn_bin

The DNA system will also require a back-end online DB to work, so effectively the costs of these two parts cancel out, leaving the main cost differences in the front-end hardware and the communications link. It is easy to see that the DNA system will be an additional cost on top of a mobile phone, and that economies of scale will always ensure that the camera inside the phone costs less than the DNA sampling hardware.

As for faking the results of iris and facial recognition, there are two approaches: masking and simulating. That is, making your identifying features unavailable, or making your features appear to be somebody else’s. Masking surface features is fairly easy to do with contact lenses, clothing and makeup prosthetics, and if you really are crazy enough you can get your eyes tattooed.

Simulating is possible, and it breaks down into making you not you, and making you somebody else. Making you not you, i.e. disguising your features sufficiently that you cannot be identified, is easier than making your features appear to be somebody else’s, which requires not just knowledge of the other person’s features but fairly detailed knowledge of the way the identification system works. Full-face prosthetics are known to have worked for getting through passport control and, provided the materials used respond in a similar way to the cameras used by the system, may well work here too.

As for faking DNA, yes it’s possible. There are two basic ways: fake the actual DNA (difficult / time consuming) or fritz the test (requires good knowledge of the testing procedure).

Firstly, you need to accept that over a period of time your body effectively replaces most of itself, the obvious examples being skin, hair and blood. This is why people’s DNA is known to change after bone marrow transplants… The question then becomes one of getting the right bone marrow, having the required surgery (really not nice) and allowing sufficient time for the change to occur.

The other way is to fritz the test. All testing processes have weaknesses; all you have to do is find them and know how to exploit them.

Years ago I worked out there was a flaw in the basic DNA testing procedure that would allow you to contaminate a crime scene with duplicated fragments of another person’s DNA. Predictably I was told by various “experts” it could not be done; then a couple of years later a professor in Australia demonstrated the process and published the results. The upshot is that the DNA testing process SHOULD now have changed… But as it’s much more expensive to do it the new way, guess what: the last time I checked, many places were still doing it on the cheap (basic “free market” economics, the race for the bottom).

I’ve had an interest in breaking biometric systems for many, many years. Over forty years ago, as an experimenting youngster, I worked out how to make fake fingerprints using the red wax from Edam cheese to make a mold, a commonly available spray-on oil as a mold release agent, and rubber-solution (latex) glue to make the fake skin. Years later, when working as an electronics engineer for a company that made security devices that later included fingerprint scanners, I pointed out that I knew how to make false fingerprints. The experts predictably told me I did not know what I was talking about; I didn’t shut up or back down, demonstrated that I did know what I was talking about, and shortly thereafter found myself looking for new employment.

I’ve since gone on, as a hobby, to work out the defects in other biometric and evidence systems and how to exploit them. The one I’ve not broken to my satisfaction is retina scanning, mainly because I wouldn’t let one of those devices anywhere near my eyes, as I don’t consider them to have been sufficiently tested to be safe.

Nick P January 14, 2012 1:14 PM

—–BEGIN PGP SIGNED MESSAGE—–
Hash: SHA1

@ Clive Robinson

Good catch. Seems like I’m going to have to redesign some things. I think this speaks volumes for my multi- and poly-cipher proposals of the past (which I’ve used for the most critical designs). Had they been in use, these attacks might have been prevented or rendered useless. Might. It shouldn’t be a problem to implement multiple encryptions as fast as some of these ciphers (e.g. Salsa20) are.

One idea I had was just using a hardware-assisted, superfast cipher in disk encryption. Either a stream cipher or a block cipher acting like one. Each atomic chunk of HD data is encrypted with its own unique key. The actual key is constructed from two parts: the master key & a CRNG-made public key, hashed together appropriately. For security enhancement, the master key can be produced by PBKDFs, onboard TRNG, tamper-resistant hardware, dongles or any combination relevant to the likely attacks & amount of security desired.

The security will come from the fact that the stream cipher shouldn’t be leaking things like CBC does & a new public key is made with every re-encryption (which takes less time than might appear). If we add in something like SecureCore & it’s an IME device, then caching + plenty of RAM can be used to speed things up, as the RAM is also encrypted.

Clive, what do you think of my main proposal of using hardware-assisted ciphers in an IME with CRNG/TRNG-generated private keys?

—–BEGIN PGP SIGNATURE—–
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJPEdP9AAoJEFvQ0aBVJJxW7i4IAI+91u+98OkT9gzuP/qGtSr6
M2fRz50RDDekKKmhbwUTaOX8dEHyxJFUVKbzeqJEd5wjxZZtjzCuUySzyOUDkwrU
FM3TsjX7kxzZsN7AfiIrLDdzGARaTpW+hny13+PJAShrVvGzSub+O5D9bZ7Vv0h4
ynlH6VCsIsQKGiLTgrTLqDtBUldt138KFxw5WSAd1yGmtw6zi00p7nV6S9lWa77r
1JmmVJ8oRqfFjxLwK/b5J7YkxD/V6u61fzewMXk2p4yo4S9njewSx/XWts6inBlj
pTJUYsH/QroTNZyeGsRtYTVPacdY4RELXoD1uFjo0Wfijh18xTLTE7hltl19hYk=
=J4rh
—–END PGP SIGNATURE—–
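
(For concreteness, a rough Python sketch of the per-chunk keying idea Nick P describes above; the choice of HKDF-SHA256 and ChaCha20, and all the names, are my own assumptions rather than part of his proposal.)

```python
# Sketch: every disk chunk gets its own key, derived by hashing the master key
# together with a per-chunk random value; the chunk is then encrypted with a
# fast stream cipher. Re-encrypting a chunk just means picking a fresh value.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

master_key = os.urandom(32)        # in practice: PBKDF / TRNG / dongle-derived

def encrypt_chunk(index: int, data: bytes):
    chunk_rand = os.urandom(32)    # the non-secret, CRNG-made per-chunk value
    chunk_key = HKDF(
        algorithm=hashes.SHA256(), length=32,
        salt=chunk_rand, info=index.to_bytes(8, "big"),
    ).derive(master_key)
    nonce = os.urandom(16)
    enc = Cipher(algorithms.ChaCha20(chunk_key, nonce), mode=None).encryptor()
    return chunk_rand, nonce, enc.update(data)   # stored alongside the chunk

rand, nonce, ciphertext = encrypt_chunk(0, b"one atomic chunk of HD data")
```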

Clive Robinson January 15, 2012 12:48 AM

OFF Topic:

Amid the winter solstice celebrations, did anybody notice StratFor’s woes, supposedly at the hands of Anonymous?

For those that haven’t heard: briefly, StratFor has billed itself as the “shadow CIA,” and its paid-for membership includes quite a few senior NATO and other people. The details released on them include their names, credit card numbers and that little three-digit code used as a security check on credit card info, as well as an MD5 hash of each user’s access password. Apparently an off-the-cuff comment from a spokesperson for Anonymous, that they would use the CC info to make charitable donations, so tickled some members of the press that they called it a “Robin Hood” gesture (rather than the “robbing blind” act it would be).

Well, on the ITSec side, people have started doing some analysis of the password data, and it makes interesting reading.

One site I drop in on occasionally, in an article,

http://nanoexplanations.wordpress.com/2012/01/13/password-analysis-from-the-stratfor-hack/

had this to say,

This opportunity is the list of 860,000 (MD5 hashed) passwords to accounts of people in journalism, government contracting, the military, etc. — in short, people who “should know how to create and maintain strong passwords.” Most of the MD5 hashes have now been cracked, and preliminary analysis indicates that even people who “know what they are doing” use weak passwords.

The article is quite an interesting read in and of itself. For instance, it indicates that many of the site’s “auto-generated” account-opening passwords were still in use and were themselves weak; that is, they were what some people call ‘Camembert passwords’ (just like the French cheese, they look solid at first glance but in reality are weak and yield with minimal effort when put under pressure, giving rise to an awful stink and mess that is difficult to clean up).
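
(For anyone wondering why unsalted MD5 hashes fall so quickly, here is a toy Python sketch of the kind of dictionary attack involved; the wordlist is made up, and the hashes are simply the well-known MD5 values of “password” and “123456”.)

```python
# Unsalted, fast hashes make dictionary attacks cheap: one MD5 computation per
# guess tests that guess against every leaked account at once.
import hashlib

leaked_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",   # md5("password")
    "e10adc3949ba59abbe56e057f20f883e",   # md5("123456")
}
wordlist = ["letmein", "password", "stratfor", "123456"]

for guess in wordlist:
    digest = hashlib.md5(guess.encode()).hexdigest()
    if digest in leaked_hashes:
        print(f"cracked: {digest} -> {guess!r}")
```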

Now, I know quotes from StratFor have appeared on this blog from time to time, usually with regard to AQ / OBL and global terrorism, but no, I don’t know if Bruce or any of the blog posters are paid-up members. However, at the bottom of the article is a link to a page by Nick Selby, a journalist who is, and his response to StratFor’s woes is a priceless read,

http://policeledintelligence.com/2012/01/03/with-that-revealing-shirt-he-was-just-begging-to-be-hacked-blaming-the-victim-in-the-stratfor-hack/

Daniel January 15, 2012 1:02 AM

@Clive

While I found your other comments interesting, I disagree with your opinion about facial recognition. The article you linked to was very disingenuous. It’s worth noting that despite all the so-called improvements in facial recognition technology, there has yet to be a legal case in the USA where that evidence has proved decisive. In fact, the FBI’s official opinion, to which their experts have testified in court, is that automated facial recognition technology is not competent.

The fundamental truth is that 2D facial recognition will never be acceptable, especially when the person is actively trying to foil it. I don’t know where the article got its stats from, but they are just plain wrong.

Clive Robinson January 15, 2012 1:36 AM

@ Nick P,

“Clive, what do you think of my main proposal…”

I’ll have to give it some thought, but right now I’m waiting, as I have been for the past few hours, for the power to come back on to recharge the UPSes before I bring the computers back up…

I’m not sure what the cause of this power cut (outage) is, but in the UK we have recently had quite a severe spate of “cable theft”, presumably for scrap value, by organised and well-prepared criminals (so not your usual church-roof lead thieves, unless they’ve been to night school ;), who are not worried about grabbing hold of live, very high voltage AC (50,000V and up) overhead cables and driving off with a kilometer or so of it in a very short time.

The “stolen scrap metal” problem is getting so bad that even the “Desperate Business” cartoon in Private Eye magazine has made fun of it, with a scrap yard owner being interviewed by the police about the fact he had the “Angel of the North” on his heap.

[The “Angel of the North” is an Antony Gormley statue made of 200 tonnes of steel embedded into the rock of a hill above Tyneside, and it’s possibly his worst work on public display. Butt ugly as it is, it has been named an “Icon of England”: http://en.m.wikipedia.org/wiki/Angel_of_the_North ]

Clive Robinson January 15, 2012 6:48 AM

@ Daniel,

The article you linked to was very disingenuous. It’s worth noting that despite all the so-called improvements in facial recognition technology, there has yet to be a legal case in the USA where that evidence has proved decisive.

You may be right on both points; however, the halving of the false-positive rate every two years has been quoted in the past.

I tend to look at facial recognition in the same way I do photo and security video recognition, both of which I view as using the same process as fingerprint recognition.

That is, a forensic scene-of-crime officer collects the physical fingerprint evidence from the scene in a recognised and approved way and produces a document cataloguing it into the evidence chain (so far so good, in most cases).

This image is then examined in some way to link it to one or more probable people, whose fingerprint records are pulled, and a human then does a visual comparison between the scene-of-crime print and the probables’ records.

The human doing this comparison is the most fallible part of the system, which is why there has been investment in the likes of AFIS technology.

However, in the UK fingerprints are now losing their “evidentiary value” because lawyers are beginning to question them after a number of cases that have shown the system to be badly broken. One of the most prominent involved a police officer who was found guilty of providing false testimony (she claimed she had at all times been outside the actual crime scene, when her elimination prints said she had been touching items in the scene). Through various trials the fingerprint examiners provided only tiny bits of evidence; when finally forced to reveal all of it, it was obvious to anyone examining it that they had been, at best, very badly inept.

The simple fact is that the court systems are not interested in primary evidence, just the documents that support it. Judges also get very upset when defense counsel cross-questions “expert witnesses,” because they are supposed to be “impartial officers of the court” and thus permitted to enter “opinion,” which is effectively the equivalent of “hearsay evidence.”

Thus I can easily see facial recognition getting the same faux gold-standard status that fingerprints and DNA had, and that the now totally discredited “witness identification line-up” once had, because in essence they all rely on the same fallible method of human interpretation that cannot be tested and is easily gamed.

And, for all the rhetoric, at the end of the day modern court systems are not about justice or even the pretence of it; they are sausage machines designed to get defendants through the system as quickly and cheaply as possible, with only the theater of “justice being seen to be done,” for political or monetary reasons or both. Thus, arguably, many people go to jail because they are the cheapest and most convenient to put there; the fact that this quite often coincides with the actual criminals might be more luck than judgment, or stupidity on the criminals’ behalf.

Put bluntly, the area of policing that works is “ears and eyes on the ground”; criminals appear, in the main, incapable of staying hidden. In the UK something like 90% of lower-level crime is actually solved because the criminals show off in some way, either “bigging it up” in front of their mates verbally or going out and buying flash gear because they’re flush. Their mates, or those on the edges, see this, a name gets back to the ears and eyes, and suddenly flash Johnny is having his nice new shiny suit collar felt whilst facing a bunch of simple questions he can’t answer.

It is only in circles of non-criminals who do not have any association with law enforcement that the notions of justice and innocent-until-proven-guilty have any credibility any longer.

Daniel January 15, 2012 10:21 AM

@clive.

I agree with your analysis of the court system entirely. I could not have said it better myself, and that’s a compliment.

Yes, the error rate in facial recognition has improved dramatically, but it’s not at the low level the article claims. The rate I’ve seen reported in reputable places is about 3%-5%. Moreover, that rate is under ideal laboratory conditions. In a real-world scenario it drops into the 70-90% range, depending on a wide variety of factors, assuming any match is possible at all.

The real debate is just how good a match is good enough. If one is scanning the faces of a crowd for a person one thinks is planning a terrorist act, some people would argue that you round up the matches and ask the difficult questions later. So I suppose that in terms of maintaining crowd control or nabbing suspects, facial recognition might still play a role. But I think that in other ways it will be supplanted in the near future, such that one’s DNA is encoded on one’s driver’s license and the officer of the law has a little machine that samples you and spits out a result.

I think that no matter how good facial recognition’s error rate becomes, for certain purposes we will develop methods that are better, faster, cheaper, as the mantra goes.

MW January 15, 2012 5:20 PM

@MikeA (#2):

I don’t think genetically targeted “ethnic-cleansing” viruses are realistic. I don’t think there are any genetic markers which are at all reliable at identifying a racial group. If you did identify such a marker, you’d need to design a virus which was dependent on this marker, even though the marker is highly unlikely to be directly relevant to the process of infection. If you did design such a virus, it would soon mutate to not be limited by the genetic marker, because doing so would be an evolutionary advantage.

I read an SF story based on this premise. Politicians are keen to (secretly) use a scientist’s discovery to solve world hunger by depopulating the 3rd world while having it look like a natural disaster. The scientist designs a virus to target himself, and infects himself and the politicians with it. As he is of similar genetic background to the politicians, this gives an expected 75% or so death rate for them. He then says that the survivors may end up with a different perspective on the depopulation plan.

Nick Selby January 15, 2012 5:43 PM

Thanks for the link. Just for a note; I’m not a journalist, I am an information security professional, specializing in incident response and cyber investigations. I’m also a police officer.

Nick P January 15, 2012 6:14 PM

“Just for a note; I’m not a journalist, I am an information security professional, specializing in incident response and cyber investigations. I’m also a police officer. ”

And posting that irrelevant information for advertising purposes, apparently. 😉

Toby Speight January 16, 2012 8:44 AM

Minor nit: the title of this squid thread says “blockage” but I’m sure that “blockade” is nearer the truth. Mind you, a squid blockage of the entire island group would certainly make news…

Chris January 16, 2012 2:39 PM

Just got an email from Zappos.com

Excerpt:
#############
We are writing to let you know that there may have been illegal and unauthorized access to some of your customer account information on Zappos.com, including one or more of the following: your name, e-mail address, billing and shipping addresses, phone number, the last four digits of your credit card number (the standard information you find on receipts), and/or your cryptographically scrambled password (but not your actual password).
#############

I wonder how strong their encryption was on that password table…

Clive Robinson January 16, 2012 3:45 PM

OFF Topic:

ZAPPOS INFO.

It would appear that Zappos has turned off its phone lines (for the customers’ benefit) and has likewise made the blog to which their CEO posted about the security incident unavailable to many.

However, if you are potentially one of the 24 million people affected, Sophos has kindly dug out the blog message info and put it up on their Naked Security site,

http://nakedsecurity.sophos.com/2012/01/16/zappos-data-breach/?utm_source=facebook&utm_medium=status%20message&utm_campaign=naked%20security

Clive Robinson January 16, 2012 4:04 PM

OFF Topic:

Why you can’t always believe the malware stats you read.

Richard Clayton, over at the UK’s Cambridge Computer Labs, has put up an explanation of why “malware stats” often cause cognitive dissonance based on our everyday experience.

http://www.lightbluetouchpaper.org/2012/01/12/beware-of-cybercrime-data-memes

In effect it’s a new version of “Chinese whispers,” where the numbers might be the same but the wording around them is changed, so the interpretations change, sometimes dramatically.

It’s one of the reasons I go on about access to raw data and the assumptions behind the measurement methods used to derive the data…

Clive Robinson January 16, 2012 5:02 PM

@ Bruce (and others 🙂

Microsoft’s sometimes controversial researcher Cormac Herley and Carleton University’s (Ottawa, Canada) Prof. Paul Van Oorschot have co-authored a paper about passwords,

Acknowledging the Persistence of Passwords

Which has been published in the IEEE Security & Privacy Magazine. Cormac has put his “author copy” up as a PDF on Microsoft’s research web server at,

http://research.microsoft.com/pubs/154077/Persistence-authorcopy.pdf

Essentially, they argue (for reasonable reasons) that passwords are here to stay for a considerable period of time, and that we should simply accept this and get on with actually improving the security involved.

Nick P January 16, 2012 5:06 PM

@ Clive Robinson on Password Paper

That might be true. However, there are other researchers showing that they can largely be replaced by technologies that may be more secure or convenient. There are password managers, Cambridge’s Pico concept, OpenID-like services, smart cards, etc. The Pico paper has strong arguments against passwords & in favor of a secure device that manages passwords or other authentication for you. My transaction appliance tried to do that, as well.

Clive Robinson January 17, 2012 7:36 AM

@ Nick P,

However, there are other researchers showing that they can largely be replaced by technologies that may be more secure or convenient.

Yes and no. Anything involving an aide-mémoire is technically not a password but a token. That is, it’s no longer “something you know” but “something you have.”

And as we know from long experience, tokens, be they bits of paper, SecurID tokens, pocket computers, smartphones, etc., have their own security problems. More importantly, from the user perspective they represent a very weak link in the authz/authn chain they are being used in, in that if they are not at hand when required, the chain is effectively broken and (at least in theory) the user cannot get access or complete a transaction, very much to the user’s annoyance (and, being human, they won’t blame themselves but the technology in some way).

The other, and possibly primary, reason passwords are still in existence is the “negligible direct cost” of implementation.

I and others realised before the turn of the century that the systems banks were using for online banking were completely and utterly hopeless from a security perspective, and said as much. However, we had to put up with being told that we didn’t (1) know what we were talking about, (2) know what the real risks were, (3) understand the way banking worked… and a whole load more guff, every step of the way. Fifteen or more years later, only some banks use two-factor authentication, and none (that I’m aware of) fully authenticate each and every transaction in a secure way.

And to be honest, they were right in one respect (3): we didn’t understand the way the banks work, laying off risk onto others rather than designing sensible systems…

It’s interesting to note just how different the banks’ approach to online system security is in jurisdictions where they are forced to take on some or all of the risk. It is a lesson the UK and US legislators really should wake up to and act upon in a clear and non-negotiable manner.

If they did, then certainly the sort of tokens that you and I have talked about in the past would become a cost-effective solution within an inordinately short period of time, simply due to the market forces involved. And thus there would be an opportunity to move “passwords” out into a more secure ring, though they would still be there as a word or phrase to control access to the token.

Clive Robinson January 18, 2012 10:18 PM

OFF Topic:

@ Bruce, Nick P, RobertT,

For those with an interest in EmSec, especially to do with “secure chips,” Theodore Markettos of the UK Cambridge Computer Labs has in the past month put up his PhD thesis,

Active electromagnetic attacks on secure hardware

on line at,

http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-811.pdf

Even for the not technically inclined, the first part makes an interesting read. The whole report, however, is very interesting, as it goes into a lot of the “experimental issues” of building and using equipment to do leading-edge academic EmSec research.

RobertT January 19, 2012 12:36 AM

@Clive R
“For those with an interest in EmSec, especially to do with “secure chips,” Theodore Markettos of the UK Cambridge Computer Labs has in the past month put up his PhD thesis…”

Thanks for the link. Unfortunately the download is VERY slow, and failed halfway through. It contains some interesting information, but to be honest, “nothing new.”

I’d definitely encourage Theodore to gain a better understanding of the tools and methods used within the semiconductor failure analysis community, because he is trying to reinvent tools that are easily rented by the hour at any competent FA lab.

If he wants to learn about some leading-edge exotic stuff, then I’d suggest he read up on “Near-Field Optical Microscopy.”

This is an offshoot of Atomic Force Microscopes, but has the advantage that the light can be used for local signal injection.

He should also study IBM’s released information on an optical circuit-debug technique called PICA; it’s very relevant to what he wants to do.

kashmarek January 19, 2012 12:07 PM

Brazil & World Games…

http://www.bitrebels.com/technology/2014-world-cup-will-test-robocop-facial-recognition-technology/

So, what metrics will that face recognition database have? That is, whose faces will be pre-recorded for recognition by the “machine vision glasses”?

That is the problem with terrorists. There is no database available with terrorist faces, fingerprints, voice prints, ear scans, gait data, or any other information (DNA) that can be reliably used for such on-the-spot recognition. And if a terrorist is in the database, they will just use somebody who ISN’T.

So, the basic question becomes: whose face are you going to recognize? Someone you already know (who won’t be there) or someone that you can’t recognize (because you don’t have data on them)?

Are these “machine vision glasses” being worn by real cops or by machines (RoboCops)? Maybe you can’t attend the games if you AREN’T in the database.

LinkTheValiant January 19, 2012 3:22 PM

So, the basic question becomes: whose face are you going to recognize? Someone you already know (who won’t be there) or someone that you can’t recognize (because you don’t have data on them)?

Or you do neither, deploy the system anyway, and point to it when it fails to prevent an attack and proclaim loudly “But at least we DID something!”

However, there are other researchers showing that they can largely be replaced by technologies that may be more secure or convenient.

As Mr. Robinson notes, this merely transforms it to a token, reducing the system to one-factor at best.

Like it or not, passwords/phrases or PINs are here to stay in one form or another. There is nothing else that counts as “something you know”; anything memorized falls into this category. The only way to fix this is to come up with a category besides something one is, knows, or has. And that’s not possible.

Two-channel authentication is a strong fix, but not a total one, mainly because it’s nothing but “something you have” in a dressed-up form.

Clive Robinson January 20, 2012 9:03 AM

@ LinkTheValiant,

The only way to fix this is to come up with a category besides something one is, knows, or has. And that’s not possible… Two-channel authentication…

I’ve given some thought to “something you know” and “something you are” over the past decade or so.

Instead of the usual typed, text-based password/phrase, there is also the idea of presenting a grid of some kind on a display, where the user selects the grid square with meaningful content and types its number. They do this four or more times in succession, with different meaningful information in each grid presentation. Only when they have entered the required number of grid numbers are their responses checked.

So for a picture-based system, the user’s “mental phrase” is “Lincoln played the violin to his cat on the black chair.” The meaningful pictures being,

Presidents{Lincoln}
Instruments{violin}
Animals{cat}
Colour{black}
Furniture{chair}

These can be presented in any order, and importantly each actual choice can have multiple pictures, from different angles etc. The choices can also be further refined by the user; for example, the options for Lincoln could be,

1, head looking left
2, head looking right
3, standing looking left
4, standing looking right
5, seated looking left
6, seated looking right
7, statue looking left
8, statue looking right
9, painting looking left
0, painting looking right

So two pictures of Lincoln may be presented in a grid, and the user picks the right one.

Presidents{Lincoln}{head looking left}
Instruments{violin}{held with bow}
Animals{cat sitting}
Colour{black left half of grid}
Furniture{chair facing left}

As many people are “visual” rather than “textual,” they may well remember longer or more complex sequences of pictures more easily than a text string containing numbers and random punctuation and capitalisation.
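
(A minimal Python sketch of how a verifier for the grid scheme described above might work; the category names, grid sizes and the constant-time comparison are illustrative assumptions, not a specification.)

```python
# Sketch of the picture-grid idea: each round shows one category's images in a
# randomly shuffled, numbered grid; the user types the number of "their" image;
# the responses are only checked after the final round.
import hmac
import secrets

SECRET_SEQUENCE = ["Lincoln", "violin", "cat", "black", "chair"]   # mental phrase
CATEGORY_IMAGES = {
    "Presidents":  ["Lincoln", "Washington", "Jefferson", "Adams"],
    "Instruments": ["violin", "piano", "drum", "flute"],
    "Animals":     ["cat", "dog", "horse", "owl"],
    "Colours":     ["black", "red", "green", "blue"],
    "Furniture":   ["chair", "table", "lamp", "desk"],
}

def present_round(category):
    """Shuffle one category's images into numbered grid cells."""
    cells = list(CATEGORY_IMAGES[category])
    secrets.SystemRandom().shuffle(cells)
    return cells                                   # cell i shows image cells[i]

def authenticate(read_choice):
    """read_choice(category, cells) returns the cell number the user typed."""
    entered, expected = [], []
    for category, secret_image in zip(CATEGORY_IMAGES, SECRET_SEQUENCE):
        cells = present_round(category)
        expected.append(cells.index(secret_image))
        entered.append(read_choice(category, cells))
    # all rounds are collected before any checking, then compared in constant time
    return hmac.compare_digest(bytes(entered), bytes(expected))

# Simulate a user who knows the phrase (a real UI would display the pictures):
def correct_user(category, cells):
    secret = SECRET_SEQUENCE[list(CATEGORY_IMAGES).index(category)]
    return cells.index(secret)

print(authenticate(correct_user))                  # True
```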

I’ve also been thinking about biometrics in terms of some of their failings.

One big failing often cited is that you can’t change your biometrics, which matters for things like “duress codes.” But is this actually true?

Take the simple case of a hand scanner: you can put your hand on it in various ways, with fingers spread or not, or with some pattern of fingers spread, such as the “Vulcan blessing,” where the pattern is thumb/index spread, index/middle not, middle/ring spread and ring/little not. With four gaps between the fingers, a little practice gives you 16 “codes” on top of the biometric scan of just one hand.

For the more “geeky” you could combine the picture grid and finger spacing…

Thus, although you don’t get a new factor, you do combine it with something you know.
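
(A tiny Python sketch of the finger-spacing idea above: treating the four gaps of one hand as bits yields the sixteen codes mentioned, which could then be combined with the result of the hand scan itself. Everything here is purely illustrative.)

```python
# Four finger gaps, each either spread (1) or closed (0), give 2**4 = 16 codes
# on top of the same hand-geometry biometric.
from itertools import product

GAPS = ("thumb/index", "index/middle", "middle/ring", "ring/little")

def gap_code(pattern):
    """pattern: e.g. (1, 0, 1, 0) for the 'Vulcan blessing'."""
    code = 0
    for bit in pattern:
        code = (code << 1) | bit
    return code                               # value in 0..15

print(gap_code((1, 0, 1, 0)))                 # 10
print(len({gap_code(p) for p in product((0, 1), repeat=4)}))   # 16 distinct codes
```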

Clive Robinson January 20, 2012 2:50 PM

@ Bruce,

There is a back story to the Argentine squid blockade, and it might lead to another South Atlantic war.

The current Argentine president appears to be a woman with significant pretensions to being a cross between the “new Evita” and Imelda Marcos, with more pairs of shoes than any gal would need in ten or twenty lifetimes. Apparently there are jokes about her having the blood of a vampire, due to the amount of makeup she puts on before being seen in daylight.

Whilst that is how she is seen by many of her detractors, what is quite clear is that she is determined to get sovereignty of the Falklands away from the British.

Whilst this might appear to be a bit of populist patriotic nonsense, there is actually quite a bit of jingoistic behaviour behind it that Argentinians buy into. Back in the early days of the last UK Tory government, during the Thatcher premiership over a quarter of a century ago, the then Argentinian leader was in political trouble, and to buy populist support he invaded the Falkland Islands. The result was quite a few atrocities and a war.

But further behind this is the question of natural resources in the South Atlantic and Antarctic. Supposedly some of the largest fossil fuel reserves are down there. Currently the Antarctic is protected by international treaty, but this is due to come to an end in the very near future, which means it might well turn into a resource grab.

In the past, resources not directly in the territory of any country have been divided up by a rule based on miles of coastline, reduced by the distance of the coast from the resources.

So the Falklands are very likely to have a significant entitlement to any resources down in the South Atlantic / Antarctic.

Thus we may well see another attempt by Argentina to forcibly wrest sovereignty of the Falklands from the UK, very much against the wishes of the people who live there, who still live in dread of the Argentinian jackboot.

Clive Robinson January 21, 2012 7:54 AM

OFF Topic:

There has been some noise in the press about an article that has just been published,

Experimental Demonstration of Blind Quantum Computing.

“Quantum computers, besides offering substantial computational speedups, are also expected to provide the possibility of preserving the privacy of a computation. Here we show the first such experimental demonstration of blind quantum computation, where the input, computation, and output all remain unknown to the computer.”

(Ref – arXiv:1110.1381v1)

Put simply, the scientists concerned (Stefanie Barz, Elham Kashefi, Anne Broadbent, Joseph F. Fitzsimons, Anton Zeilinger, Philip Walther) appear to have worked out a way for you to send encrypted data to a quantum computer and for it to process the data in some way and return the results to you, without the computer, its operator or anybody eavesdropping on the communications path seeing what the real data is.

Thus, as some members of the press have put it, combining “the potential of cloud computing” with “the security of quantum cryptography.”

Most of the news items,

http://www.bbc.co.uk/news/mobile/science-environment-16636580

http://www.theregister.co.uk/2012/01/20/blind_quantum_computing_for_the_cloud/

either don’t link to the paper or point you to a paywall. However, one of the paper’s authors has a link (to the Cornell University preprint site) up on his personal web site,

http://www.qunat.org/personal.php?id=9

Or you can just grab the PDF from,

http://xxx.soton.ac.uk/pdf/1110.1381v1

And before anybody asks me any awkward questions about whether I actually think it’s secure (possibly not for practical implementations, as with most quantum crypto), please give me a little time to inwardly digest the paper so I can see if I can spot any holes in it.

Clive Robinson January 23, 2012 7:59 AM

@ Aaron Potaka,

Some technology about Josephson junction operating upto 100ghz at 1mV using superconducting substance

You forgot to include the 2005 NSA assessment of superconducting technology,

http://www.nitrd.gov/pubs/nsa/sta.pdf

Though the technology still appears limited by the simple fact that liquid helium cooling systems are neither cheap nor small, it is quite interesting. My main interest in it is for software-defined radio front ends, not general computation, which has some real issues that present difficulties.

If you look at this recent “overview” paper,

http://www.postreh.com/vmichal/papers/Superconducting_RSFQ_Logic_Radio.pdf

You will see it refers to the FLUX1 8-bit ALU/CPU made using RSFQ gates and clocked at around 20GHz. It had several problems to do with propagation through gates and down transmission lines, giving rise to significant synchronisation issues across the chip.

The solution used, “micro-pipelining,” has its own problems, which hark back to issues I had in the early ’80s with conventional ECL ALU/CPU design. The solution I came up with back then is still pertinent to RSFQ design: design a “serial CPU” (which I’ve mentioned before) to get the throughput and put up with the consequential increase in delay.

All that said, the research appears to have been quiescent on the general news front for a few years; however, articles pop up from time to time in the oddest places,

http://octopart.com/blog/archives/2011/10/the-far-limits-of-datacenter-compute-efficiency-and-rsfq

Chris Zweber January 23, 2012 12:10 PM

How likely is it that the US currently has the perfect lie detector to go alongside our silent helicopter?

It seems like a very solvable problem to me: hook people up to the best brain-scan technology available, tell them to lie, and feed that data into the best machine learning algorithms available.

How could we not already have this device?

Is there an innate human right to internal thought?
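
(For what the “feed the data into machine learning” step looks like in practice, a toy Python/scikit-learn sketch; the features here are random stand-ins for real brain-scan data, which is exactly why the cross-validated accuracy comes out near chance.)

```python
# Toy version of the proposed pipeline: label recordings as truth or lie, then
# fit and cross-validate a classifier. With featureless random data (used here
# as a stand-in for real scans) accuracy stays near 50%, i.e. chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 50
X = rng.normal(size=(n_subjects, n_features))   # stand-in for scan features
y = rng.integers(0, 2, size=n_subjects)         # 1 = lying, 0 = truthful

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```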

Some Mark January 25, 2012 1:24 AM

I am guilty of not reading any of the previous comments except for the last one, so hopefully I’m not going to look too stupid when I say this, but Chris Zweber’s comment caught my eye. The problem may indeed be solvable, but it’s not quite as easy as that. Even leaving aside the issue of whether “the best machine learning algorithms” are good enough, you have to be careful to train them on meaningful data. If you just tell someone “Lie to me now,” they are in a very different frame of mind than they would be in if they were actually trying to deceive someone. I repeat: I’m not saying it’s impossible. But you need a fancier setup.

Clive Robinson January 29, 2012 4:02 PM

OFF Topic:

Some of you may remember (or care to search back on this blog) that I’ve always had a bit of a downer on Quantum Key Distribution (QKD) systems, frequently pointing out that whilst theoretical security is nice, it’s practical security that counts, and we just cannot seem to get it right…

Well, some of you may remember far enough back that I identified one weakness: the equipment manufacturer could use “non-true-random” or deterministic generation of things like the polarizer settings to leak information to an observer (this is possible because a sufficiently complex sequence will still pass the statistical randomness tests). One such way is with a stream generator: you only need to steal a very small percentage of the bits to align a second generator, and thus know the polarisation of all the bits transmitted.
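
(A toy Python sketch of the weakness being described: if the polarizer/basis choices come from a deterministic generator rather than true randomness, anyone who knows, or built in, its internal state can reproduce every choice. The seeded PRNG here is a stand-in; the attack described above would actually resynchronise a second generator from a small fraction of leaked bits rather than from a shared seed.)

```python
# If basis choices come from a deterministic generator, the output can pass
# statistical randomness tests while remaining fully predictable to anyone who
# knows the generator's internal state.
import random

SEED_KNOWN_TO_MANUFACTURER = 0xC0FFEE

def basis_sequence(seed, n):
    gen = random.Random(seed)                     # deterministic stand-in PRNG
    return [gen.choice("+x") for _ in range(n)]   # '+' or 'x' basis per photon

device = basis_sequence(SEED_KNOWN_TO_MANUFACTURER, 20)      # inside the QKD box
observer = basis_sequence(SEED_KNOWN_TO_MANUFACTURER, 20)    # the eavesdropper
print(device == observer)                         # True: every basis predicted
```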

Well, it looks like a group in the maths department over at Royal Holloway (University of London) in Egham has just come up with the same basic idea, only they are proposing that the device stores information in memory to regurgitate later in the noise when the device is reused,

http://www.technologyreview.com/blog/arxiv/27522/?ref=rss
