Comments

posedge clock September 21, 2012 10:20 AM

Interesting, but is it practical?

There’s one other requirement that I believe to be necessary that is not addressed here: the method by which accountability is verified must be simple enough for those affected to understand.

Consider electronic voting. There have been many papers published describing various cryptographic methods for holding fair and verifiable elections. However, the crypto is beyond the understanding of the average Joe. He’d have to trust the cryptographers in order to trust the election. At that point, what’s the point? If you have to trust someone, you might as well trust the poll officials in the first place.

I’m from Canada. We vote with pencil and paper. The physical properties of paper ballots and ballot boxes are well-known, and the state of the vote can be perceived with one’s senses unaided. That’s verifiable and trustworthy. And anyone can understand it.

Fred P September 21, 2012 11:22 AM

Interesting, although in the example given, I’d be worried about information leakage: someone who wants to get on without a detailed search could use observations of which previous passengers were or weren’t searched as inputs to predict his own outcome.

Taoist September 21, 2012 1:23 PM

While they’re decent examples, I don’t think security should be handled by an algorithm, however fair the random picker is; doing so removes intelligence from the system in order to try to gain some sort of equality. Old ladies and toddlers are simply not as big a security threat as others, so why are we trying to be fair, and increasing our own costs, by strip-searching them? One of the first things any machine learning student learns is that discrimination is a good thing. That’s not to say we should discriminate based on race, but there are plenty of other factors that can be taken into account: destination, travel history, age, sex, physical condition, etc.

SparkyGSX September 21, 2012 1:35 PM

The system has several potential flaws, and the possible solutions mentioned in the comments below the second article all introduce new problems.

Several people suggest adding a timestamp to the “name” (actually, identifier) to be tested; this would introduce a flaw that completely destroys any accountability: all the Evil Government has to do is change the timestamp by a few seconds to change the output of the algorithm.

This, of course, assumes that the timestamp involved is the time the algorithm is executed for the passenger in question, meaning the exact moment the TSA agent scans or enters the information on their passport.
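To make the flaw concrete, here’s a toy Python sketch (my own hypothetical construction; the articles don’t specify the exact hash inputs or the selection rate):

```python
import hashlib

SECRET_KEY = b"the-days-secret-key"  # hypothetical daily key

def selected(passenger_id: str, timestamp: int) -> bool:
    """Hypothetical scheme: pick ~1 in 16 passengers from the hash's low nibble."""
    data = SECRET_KEY + passenger_id.encode() + str(timestamp).encode()
    return hashlib.sha256(data).digest()[-1] % 16 == 0

# If the timestamp is an input, the agent can simply wait (or back-date)
# a few seconds until the hash says whatever they want it to say:
t = 1348229000
print([dt for dt in range(60) if selected("JOHN Q PUBLIC", t + dt)])
# -> the offsets within one minute at which this passenger gets "searched"
```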

Also, I would say the whole system depends on the key size. If the key size is small, an adversary would only need a few observations of the algorithm (known inputs and known outputs) to determine the key by brute force. On the other hand, if the key is sufficiently large, the Evil Government could just select whoever they want, and figure out which key would generate the selection of that particular day.
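For the small-key case, the attack really is just exhaustive search. A toy illustration with a deliberately tiny 16-bit key (again a hypothetical scheme; real keys would be far larger):

```python
import hashlib
from itertools import product

def selected(key: bytes, passenger_id: str) -> bool:
    # Hypothetical scheme: pick ~1 in 2 passengers from the hash's low bit.
    return hashlib.sha256(key + passenger_id.encode()).digest()[-1] % 2 == 0

true_key = b"\x13\x37"  # toy 16-bit key; the observations are generated from it
observations = [(f"PASSENGER-{i}", selected(true_key, f"PASSENGER-{i}"))
                for i in range(24)]

# Enumerate all 65,536 possible keys and keep those consistent with what
# we observed; each one-bit outcome halves the candidate set on average.
candidates = [bytes(k) for k in product(range(256), repeat=2)
              if all(selected(bytes(k), p) == out for p, out in observations)]
print(candidates)  # after ~24 observations, almost always just [b'\x13\x37']
```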

Alternatively, the algorithm could include the number of the passport, serial number of the ticket, or a social security number (or foreign equivalent). In any case, it has to be something the Evil Government can’t choose or manipulate on that particular day.

On the other hand, the algorithm would have to include something that even the Evil Government can’t predict, simply because they know, in advance, who will be flying on a particular day; they could first decide who must be searched, and then decide what their “secret key” will be for that particular day.

If there were a way to prove that the secret key really was chosen by a fair random method, I guess it might just work.

Another way to make it work would be to publish a hash value of the secret key in advance (before the flights are booked, and thus before the government can know who will be flying that day), as a commitment to that particular key.
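Something like this, in Python terms (a minimal sketch; with a full-entropy 256-bit key a bare hash works as a commitment, otherwise you’d want to mix in a random nonce):

```python
import hashlib
import secrets

# Commitment phase: published weeks in advance, before flights are booked.
secret_key = secrets.token_bytes(32)
commitment = hashlib.sha256(secret_key).hexdigest()
print("published in advance:", commitment)

# Reveal phase: after the day's flights, the key is disclosed. Anyone can
# check it against the old commitment and re-run the selection algorithm.
assert hashlib.sha256(secret_key).hexdigest() == commitment
```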

In short: I think that either the commitment should be made a long time in advance (maybe several weeks or even months), or it should include something the government can’t predict or manipulate, but the passenger can verify.

Maybe we should just ask the passenger to pick a number, and use that in the algorithm (along with many other, verifiable factors, of course).

The die roll is also inherently flawed; it would be trivial to include a small magnet or some ferromagnetic material on the side opposite the “go through invasive screening” side, and an electromagnet in the table, so that the agent or an automated system could force that particular outcome. In the case of a magnet, the automated system could also force a “clear” outcome for certain people who are deemed to be more equal than others.

Julien Couvreur September 21, 2012 1:53 PM

Interesting concept. But beyond a specific approach which we can argue about (should the TSA do X or Y?), the broader solution is to get rid of a government-monopolized solution. Only competition can bring about the right experiments and trade-offs.

Someone September 21, 2012 2:56 PM

@SparkyGSX: “if the key is sufficiently large, the Evil Government could just select whoever they want, and figure out which key would generate the selection of that particular day.”

That’s what the precommitment and the die roll are designed to prevent: the government chooses R before the die roll (so they can’t specifically choose R to cancel out the dice), and they commit to it (for example, by publishing a hash of their selection before making the roll) so they can’t change it after the fact.

Now, if you can’t make an accountable random-number generator AT ALL, then this doesn’t work. But you don’t need a number that is BOTH random AND secret, just one of each.
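In sketch form (this is my reading of the scheme; the SHA-256 and the 1-in-16 rate are stand-ins, not anything from the article):

```python
import hashlib
import secrets

# 1. Commit: the government picks secret R and publishes H(R) before the roll.
R = secrets.token_bytes(32)
commitment = hashlib.sha256(R).hexdigest()

# 2. Public randomness: the die is rolled in the open; random but not secret.
die = 4

# 3. Selection mixes the secret-but-committed R with the public-but-random die.
def selected(passenger_id: str) -> bool:
    digest = hashlib.sha256(R + bytes([die]) + passenger_id.encode()).digest()
    return digest[-1] % 16 == 0  # stand-in 1-in-16 selection rate

# 4. Audit: R is revealed afterward; anyone verifies H(R) == commitment and
#    re-runs selected() for each passenger. R alone is secret, the die alone
#    is random; neither needs to be both.
print(selected("JOHN Q PUBLIC"))
```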

@Julien Couvreur:

How is competition going to help us here? Most of the costs of a 9/11-type incident were externalities from the airline’s perspective; I’m not sure you could accurately calculate them to assign them as penalties, and I’m even more skeptical that the security company would actually be able to pay them if you did. Plus, competition is notoriously bad at balancing big but rare risks.

tunguuz September 21, 2012 3:08 PM

Cryptographers make a big mistake in assuming that a theoretically secure technical solution is enough. In reality, only practical security matters: the kind achieved via (possibly faulty) technology, but accompanied by guards and dogs.

Processes, fuzzing, and complexity are what ensure the resulting security, i.e. people and procedures. Theoretical security can offer many interesting intellectual challenges, but otherwise it is not sufficient to make things secure enough.
