Schneier on Security
A blog covering security and security technology.
February 2013 Archives
Recently, Elon Musk and the New York Times took to Twitter and the Internet to argue the data -- and their grievances -- over a failed road test and car review. Meanwhile, an Applebee's server is part of a Change.org petition to get her job back after posting a pastor's no-tip receipt comment online. And when he wasn't paid quickly enough, a local Fitness SF web developer rewrote the company's webpage to air his complaint.
All of these "cases" are seeking their judgments in the court of public opinion. The court of public opinion has a full docket; even brick-and-mortar establishments aren't immune.
More and more individuals -- and companies -- are augmenting, even bypassing entirely, traditional legal process hoping to get a more favorable hearing in public.
Every day we have to interact with thousands of strangers, from people we pass on the street to people who touch our food to people we enter short-term business relationships with. Even though most of us don't have the ability to protect our interests with physical force, we can all be confident when dealing with these strangers because -- at least in part -- we trust that the legal system will intervene on our behalf in case of a problem. Sometimes that problem involves people who break the rules of society, and the criminal courts deal with them; when the problem is a disagreement between two parties, the civil courts deal with it. Courts are an ancient system of justice, and modern society cannot function without them.
What matters in this system are the facts and the laws. Courts are intended to be impartial and fair in doling out their justice, and societies flourish based on the extent to which we approach this ideal. When courts are unfair -- when judges can be bribed, when the powerful are treated better, when more expensive lawyers produce more favorable outcomes -- society is harmed. We become more fearful and less able to trust each other. We are less willing to enter into agreement with strangers, and we spend more effort protecting our own because we don't believe the system is there to back us up.
The court of public opinion is an alternative system of justice. It's very different from the traditional court system: This court is based on reputation, revenge, public shaming, and the whims of the crowd. Having a good story is more important than having the law on your side. Being a sympathetic underdog is more important than being fair. Facts matter, but there are no standards of accuracy. The speed of the Internet exacerbates this; a good story spreads faster than a bunch of facts.
This court delivers reputational justice. Arguments are measured in relation to reputation. If one party makes a claim against another that seems plausible, based on both of their reputations, then that claim is likely to be received favorably. If someone makes a claim that clashes with the reputations of the parties, then it's likely to be disbelieved. Reputation is, of course, a commodity, and loss of reputation is the penalty this court imposes. In that respect, it less often recompenses the injured party and more often exacts revenge or retribution. And while those losses may be brutal, the effects are usually short-lived.
The court of public opinion has significant limitations. It works better for revenge and justice than for dispute resolution. It can punish a company for unfairly firing one of its employees or lying in an automobile test drive, but it's less effective at unraveling a complicated patent litigation or navigating a bankruptcy proceeding.
In many ways, this is a return to a medieval notion of "fama," or reputation. In other ways, it's like mob justice: sometimes benign and beneficial, sometimes terrible (think French Revolution). Trial by public opinion isn't new; remember Rodney King and O.J. Simpson?
Mass media has enabled this system for centuries. But the Internet, and social media in particular, has changed how it's being used.
Now it's being used more deliberately, more often, by more and more powerful entities as a redress mechanism. Perhaps it's perceived to be more efficient, or perhaps one of the parties feels they can get a more favorable hearing in this new court; either way, it's being used instead of lawsuits. Instead of a sideshow to actual legal proceedings, it is turning into an alternate system of dispute resolution and justice.
Part of this trend is because the Internet makes taking a case in front of the court of public opinion so much easier. It used to be that the injured party had to convince a traditional media outlet to make his case public; now he can take his case directly to the people. And while it's still a surprise when some cases go viral while others languish in obscurity, it's simply more effective to present your case on Facebook or Twitter.
Another reason is that the traditional court system is increasingly viewed as unfair. Today, money can buy justice: not by directly bribing judges, but by hiring better lawyers and forcing the other side to spend more money than they are able to. We know that the courts treat the rich and the poor differently, that corporations can get away with crimes individuals cannot, and that the powerful can lobby to get the specific laws and regulations they want -- irrespective of any notions of fairness.
Smart companies have already prepared for battles in the court of public opinion. They've hired policy experts. They've hired firms to monitor Facebook, Twitter, and other Internet venues where these battles originate. They have response strategies and communications plans in place. They've recognized that while this court is very different from the traditional legal system, money and power do count, and that there are ways to tip the outcomes in their favor: For example, fake grassroots movements can be just as effective on the Internet as they can in the offline world.
It's time we recognize the court of public opinion for what it is -- an alternative crowd-enabled system of justice. We need to start discussing its merits and flaws; we need to understand when it results in justice, and how it can be manipulated by the powerful. We also need to have a frank conversation about the failings of the traditional justice scheme, and why people are motivated to take their grievances to the public. Despite 24-hour PR firms and incident-response plans, this is a court where corporations and governments are at an inherent disadvantage. And because the weak will continue to run ahead of the powerful, those in power will prefer to use the more traditional mechanisms of government: police, courts, and laws.
Social-media researcher danah boyd had it right when she wrote in Wired: "In a networked society, who among us gets to decide where the moral boundaries lie? This isn't an easy question and it's at the root of how we, as a society, conceptualize justice." It's not an easy question, but it's the key question. The moral and ethical issues surrounding the court of public opinion are the real ones, and ones that society will have to tackle in the decades to come.
Three brazen robberies are in the news this week.
The first was a theft at a small museum of gold nuggets worth $750,000:
Police said the daring heist happened between daytime tours, during a 20-minute window. Museum employees said the thief used an ax to smash the acrylic window, and then left the ax behind.
The second was at the Four Seasons Hotel in New York:
But now, the thieves have shattered the sense of security at the hotel, following the daring smash-and-grab around 2 a.m. Saturday in the middle of the hotel's spectacular lobby.
And the third was the largest -- $50 million in diamonds stolen from the Brussels Airport:
Forcing their way through the airport's perimeter fence, the thieves raced, police lights flashing, to Flight LX789, which had just been loaded with diamonds from a Brink's armored van from Antwerp, Belgium, and was getting ready for an 8:05 p.m. departure for Zurich.
I don't have anywhere near enough data to call this a trend, but the similarities are striking. In all cases, the robbers barreled straight through security, relying on surprise and speed. In all cases, security based on response wasn't fast enough to do any good. And in all cases, there's surveillance video that -- at least so far -- isn't very useful.
It's important to remember that, even in our high-tech Internet world, sometimes smash-and-grab still works.
EDITED TO ADD (3/13): A similar case from The Netherlands.
Someone has analyzed the security mistakes in the Battle of Hoth, from the movie The Empire Strikes Back.
EDITED TO ADD (2/27): A series of rebuttals.
I would have liked to participate in this hearing: Committee on Homeland Security, Subcommittee on Oversight and Management Efficiency: "Assessing DHS 10 Years Later: How Wisely is DHS Spending Taxpayer Dollars?" February 15, 2013.
I'll be speaking twice at the RSA Conference this year. I'm giving a solo talk Tuesday at 1:00, and participating in a debate about training Wednesday at noon. This is a short written preview of my solo talk, and this is an audio interview on the topic.
Additionally: Akamai is giving away 1,500 copies of Liars and Outliers, and Zscaler is giving away 300 copies of Schneier on Security. I'll be doing book signings in both of those companies' booths and at the conference bookstore.
The Montréal Review asked me to write an essay about my latest book. Not much that regular readers haven't seen before.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
I was a guest on Inventing the Future, for an episode on surveillance technology. The video is here.
As the College of Cardinals prepares to elect a new pope, security people like me wonder about the process. How does it work, and just how hard would it be to hack the vote?
The rules for papal elections are steeped in tradition. John Paul II last codified them in 1996, and Benedict XVI left the rules largely untouched. The "Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff" is surprisingly detailed.
Every cardinal younger than 80 is eligible to vote. We expect 117 to be voting. The election takes place in the Sistine Chapel, directed by the church chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is open.
First, there's the "pre-scrutiny" phase.
"At least two or three" paper ballots are given to each cardinal, presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected from the cardinals: three "scrutineers" who count the votes; three "revisers" who verify the results of the scrutineers; and three "infirmarii" who collect the votes from those too sick to be in the chapel. Different sets of officials are chosen randomly for each ballot.
Each cardinal, including the nine officials, writes his selection for pope on a rectangular ballot paper "as far as possible in handwriting that cannot be identified as his." He then folds the paper lengthwise and holds it aloft for everyone to see.
When everyone has written his vote, the "scrutiny" phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten -- the shallow metal plate used to hold communion wafers during Mass -- resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.
If a cardinal cannot walk to the altar, one of the scrutineers -- in full view of everyone -- does this for him.
If any cardinals are too sick to be in the chapel, the scrutineers give the infirmarii a locked empty box with a slot, and the three infirmarii together collect those votes. If a cardinal is too sick to write, he asks one of the infirmarii to do it for him. The box is opened, and the ballots are placed onto the paten and into the chalice, one at a time.
When all the ballots are in the chalice, the first scrutineer shakes it several times to mix them. Then the third scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.
To count the votes, each ballot is opened, and the vote is read by each scrutineer in turn, the third one aloud. Each scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals.
The total number of votes cast for each person is written on a separate sheet of paper. Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded as well.
Then there's the "post-scrutiny" phase. The scrutineers tally the votes and determine whether there's a winner. We're not done yet, though.
The revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. That's where the smoke comes from: white if a pope has been elected, black if not -- the black smoke is created by adding water or a special chemical to the ballots.
Being elected pope requires a two-thirds plus one vote majority. This is where Pope Benedict made a change. Traditionally a two-thirds majority had been required for election. Pope John Paul II changed the rules so that after roughly 12 days of fruitless votes, a simple majority was enough to elect a pope. Benedict reversed this rule.
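The counting rules described so far (overvotes void, undervotes presumably void, the ballot total cross-checked against the number of electors, and a two-thirds-plus-one threshold) are simple enough to sketch in code. This is an illustrative toy with my own names and data shapes, not the actual procedure; in particular, the exact rounding of "two-thirds plus one" is my assumption.

```python
from collections import Counter

def tally(ballots, num_electors):
    """Toy tally of a papal-style ballot.

    Each ballot is a list of the names written on it.  Ballots naming
    more than one candidate (overvotes) or no one (undervotes) are void
    but still count toward the ballot total.  Election requires
    "two-thirds plus one" of the electors -- my reading of that rule.
    """
    # Rule: if the ballot count is wrong, burn everything and revote.
    if len(ballots) != num_electors:
        return None, "count mismatch: burn the ballots and revote"
    # Only ballots with exactly one name are valid votes.
    counts = Counter(b[0] for b in ballots if len(b) == 1)
    threshold = (2 * num_electors) // 3 + 1  # "two-thirds plus one"
    for name, n in counts.most_common(1):
        if n >= threshold:
            return name, "elected"
    return None, "no winner: vote again"
```

With 117 electors the threshold works out to 79, so 80 valid votes for one candidate elects him while 78 does not.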
How hard would this be to hack?
First, the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky.
Second, the small group of voters -- all of whom know each other -- makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In short, the voter verification process is about as good as you're ever going to find.
A cardinal can't stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once -- his ballot is visible -- and also keeps his hand out of the chalice holding the other votes. Not that they haven't thought about this: The cardinals are in "choir dress" during the voting, which has translucent lace sleeves under a short red cape, making sleight-of-hand tricks much harder. Additionally, the total would be wrong.
The rules anticipate this in another way: "If during the opening of the ballots the scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name, they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled." This surprises me, as it seems more likely to happen by accident and result in two cardinals' votes not being counted.
Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there's one wrinkle: "If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote." I assume that's done so there's only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting.
The scrutineers are in the best position to modify votes, but it's difficult. The counting is conducted in public, and there are multiple people checking every step. It'd be possible for the first scrutineer, if he were good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third scrutineer to swap ballots during the counting process. Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals change their mind more often during the voting process, so distributing extra blank ballots makes sense.
There's so much checking and rechecking that it's just not possible for a scrutineer to misrecord the votes. And since they're chosen randomly for each ballot, the probability of a cabal being selected is extremely low. More interesting would be to try to attack the system of selecting scrutineers, which isn't well-defined in the document. Influencing the selection of scrutineers and revisers seems a necessary first step toward influencing the election.
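That "extremely low" probability can be made concrete. Assuming uniform random selection (the document doesn't specify the mechanism, so this is my model), the chance that a cabal of c cardinals among 117 captures all three scrutineer seats on a single ballot is C(c,3)/C(117,3):

```python
from math import comb

def p_cabal_controls_scrutineers(cabal_size, electors=117, scrutineers=3):
    """Probability that a uniformly random draw of scrutineers consists
    entirely of cabal members.  The uniform-selection assumption is mine;
    the actual selection mechanism isn't defined in the document."""
    if cabal_size < scrutineers:
        return 0.0
    return comb(cabal_size, scrutineers) / comb(electors, scrutineers)
```

A ten-member cabal, for example, controls the scrutineers on any given ballot with probability 120/260,130, or about 0.05 percent -- and since the officials are re-drawn for every ballot, sustained control across rounds is vastly less likely still.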
If there's a weak step, it's the counting of the ballots.
There's no real reason to do a precount, and it gives the scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. Shaking the chalice to randomize the ballots is smart, but putting the ballots in a wire cage and spinning it around would be more secure -- albeit less reverent.
I would also add some kind of white-glove treatment to prevent a scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate's name in full provides some resistance against this sort of attack.
Probably the biggest risk is complacency. What might seem beautiful in its tradition and ritual during the first ballot could easily become cumbersome and annoying after the twentieth ballot, and there will be a temptation to cut corners to save time. If the cardinals do that, the election process becomes more vulnerable.
A 1996 change in the process lets the cardinals go back and forth from the chapel to their dorm rooms, instead of being locked in the chapel the whole time, as was done previously. This makes the process slightly less secure but a lot more comfortable.
Of course, one of the infirmarii could do what he wanted when transcribing the vote of an infirm cardinal. There's no way to prevent that. If the infirm cardinal were concerned about tampering but willing to give up ballot secrecy, he could ask all three infirmarii to witness the ballot.
There are also enormous social -- religious, actually -- disincentives to hacking the vote. The election takes place in a chapel and at an altar. The cardinals swear an oath as they are casting their ballot -- further discouragement. The chalice and paten are the implements used to celebrate the Eucharist, the holiest act of the Catholic Church. And the scrutineers are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication.
The other major security risk in the process is eavesdropping from the outside world. The election is supposed to be a completely closed process, with nothing communicated to the world except a winner. In today's high-tech world, this is very difficult. The rules explicitly state that the chapel is to be checked for recording and transmission devices "with the help of trustworthy individuals of proven technical ability." That was a lot easier in 2005 than it will be in 2013.
What are the lessons here?
First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything.
Second, small and simple elections are easier to secure. This kind of process works to elect a pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems could work for a larger group would be through a pyramid-like mechanism, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.
And third: When an election process is left to develop over the course of a couple of thousand years, you end up with something surprisingly good.
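The pyramid mechanism from the second lesson can be sketched in a few lines: each small group hand-counts its own ballots in public, then reports only its totals (plus ballots cast, so the next level can sanity-check) up the chain. A sketch of the mechanism only, with my own names; it is not any real election system:

```python
from collections import Counter

def pyramid_tally(group_reports):
    """Aggregate hand-counted group totals up one level of the pyramid.

    Each group reports a (candidate -> votes) mapping plus its ballot
    count, so the level above can check that no group reports more
    votes than ballots cast.
    """
    total = Counter()
    ballots_seen = 0
    for counts, ballots_cast in group_reports:
        if sum(counts.values()) > ballots_cast:
            raise ValueError("group reported more votes than ballots cast")
        total.update(counts)
        ballots_seen += ballots_cast
    return total, ballots_seen
```

The same function can then be applied again at the next level up, with each regional total becoming one "group report" in a larger aggregation.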
This is interesting:
In the security practice, we have our own version of no-man's land, and that's midsize companies. Wendy Nather refers to these folks as being below the "Security Poverty Line." These folks have a couple hundred to a couple thousand employees. That's big enough to have real data interesting to attackers, but not big enough to have a dedicated security staff and the resources they need to really protect anything. These folks are caught between the baseline and the service box. They default to compliance mandates like PCI-DSS because they don't know any better. And the attackers seem to sneak those passing shots by them on a seemingly regular basis.
I've seen this trend, and I think it's a result of the increasing sophistication of the IT industry. Today, it's increasingly rare for organizations to have bespoke security, just as it's increasingly rare for them to have bespoke IT. It's only the larger organizations that can afford it. Everyone else is increasingly outsourcing their IT to cloud providers. These providers are taking care of security -- although we can certainly argue about how good a job they're doing -- so that the organizations themselves don't have to. A company whose email consists entirely of Gmail accounts, whose payroll is entirely outsourced to Paychex, whose customer tracking system is entirely on Salesforce.com, and so on -- and who increasingly accesses those systems using specialized devices like iPads and Android tablets -- simply doesn't have any IT infrastructure to secure anymore.
To be sure, I think we're a long way off from this future being a secure one, but it's the one the industry is headed toward. Yes, vendors at the RSA Conference are only selling to the largest organizations. And, as I wrote back in 2008, soon they will only be selling to IT outsourcing companies (the term "cloud provider" hadn't been invented yet):
For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure -- power, water, cleaning service, tax preparation -- customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.
Wow, is this a crazy media frenzy. We should know better. These attacks happen all the time, and just because the media is reporting about them with greater frequency doesn't mean that they're happening with greater frequency.
In a private e-mail, Gary McGraw made an important point about attribution that matters a lot in this debate.
Because espionage unfolds over months or years in realtime, we can triangulate the origin of an exfiltration attack with some certainty. During the fog of a real cyber war attack, which is more likely to happen in milliseconds, the kind of forensic work that Mandiant did would not be possible. (In fact, we might just as well be "Gandalfed" and pin the attack on the wrong enemy.)
This media frenzy is going to be used by the U.S. military to grab more power in cyberspace. They're already ramping up the U.S. Cyber Command. President Obama is issuing vague executive orders that will result in we-don't-know what. I don't see any good coming of this.
EDITED TO ADD (3/13): Critical commentary on the Mandiant report.
Abstract: Older adults are disproportionately vulnerable to fraud, and federal agencies have speculated that excessive trust explains their greater vulnerability. Two studies, one behavioral and one using neuroimaging methodology, identified age differences in trust and their neural underpinnings. Older and younger adults rated faces high in trust cues similarly, but older adults perceived faces with cues to untrustworthiness to be significantly more trustworthy and approachable than younger adults. This age-related pattern was mirrored in neural activation to cues of trustworthiness. Whereas younger adults showed greater anterior insula activation to untrustworthy versus trustworthy faces, older adults showed muted activation of the anterior insula to untrustworthy faces. The insula has been shown to support interoceptive awareness that forms the basis of "gut feelings," which represent expected risk and predict risk-avoidant behavior. Thus, a diminished "gut" response to cues of untrustworthiness may partially underlie older adults' vulnerability to fraud.
EDITED TO ADD (3/12): I think this result reflects the fact that older people discount the future more than young ones, and therefore are more willing to gamble on a good outcome. It makes sense biologically; they have less future ahead of them. We see the same thing in pregnancy; older mothers have a higher threshold for spontaneous abortion of a risky embryo than younger mothers.
Good summary of cheating in tournament chess.
How international soccer matches are fixed.
Right now, Dan Tan's programmers are busy reverse-engineering the safeguards of online betting houses. About $3 billion is wagered on sports every day, most of it on soccer, most of it in Asia. That's a lot of noise on the big exchanges. We can exploit the fluctuations, rig the bets in a way that won't trip the houses' alarms. And there are so many moments in a soccer game that could swing either way. All you have to do is see an Ilves tackle in the box where maybe the Viikingit forward took a dive. It happens all the time. It would happen anyway. So while you're running around the pitch in Finland, the syndicate will have computers placing high-volume max bets on whatever outcome the bosses decided on, using markets in Manila that take bets during games, timing the surges so the security bots don't spot anything suspicious. The exchanges don't care, not really. They get a cut of all the action anyway. The system is stacked so it's gamblers further down the chain who bear all the risks.
There's a nice example of traffic analysis in the book No Name, by Wilkie Collins (1862). The attacker, Captain Wragge, needs to know whether a letter has been placed in the mail. He knows who it will have been addressed to if it has been mailed, and with that information, is able to convince the postmaster to tell him that it has, in fact, been mailed:
If she had gone to the admiral's, no choice would be left him but to follow the coach, to catch the train by which she traveled, and to outstrip her afterward on the drive from the station in Essex to St. Crux. If, on the contrary, she had been contented with writing to her master, it would only be necessary to devise measures for intercepting the letter. The captain decided on going to the post-office, in the first place. Assuming that the housekeeper had written, she would not have left the letter at the mercy of the servant—she would have seen it safely in the letter-box before leaving Aldborough.
Hacking citation counts using Google Scholar.
After the New York Times broke the story of what seemed to be a state-sponsored hack from China against the newspaper, the Register has stories of two similar attacks: one from Burma and another from China.
Tesla Motors gave one of its electric cars to John Broder, a very outspoken electric-car skeptic from the New York Times, for a test drive. After a negative review, Tesla revealed that it logged a dizzying amount of data from that test drive. The company then matched the reporter's claims against its logs and published a rebuttal. Broder rebutted the rebuttal, and others have tried to figure out who is lying and who is not.
What's interesting to me is the sheer amount of data Tesla Motors automatically collected about the test drive. From the rebuttal:
After a negative experience several years ago with Top Gear, a popular automotive show, where they pretended that our car ran out of energy and had to be pushed back to the garage, we always carefully data log media drives.
Read the article to see what they logged: power consumption, speed, ambient temperature, control settings, location, and so on.
The stakes are high here. Broder and the New York Times are concerned about their journalistic integrity, which affects their brand. And Tesla Motors wants to sell cars.
The implication is that Tesla Motors only does this for media test drives, but it gives you an idea of the sort of things that will be collected once automobile black boxes become the norm. We're used to airplane black boxes, which only collected a small amount of data from the minutes just before an incident. But that was back when data was expensive. Now that it's cheap, expect black boxes to collect everything all the time. And once it's collected, it'll be used. By auto manufacturers, by insurance companies, by car rental companies, by marketers. The list will be long.
But as we're learning from this particular back-and-forth between Broder and Tesla Motors, even intense electronic surveillance of the actions of a person in an enclosed space did not succeed in providing an unambiguous record of what happened. To know that, the car company would have had to have someone in the car with the journalist.
This will increasingly be a problem as we are judged by our data. And in most cases, neither side will spend this sort of effort trying to figure out what really happened.
EDITED TO ADD (2/21): CNN weighs in.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This speech from last December's 29C3 (29th Chaos Communication Congress) is worth listening to. He talks about what we can do in the face of oppressive power on the Internet. I'm not sure his answers are right, but am glad to hear someone talking about the real problems.
"Practicality of Accelerometer Side Channels on Smartphones," by Adam J. Aviv, Benjamin Sapp, Matt Blaze, and Jonathan M. Smith.
Abstract: Modern smartphones are equipped with a plethora of sensors that enable a wide range of interactions, but some of these sensors can be employed as a side channel to surreptitiously learn about user input. In this paper, we show that the accelerometer sensor can also be employed as a high-bandwidth side channel; particularly, we demonstrate how to use the accelerometer sensor to learn user tap and gesture-based input as required to unlock smartphones using a PIN/password or Android's graphical password pattern. Using data collected from a diverse group of 24 users in controlled (while sitting) and uncontrolled (while walking) settings, we develop sample rate independent features for accelerometer readings based on signal processing and polynomial fitting techniques. In controlled settings, our prediction model can on average classify the PIN entered 43% of the time and pattern 73% of the time within 5 attempts when selecting from a test set of 50 PINs and 50 patterns. In uncontrolled settings, while users are walking, our model can still classify 20% of the PINs and 40% of the patterns within 5 attempts. We additionally explore the possibility of constructing an accelerometer-reading-to-input dictionary and find that such dictionaries would be greatly challenged by movement-noise and cross-user training.
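The paper's "sample rate independent features ... based on signal processing and polynomial fitting techniques" suggests something like the following: fit a low-degree polynomial to each accelerometer axis over normalized time and use the coefficients as features. This is my rough guess at the general idea, not the authors' actual feature pipeline; the degree, the normalization, and the function name are all my own choices.

```python
import numpy as np

def poly_features(t, accel, degree=3):
    """Illustrative sample-rate-independent feature vector.

    t:     1-D array of sample timestamps (strictly increasing)
    accel: (n_samples, 3) array of x/y/z accelerometer readings

    Fitting over time normalized to [0, 1] makes the coefficients
    comparable across recordings with different sampling rates.
    """
    t = np.asarray(t, dtype=float)
    t = (t - t[0]) / (t[-1] - t[0])        # normalize time to [0, 1]
    accel = np.asarray(accel, dtype=float)
    # One polynomial fit per axis; concatenate the coefficients.
    feats = [np.polyfit(t, accel[:, axis], degree) for axis in range(3)]
    return np.concatenate(feats)           # (degree + 1) * 3 features
```

A classifier trained on these vectors (one per tap or gesture) would then attempt to map new readings back to PIN digits or pattern strokes, along the lines the abstract describes.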
Usability engineer Bruce Tognazzini talks about how an iWatch -- which seems to be either a mythical Apple product or one actually in development -- can make authentication easier.
Passcodes. The watch can and should, for most of us, eliminate passcodes altogether on iPhones, and Macs and, if Apple's smart, PCs: As long as my watch is in range, let me in! That, to me, would be the single-most compelling feature a smartwatch could offer: If the watch did nothing but release me from having to enter my passcode/password 10 to 20 times a day, I would buy it. If the watch would just free me from having to enter pass codes, I would buy it even if it couldn't tell the right time! I would happily strap it to my opposite wrist! This one is a must. Yes, Apple is working on adding fingerprint reading for iDevices, and that's just wonderful, but it will still take time and trouble for the device to get an accurate read from the user. I want in now! Instantly! Let me in, let me in, let me in!
With over a thousand cameras operating 24/7, the monitoring room creates tremendous amounts of data every day, most of which goes unseen. Six technicians watch about 40 monitors, but all the feeds are saved for later analysis. One day, as with OCR scanning, it might be possible to search all that data for suspicious activity. Say, a baccarat player who leaves his seat, disappears for a few minutes, and is replaced with another player who hits an impressive winning streak. An alert human might spot the collusion, but even better, video analytics might flag the scene for further review. The valuable trend in surveillance, Whiting says, is toward this data-driven analysis (even when much of the job still involves old-fashioned gumshoe work). "It's the data," he says, "and cameras now are data. So it's all data. It's just learning to understand that data is important."
This is a real story of a pair of identical twins who are suspected in a crime. There is CCTV and DNA evidence that could implicate either suspect. Detailed DNA testing that could distinguish the guilty twin is prohibitively expensive. So both have been arrested in the hope that one may confess or implicate the other.

There's not a lot of information -- and quite a lot of hyperbole -- in this article:
With the release of the Asrar Al Dardashah plugin, GIMF promised "secure correspondence" based on the Pidgin chat client, which supports multiple chat platforms, including Yahoo Messenger, Windows Live Messenger, AOL Instant Messenger, Google Talk and Jabber/XMPP.
This is an amazing story. I urge you to read the whole thing, but here's the basics:
A November car chase ended in a "full blown-out" firefight, with glass and bullets flying, according to Cleveland police officers who described for investigators the chaotic scene at the end of the deadly 25-minute pursuit.
At the end of the scene, both unarmed -- and presumably innocent -- people in the car were dead.
There's a lot that can be said here, but I don't feel qualified to say it. There's a whole body of research on decision making under stress -- police, firefighters, soldiers -- and how easy it is to get caught up in the heat of the moment. I have read one book on that subject, Sources of Power, but that was years ago.
What interests me right now is how this whole situation was colored by what "society" is talking about and afraid of, which became the preconceptions the officers brought to the event. School shootings are in the news, so as soon as the car drove into a school parking lot, the police assumed the worst. Firefights with dangerous criminals are what we see on TV, so that's not unexpected, either. When you read the story, it's clear how many of the elements that the officers believed -- police cars being rammed, for example -- are right out of television violence. This would have turned out very differently if the officers had assumed that, as is almost always true, the two people in the car were just two people in a car.
I'm also curious as to how much technology contributed to this. Reports on the radio brought more and more officers to the scene, and misinformation was broadcast over the radio.
Again, I'm not really qualified to write about any of this. But it's what I've been thinking about.
Society runs on trust. Over the millennia, we've developed a variety of mechanisms to induce trustworthy behavior in society. These range from a sense of guilt when we cheat, to societal disapproval when we lie, to laws that arrest fraudsters, to door locks and burglar alarms that keep thieves out of our homes. They're complicated and interrelated, but they tend to keep society humming along.
The information age is transforming our society. We're shifting from evolved social systems to deliberately created socio-technical systems. Instead of having conversations in offices, we use Facebook. Instead of meeting friends, we IM. We shop online. We let various companies and governments collect comprehensive dossiers on our movements, our friendships, and our interests. We let others censor what we see and read. I could go on for pages.
None of this is news to anyone. But what's important, and much harder to predict, are the social changes resulting from these technological changes. With the rapid proliferation of computing devices -- both fixed and mobile -- and in-the-cloud processing, new forms of socialization have emerged. Facebook friends are fundamentally different than in-person friends. IM conversations are fundamentally different than voice conversations. Twitter has no pre-Internet analog. More social changes are coming. These social changes affect trust, and trust affects everything.
This isn't just academic. There has always been a balance in society between the honest and the dishonest, and technology continually upsets that balance. Online banking results in new types of cyberfraud. Facebook posts become evidence in employment and legal disputes. Cell phone location tracking can be used to round up political dissidents. Random blogs and websites become trusted sources, abetting propaganda. Crime has changed: easier impersonation, action at a greater distance, automation, and so on. The more our nation's infrastructure relies on cyberspace, the more vulnerable we are to cyberattack.
Think of this as a "security gap": the time lag between when the bad guys figure out how to exploit a new technology and when the good guys figure out how to restore society's balance.
Critically, the security gap is larger when there's more technology, and especially in times of rapid technological change. More importantly, it's larger in times of rapid social change due to the increased use of technology. This is our world today. We don't know *how* the proliferation of networked, mobile devices will affect the systems we have in place to enable trust, but we do know it *will* affect them.
Trust is as old as our species. It's something we do naturally, and informally. We don't trust doctors because we've vetted their credentials, but because they sound learned. We don't trust politicians because we've analyzed their positions, but because we generally agree with their political philosophy -- or the buzzwords they use. We trust many things because our friends trust them. It's the same with corporations, government organizations, strangers on the street: this thing that's critical to society's smooth functioning occurs largely through intuition and relationship. Unfortunately, these traditional and low-tech mechanisms are increasingly failing us. Understanding how trust is being, and will be, affected -- probably not by predicting, but rather by recognizing effects as quickly as possible -- and then deliberately creating mechanisms to induce trustworthiness and enable trust, is the only thing that will enable society to adapt.
If there's anything I've learned in all my years working at the intersection of security and technology, it's that technology is rarely more than a small piece of the solution. People are always the issue and we need to think as broadly as possible about solutions. So while laws are important, they don't work in isolation. Much of our security comes from the informal mechanisms we've evolved over the millennia: systems of morals and reputation.
New regimes of trust will emerge in the information age. They simply must, or society will suffer unpredictably. We have already begun fleshing out such regimes, albeit in an ad hoc manner. It's time for us to deliberately think about how trust works in the information age, and to use legal, social, and technological tools to enable this trust. We might get it right by accident, but if we leave it to chance, getting there will be a long and ugly iterative process.
This essay was originally published in The SciTech Lawyer, Winter/Spring 2013.
This is an extremely clever man-in-the-middle timing attack against TLS that exploits the interaction between how the protocol implements AES in CBC mode for encryption, and HMAC-SHA1 for authentication. (And this is a really good plain-language description of it.)
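As a rough illustration -- not the actual attack, and with a hypothetical function name -- the root of the problem is TLS's MAC-then-encrypt ordering: the receiver strips CBC padding *before* verifying the HMAC, so the amount of data fed to HMAC-SHA1, and hence the number of SHA-1 compression-function calls, depends on the padding byte. A man-in-the-middle who can measure that timing difference learns something about the plaintext.

```python
import hmac
import hashlib

def naive_tls_record_check(decrypted, mac_key):
    """Hypothetical sketch of a MAC-then-encrypt record check.

    The last byte of a CBC-decrypted TLS record states the padding
    length.  Because padding is removed before HMAC verification,
    the HMAC-SHA1 computation time varies with the padding byte --
    the timing side channel a Lucky-13-style attacker measures.
    """
    pad_len = decrypted[-1]
    # Strip the padding bytes plus the length byte itself.
    body = decrypted[:len(decrypted) - pad_len - 1]
    # The last 20 bytes of what remains are the SHA-1 tag.
    tag, body = body[-20:], body[:-20]
    expected = hmac.new(mac_key, body, hashlib.sha1).digest()
    return hmac.compare_digest(tag, expected)
```

A constant-time fix has to process the same number of compression-function blocks regardless of the padding value, which is exactly what made the patches for this attack so fiddly to get right.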
Interesting article about the difficulty Google has pushing security updates onto Android phones. The problem is that the phone manufacturer is in charge, and there are a lot of different phone manufacturers of varying ability and interest.
Chorizo-stuffed squid with potatoes, capers and sage.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
This seems so obviously written by someone who Googled me on the Internet, without any other knowledge of who I am or what I do.
This long report looks at risky online behavior among the Millennial generation, and finds that they respond positively to automatic reminders and prodding. No surprise, really.
A first-person account of the security surrounding the second inauguration of President Obama. Read it more for the details than for the author's reaction to them.
Basically, Tide detergent is a popular product with a very small profit margin. So small non-chain grocery and convenience stores are happy to buy it cheaply, no questions asked. This makes it easy to sell if you steal it. And drug dealers have started taking it as currency, large bottles being worth about $5.
EDITED TO ADD (2/13): Snopes rates this as "undetermined."
Google's contest at the CanSecWest conference:
Today we’re announcing our third Pwnium competition: Pwnium 3. Google Chrome is already featured in the Pwn2Own competition this year, so Pwnium 3 will have a new focus: Chrome OS.
Note that I do not have the physics to evaluate these claims.
The New York Times hack was big news last week, and I spent a lot of time doing press interviews about it. But while it is an important story -- hacking a newspaper for confidential sources is fundamentally different from hacking a random network for financial gain -- it's not much different from GhostNet in 2009, Google's Chinese hacking stories from 2010 and 2011, or others.
Why all the press, then? Turns out that if you hack a major newspaper, one of the side effects is a 2,400-word newspaper story about the event.
It's a good story, and I recommend that people read it. The newspaper learned of the attack early on, and had a reporter embedded in the team as they spent months watching the hackers and clearing them out. So there's a lot more detail than you usually get. But otherwise, this seems like just another of the many cyberattacks from China. (It seems that the Wall Street Journal was also hacked, but they didn't write about it. This tells me that, with high probability, other high-profile news organizations around the world were hacked as well.)
My favorite bit of the New York Times story is when they ding Symantec for not catching the attacks:
Over the course of three months, attackers installed 45 pieces of custom malware. The Times -- which uses antivirus products made by Symantec -- found only one instance in which Symantec identified an attacker’s software as malicious and quarantined it, according to Mandiant.
Symantec, of course, had to respond:
Turning on only the signature-based anti-virus components of endpoint solutions alone are not enough in a world that is changing daily from attacks and threats. We encourage customers to be very aggressive in deploying solutions that offer a combined approach to security. Anti-virus software alone is not enough.
It's nice to have them on record as saying that.
EDITED TO ADD (2/6): This blog post on Symantec's response is really good.
Clothing designed to thwart drones.
I just printed this out: "Proactive Defense for Evolving Cyber Threats," a Sandia Report by Richard Colbaugh and Kristin Glass. It's a collection of academic papers, and it looks interesting.
I don't see a lot written about security seals, despite how common they are. This article is a very basic overview of the technologies.
"It's really hard for the government to censor things when they don't understand the made-up words or meaning behind the imagery," said Kevin Lee, COO of China Youthology, in conversation at the DLD conference in Munich on Monday. "The people there aren't even relying on text anymore. It's audio, visual, photos. All the young people are creating their own languages."
Webpage says that it's "the most effective lightweight, portable anchor around."
The Washington Post has the story:
The move, requested by the head of the Defense Department's Cyber Command, is part of an effort to turn an organization that has focused largely on defensive measures into the equivalent of an Internet-era fighting force. The command, made up of about 900 personnel, will expand to include 4,900 troops and civilians.
Jared Diamond has an op-ed in the New York Times where he talks about how we overestimate rare risks and underestimate common ones. Nothing new here -- I and others have written about this sort of thing extensively -- but he says that this is a bias found more in developed countries than in primitive cultures.
I first became aware of the New Guineans' attitude toward risk on a trip into a forest when I proposed pitching our tents under a tall and beautiful tree. To my surprise, my New Guinea friends absolutely refused. They explained that the tree was dead and might fall on us.
Diamond has a point. While it's universally true that humans exaggerate rare and spectacular risks and downplay mundane and common risks, we in developed countries do it more. The reason, I think, is how fears propagate. If someone in New Guinea gets eaten by a tiger -- do they even have tigers in New Guinea? -- then those who know the victim or hear about it learn to fear tiger attacks. If it happens in the U.S., it's the lead story on every news program, and the entire country fears tigers. Technology magnifies rare risks. Think of plane crashes versus car crashes. Think of school shooters versus home accidents. Think of 9/11 versus everything else.
On the other side of the coin, we in the developed world have largely made the pedestrian risks invisible. Diamond makes the point that, for an older man, falling is a huge risk, and showering is especially dangerous. How many people do you know who have fallen in the shower and seriously hurt themselves? I can't think of anyone. We tend to compartmentalize our old, our poor, our different -- and their accidents don't make the news. Unless it's someone we know personally, we don't hear about it.
EDITED TO ADD (2/21): George Burns fatally fell in the shower at age 98.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.