Entries Tagged "sensors"

Metal Detectors at Sports Stadiums

Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.

As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive — people with phones and keys in their pockets are sailing through — and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.

There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games — and there hasn’t — this is a whole lot of time and money being spent to combat an imaginary threat.

But imaginary threats are the only ones baseball executives have to stop this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.

This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.

It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.

In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right — security for the careers of those in charge.

I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.

I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?

That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.

So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem — most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.

We no longer need the terrorists. We’re now so good at terrorizing ourselves.

This essay previously appeared in the Washington Post.

Posted on April 15, 2015 at 6:58 AM

Automatic Scanning for Highly Stressed Individuals

This borders on ridiculous:

Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.

[…]

“They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies,” Chen said.

Officers looking through the device at a crowd would see a mental “stress bar” above each person’s head, and the suspects highlighted with a red face.

The researchers said they were able to use the technology to distinguish high blood-oxygen levels produced by stress from those produced by mere physical exertion.

I’m not optimistic about this technology.
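
To make the claim concrete, here is a minimal sketch of the thresholding such a system implies. Everything in it is assumed: the per-person blood-oxygen and exertion estimates, the scoring heuristic, and the flagging threshold are invented for illustration, and actually extracting those numbers from a camera pointed at a crowd is precisely the part I doubt.

```python
# Toy illustration only: flags "highly stressed" people from invented
# per-person estimates. The inputs (a blood-oxygen estimate and an
# exertion score from motion tracking) are assumed to exist; producing
# them reliably from an image of a crowd is the unsolved part.

from dataclasses import dataclass

@dataclass
class Person:
    person_id: str
    blood_oxygen: float   # hypothetical 0..1 estimate from hyperspectral imaging
    exertion: float       # hypothetical 0..1 estimate from motion tracking

def stress_score(p: Person) -> float:
    """Crude heuristic: high blood oxygen not explained by exertion."""
    return max(0.0, p.blood_oxygen - 0.6 * p.exertion)

def flag_stressed(people, threshold=0.85):
    """Return the IDs that would get a 'red face' overlay."""
    return [p.person_id for p in people if stress_score(p) > threshold]

if __name__ == "__main__":
    crowd = [
        Person("A", blood_oxygen=0.95, exertion=0.9),  # running for a train
        Person("B", blood_oxygen=0.97, exertion=0.1),  # flagged by this heuristic
        Person("C", blood_oxygen=0.90, exertion=0.2),
    ]
    print(flag_stressed(crowd))  # ['B'] under these made-up numbers
```

Even granting every assumption, the base-rate problem remains: in a crowd of thousands, nearly every person flagged will be someone who is merely late, anxious, or unwell.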

Posted on August 13, 2014 at 6:20 AM

Building Retro Reflectors

A group of researchers has reverse-engineered the NSA’s retro reflectors and recreated them using software-defined radio (SDR):

An SDR Ossmann designed and built, called HackRF, was a key part of his work in reconstructing the NSA’s retro-reflector systems. Such systems come in two parts – a plantable “reflector” bug and a remote SDR-based receiver.

One reflector, which the NSA called Ragemaster, can be fixed to a computer’s monitor cable to pick up on-screen images. Another, Surlyspawn, sits on the keyboard cable and harvests keystrokes. After a lot of trial and error, Ossmann found these bugs can be remarkably simple devices – little more than a tiny transistor and a 2-centimetre-long wire acting as an antenna.

Getting the information from the bugs is where SDRs come in. Ossmann found that using the radio to emit a high-power radar signal causes a reflector to wirelessly transmit the data from keystrokes, say, to an attacker. The set-up is akin to a large-scale RFID-chip system. Since the signals returned from the reflectors are noisy and often scattered across different bands, SDR’s versatility is handy, says Robin Heydon at Cambridge Silicon Radio in the UK. “Software-defined radio is flexibly programmable and can tune in to anything,” he says.
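
The principle described above is simple enough to simulate. The sketch below is not Ossmann’s design and touches no real RF hardware; it just models the idea in the quote with made-up parameters: a continuous illumination signal, a reflector that re-radiates more or less strongly as a data line toggles, and a receiver that recovers the bits by envelope-detecting the noisy return.

```python
# Toy simulation of radar-illuminated backscatter, in the spirit of the
# retro-reflector description above. Not a HackRF driver and not anyone's
# actual design, just the modulation/demodulation idea with invented numbers.

import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000          # sample rate (Hz), arbitrary
f_carrier = 100_000     # illumination carrier (Hz), arbitrary
bit_rate = 1_000        # data rate of the "keystroke" line (bits/s)
samples_per_bit = fs // bit_rate

bits = rng.integers(0, 2, 32)                          # pretend keystroke data
baseband = np.repeat(bits, samples_per_bit).astype(float)

t = np.arange(baseband.size) / fs
illumination = np.cos(2 * np.pi * f_carrier * t)

# Reflector: re-radiates strongly for a 1, weakly for a 0 (on-off keying).
reflection = (0.2 + 0.8 * baseband) * illumination

# Receiver sees a weak, noisy return.
received = 0.05 * reflection + 0.02 * rng.standard_normal(reflection.size)

# Envelope detection: rectify, then smooth over several carrier periods.
window = (fs // f_carrier) * 10
envelope = np.convolve(np.abs(received), np.ones(window) / window, mode="same")

# Decide each bit from the mean envelope over its interval.
decided = [
    int(envelope[i * samples_per_bit:(i + 1) * samples_per_bit].mean()
        > envelope.mean())
    for i in range(bits.size)
]
print("sent:     ", bits.tolist())
print("recovered:", decided)
```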

The NSA devices are LOUDAUTO, SURLYSPAWN, TAWDRYYARD, and RAGEMASTER. Here are videos that talk about how TAWDRYYARD and LOUDAUTO work.

This is important research. While the information we have about these sorts of tools is largely from the NSA, it is fanciful to assume that they are the only intelligence agency using this technology. And it’s equally fanciful to assume that criminals won’t be using this technology soon, even without Snowden’s documents. Understanding and building these tools is the first step to protecting ourselves from them.

Posted on June 23, 2014 at 6:51 AM

Heartwave Biometric

Here’s a new biometric I know nothing about:

The wristband relies on authenticating identity by matching the overall shape of the user’s heartwave (captured via an electrocardiogram sensor). Unlike other biotech authentication methods — like fingerprint scanning and iris-/facial-recognition tech — the system doesn’t require the user to authenticate every time they want to unlock something. Because it’s a wearable device, the system sustains authentication so long as the wearer keeps the wristband on.
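
I don’t know how the vendor actually does the matching, but the general idea, comparing the shape of a captured heartbeat against an enrolled template, is easy to sketch. Everything below is assumed: the beats are taken to be already segmented and resampled to a fixed length, and the correlation threshold is invented.

```python
# Minimal sketch of template matching for an ECG-shape biometric.
# Assumes heartbeats have already been detected, segmented, and resampled
# to a fixed length; real systems do far more, and the threshold is invented.

import numpy as np

def normalize(beat: np.ndarray) -> np.ndarray:
    """Remove offset and scale so only the waveform's shape matters."""
    beat = beat - beat.mean()
    return beat / (np.linalg.norm(beat) + 1e-12)

def matches(template: np.ndarray, candidate: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept if the normalized beats are sufficiently correlated."""
    score = float(np.dot(normalize(template), normalize(candidate)))
    return score >= threshold

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    # Synthetic "heartbeats": a sharp peak plus a dip, purely for illustration.
    enrolled = np.exp(-((t - 0.3) ** 2) / 0.001) - 0.3 * np.exp(-((t - 0.5) ** 2) / 0.01)
    same_person = enrolled * 1.1 + 0.02 * np.random.default_rng(1).standard_normal(200)
    other_person = np.exp(-((t - 0.5) ** 2) / 0.005)

    print(matches(enrolled, same_person))   # True under these assumptions
    print(matches(enrolled, other_person))  # False
```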

EDITED TO ADD (12/13): A more technical explanation.

Posted on December 5, 2013 at 1:16 PM

iPhone Sensor Surveillance

The new iPhone has a motion sensor chip, and that opens up new opportunities for surveillance:

The M7 coprocessors introduce functionality that some may instinctively identify as “creepy.” Even Apple’s own description hints at eerie omniscience: “M7 knows when you’re walking, running, or even driving…” While it’s quietly implemented within iOS, it’s not secret for third-party apps (which require an opt-in through pop-up notification, and management through the phone’s Privacy settings). But as we know, most users blindly accept these permissions.

It all comes down to a question of agency in tracking our physical bodies.

The fact that my Fitbit tracks activity without matching it up with all my other data sources, like GPS location or my calendar, is comforting. These data silos can sometimes be frustrating when I want to query across my QS datasets, but the built-in divisions between data about my body — and data about the rest of my digital life — leave room for my intentional inquiry and interpretation.
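
What a motion coprocessor or fitness tracker infers is not mysterious: accelerometer traces for standing still, walking, running, and riding in a car look different. Here is a deliberately crude sketch of that kind of classification; the sample rate, thresholds, and labels are all invented, and real classifiers are considerably more sophisticated.

```python
# Crude sketch of activity inference from accelerometer magnitudes,
# of the kind a motion coprocessor or fitness tracker performs.
# Sample rate, thresholds, and labels are invented for illustration.

import numpy as np

FS = 50  # assumed sample rate in Hz

def classify_activity(accel_magnitude: np.ndarray) -> str:
    """Label a window of acceleration magnitudes (in g) with a coarse activity."""
    a = accel_magnitude - accel_magnitude.mean()   # remove gravity/offset
    energy = a.std()

    # Dominant frequency of the periodic component (steps show up at roughly 1-3 Hz).
    spectrum = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(a.size, d=1.0 / FS)
    dominant = freqs[spectrum[1:].argmax() + 1]    # skip the DC bin

    if energy < 0.03:
        return "still"
    if energy < 0.08:
        return "driving"            # low-level vibration, no step rhythm
    if dominant < 2.2:
        return "walking"
    return "running"

if __name__ == "__main__":
    t = np.arange(0, 10, 1.0 / FS)
    walking = 1.0 + 0.15 * np.sin(2 * np.pi * 1.8 * t)   # ~1.8 steps per second
    running = 1.0 + 0.4 * np.sin(2 * np.pi * 2.8 * t)    # ~2.8 steps per second
    driving = 1.0 + 0.05 * np.random.default_rng(2).standard_normal(t.size)
    for name, trace in [("walking", walking), ("running", running), ("driving", driving)]:
        print(name, "->", classify_activity(trace))
```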

Posted on October 16, 2013 at 7:33 AM

Surveillance and the Internet of Things

The Internet has turned into a massive surveillance tool. We’re constantly monitored on the Internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated — sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behavior, it’s still only behavior that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is Internet-enabled. Soon there will be Internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be Internet-connected tags on our clothing. In its extreme, everything can be connected to the Internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.
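
Turning that stream of location reports into “where you live” and “where you work” takes almost no cleverness. A minimal sketch, assuming nothing more than a list of timestamped latitude/longitude pings: the most common rounded location overnight is probably home, and the most common one during weekday working hours is probably work.

```python
# Minimal sketch of how timestamped location pings give away home and work.
# The data format, rounding precision, and hour ranges are assumptions;
# real analyses cluster properly, but even this crude version usually works.

from collections import Counter
from datetime import datetime

def most_common_place(pings, hour_range, precision=3):
    """pings: iterable of (datetime, lat, lon). Return the modal rounded location."""
    counts = Counter(
        (round(lat, precision), round(lon, precision))
        for when, lat, lon in pings
        if hour_range[0] <= when.hour < hour_range[1]
    )
    return counts.most_common(1)[0][0] if counts else None

def infer_home_and_work(pings):
    home = most_common_place(pings, hour_range=(0, 6))        # overnight
    work = most_common_place(
        [p for p in pings if p[0].weekday() < 5],              # weekdays only
        hour_range=(10, 16),
    )
    return home, work

if __name__ == "__main__":
    example = [
        (datetime(2013, 5, 20, 2, 0), 38.8977, -77.0365),   # overnight pings
        (datetime(2013, 5, 20, 3, 0), 38.8977, -77.0365),
        (datetime(2013, 5, 20, 11, 0), 38.8895, -77.0353),  # weekday midday pings
        (datetime(2013, 5, 20, 14, 0), 38.8895, -77.0353),
    ]
    print(infer_home_and_work(example))
```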

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be Internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the Internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies — and anyone they decide to sell the data to — to monitor how people move about their house and how they spend their time.
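
The smart-grid point deserves a concrete illustration. Appliance use shows up as step changes in a household’s aggregate power trace, which is the basis of non-intrusive load monitoring; the sketch below merely detects large steps in a made-up trace, while real disaggregation goes on to match those steps against appliance signatures.

```python
# Toy illustration of why a fine-grained power meter reveals household activity:
# individual appliances show up as step changes in the aggregate load.
# The trace and the 100 W threshold are made up; real systems go much further.

def detect_events(watts, threshold=100):
    """Return (index, change-in-watts) for every jump larger than the threshold."""
    return [
        (i, watts[i] - watts[i - 1])
        for i in range(1, len(watts))
        if abs(watts[i] - watts[i - 1]) >= threshold
    ]

if __name__ == "__main__":
    # One reading per minute: baseline load, then a kettle (~2 kW), then a TV (~150 W).
    trace = [300, 310, 305, 2305, 2310, 2300, 300, 450, 455, 460, 300]
    for minute, change in detect_events(trace):
        kind = "switched on" if change > 0 else "switched off"
        print(f"minute {minute}: something {kind} ({abs(change)} W)")
```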

Drones are another “thing” moving onto the Internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the Internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s Internet-enabled glasses — Google Glass — are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behavior and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals — not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyze this massive data stream will improve.
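
To make the point about pre-programmed queries concrete: the questions that are easy to ask are the ones that map directly onto columns someone already collects. A hypothetical sketch with an invented schema shows how trivially “what you purchased and where you were” can be answered; “what you did with it and why you went there” is not in any table.

```python
# Hypothetical sketch of the kind of query that's easy to run over collected data.
# The schema and records are invented; the point is that purchases and locations
# map directly onto stored columns, while intent does not.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE purchases (person TEXT, item TEXT, store TEXT, day TEXT);
    CREATE TABLE locations (person TEXT, place TEXT, day TEXT);
    INSERT INTO purchases VALUES ('alice', 'running shoes', 'ShoeCo', '2013-05-01');
    INSERT INTO locations VALUES ('alice', 'downtown', '2013-05-01');
""")

# Easy: join what was bought with where the buyer was that day.
rows = db.execute("""
    SELECT p.person, p.item, l.place
    FROM purchases AS p JOIN locations AS l
      ON p.person = l.person AND p.day = l.day
""").fetchall()
print(rows)   # intent, of course, is nowhere in either table
```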

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days — and nights — with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Will you know any of this? Will your friends? It depends. Lots of these devices have, and will have, privacy settings. But these settings are remarkable not in how much privacy they afford, but in how much they deny. Access will likely work the way it already does for your browsing habits, your files stored on Dropbox, your searches on Google, and your text messages from your phone: all of that data is saved by those companies — and many others — correlated, and then bought and sold without your knowledge or consent. You’d think that your privacy settings would keep random strangers from learning everything about you, but they only keep out the random strangers who don’t pay for the privilege — or who don’t work for the government and have the ability to demand the data. Power is what matters here: you’ll be able to keep the powerless from invading your privacy, but you’ll have no ability to prevent the powerful from doing it again and again.

This essay originally appeared on the Guardian.

EDITED TO ADD (6/14): Another article on the subject.

Posted on May 21, 2013 at 6:15 AM
