Entries Tagged "cameras"


Using Wi-Fi to Get 3D Images of Surrounding Location

Interesting research:

The radio signals emitted by a commercial Wi-Fi router can act as a kind of radar, providing images of the transmitter’s environment, according to new experiments. Two researchers in Germany borrowed techniques from the field of holography to demonstrate Wi-Fi imaging. They found that the technique could potentially allow users to peer through walls and could provide images 10 times per second.

News article.
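As a rough illustration of the holography idea, the sketch below back-propagates a simulated Wi-Fi field, recorded along a line of receiver positions, to form a 2D intensity image. Everything in it is assumed for illustration: the single scatterer, the receiver geometry, and the grid are made up, and real systems handle amplitude, noise, and multipath far more carefully. It is a toy of the general technique, not the researchers' method.

```python
# Toy sketch of holographic back-propagation with a Wi-Fi-scale wavelength.
# All numbers (scatterer position, receiver line, grid) are illustrative
# assumptions, not parameters from the actual experiments.
import numpy as np

WAVELENGTH = 0.125                      # ~2.4 GHz Wi-Fi: wavelength of roughly 12.5 cm
k = 2 * np.pi / WAVELENGTH

# Simulated recording: complex field (amplitude + phase) along a 2 m line,
# produced by a single hypothetical scatterer at x = 0.3 m, z = 2.0 m.
rec_x = np.linspace(-1.0, 1.0, 64)
rec_field = np.exp(1j * k * np.hypot(rec_x - 0.3, 2.0))

# Back-propagation: for each candidate point, undo the phase delay each
# receiver would have seen and sum; aligned phases produce a bright pixel.
xs = np.linspace(-1.0, 1.0, 80)
zs = np.linspace(0.5, 3.0, 80)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        r = np.hypot(rec_x - x, z)
        image[iz, ix] = abs(np.sum(rec_field * np.exp(-1j * k * r)))

# The brightest pixel should fall near the simulated scatterer at (0.3, 2.0).
print(np.unravel_index(image.argmax(), image.shape))
```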

Posted on May 16, 2017 at 6:08 AM

"Proof Mode" for your Smartphone Camera

ProofMode is an app for your smartphone that adds data to the photos you take to prove that they are real and unaltered:

On the technical front, what the app is doing is automatically generating an OpenPGP key for this installed instance of the app itself, and using that to automatically sign all photos and videos at time of capture. A sha256 hash is also generated, and combined with a snapshot of all available device sensor data, such as GPS location, wifi and mobile networks, altitude, device language, hardware type, and more. This is also signed, and stored with the media. All of this happens with no noticeable impact on battery life or performance, every time the user takes a photo or video.

This doesn’t solve all the problems with fake photos, but it’s a good step in the right direction.
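For a sense of what that capture-and-sign step might look like, here is a minimal sketch in Python. It is only an illustration of the pattern described above: an Ed25519 key from the cryptography package stands in for ProofMode's per-install OpenPGP key, and the sensor snapshot is a made-up placeholder rather than the app's real metadata.

```python
# Minimal sketch of a ProofMode-style capture-and-sign pattern.
# Assumptions: Ed25519 (via the "cryptography" package) stands in for
# ProofMode's OpenPGP key, and the sensor snapshot is hypothetical.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One key is generated per app install and reused for every capture.
device_key = Ed25519PrivateKey.generate()

def sign_capture(media_bytes: bytes, sensor_snapshot: dict) -> dict:
    """Hash the media, bundle it with sensor metadata, and sign the bundle."""
    media_hash = hashlib.sha256(media_bytes).hexdigest()
    proof = {
        "media_sha256": media_hash,
        "sensors": sensor_snapshot,   # GPS, networks, altitude, language, etc.
    }
    proof_bytes = json.dumps(proof, sort_keys=True).encode()
    signature = device_key.sign(proof_bytes)
    return {"proof": proof, "signature": signature.hex()}

# Hypothetical usage: sign a photo along with whatever the sensors reported.
record = sign_capture(b"...jpeg bytes...", {"gps": [52.52, 13.40], "net": "wifi"})
print(record["proof"]["media_sha256"])
```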

Posted on March 1, 2017 at 6:02 AM

Eavesdropping by the Foscam Security Camera

Brian Krebs has a really weird story about the built-in eavesdropping by the Chinese-made Foscam security camera:

Imagine buying an internet-enabled surveillance camera, network attached storage device, or home automation gizmo, only to find that it secretly and constantly phones home to a vast peer-to-peer (P2P) network run by the Chinese manufacturer of the hardware. Now imagine that the geek gear you bought doesn’t actually let you block this P2P communication without some serious networking expertise or hardware surgery that few users would attempt.

Posted on February 24, 2016 at 12:05 PM

Automatic Face Recognition and Surveillance

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a police officer, and she will know your name, address, criminal record, and with whom you are routinely seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language, and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next few years, they’ll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks, or politicians, being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.
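To make concrete why the database is the key ingredient, here is a toy sketch of the matching step. It assumes a hypothetical upstream face-recognition model has already reduced each photo to an embedding vector; identification is then just a nearest-neighbor lookup against the tagged collection. Real systems are far more elaborate, but the shape of the problem is the same.

```python
# Toy sketch of matching a captured face against a database of tagged photos.
# Assumption: a hypothetical upstream face-recognition model has already
# converted each identified photo into a fixed-length embedding vector.
import numpy as np

tagged_db = {                       # name -> embedding from an identified photo
    "Alice Example": np.array([0.1, 0.9, 0.2]),
    "Bob Example":   np.array([0.8, 0.1, 0.3]),
}

def identify(captured: np.ndarray, threshold: float = 0.9) -> str | None:
    """Return the best-matching name, or None if nothing is similar enough."""
    best_name, best_score = None, -1.0
    for name, ref in tagged_db.items():
        score = float(np.dot(captured, ref) /
                      (np.linalg.norm(captured) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Hypothetical usage: a camera frame is embedded, then looked up by name.
print(identify(np.array([0.12, 0.88, 0.21])))
```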

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of whoever is standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more local police forces in the US are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies: services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and, most of all, fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies, most importantly the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.

EDITED TO ADD: Two articles that say much the same thing.

Posted on October 5, 2015 at 6:11 AM

Living in a Code Yellow World

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.

In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.

In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.

In Red you are in a lethal mode and will shoot if circumstances warrant.

Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.

Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.

I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long-term effects are debilitating.

The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.

Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.

Here’s an example from The Washington Post last year: “I was taking pictures of my daughters. A stranger thought I was exploiting them.” A father wrote about his run-in with an off-duty DHS agent, who interpreted an innocent family photoshoot as something nefarious and proceeded to harass and lecture the family. That the parents were white and the daughters Asian added a racist element to the encounter.

At the time, people wrote about this as an example of worst-case thinking, saying that as a DHS agent, “he’s paid to suspect the worst at all times and butt in.” While, yes, it was a “disturbing reminder of how the mantra of ‘see something, say something’ has muddied the waters of what constitutes suspicious activity,” I think there’s a deeper story here. The agent is trying to live his life in Yellowland, and it caused him to see predators where there weren’t any.

I call these “movie-plot threats,” scenarios that would make great action movies but that are implausible in real life. Yellowland is filled with them.

Last December, former DHS secretary Tom Ridge wrote about the security risks of building an NFL stadium near the Los Angeles Airport. His report is full of movie-plot threats, including terrorists shooting down a plane and crashing it into a stadium. His conclusion, that it is simply too dangerous to build a sports stadium within a few miles of the airport, is absurd. He’s been living too long in Yellowland.

That our brains aren’t built to live in Yellowland makes sense, because actual attacks are rare. The person walking towards you on the street isn’t an attacker. The person doing something unexpected over there isn’t a terrorist. Crashing an airplane into a sports stadium is more suitable to a Die Hard movie than real life. And the white man taking pictures of two Asian teenagers on a ferry isn’t a sex slaver. (I mean, really?)

Most of us, that DHS agent included, are complete amateurs at knowing the difference between something benign and something that’s actually dangerous. Combine this with the rarity of attacks, and you end up with an overwhelming number of false alarms. This is the ultimate problem with programs like “see something, say something.” They waste an enormous amount of time and money.

Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.

This essay previously appeared on Fusion.net.

EDITED TO ADD (9/25): UK student reading book on terrorism is accused of being a terrorist. He was reading the book for a class he was taking. I’ll let you guess his ethnicity.

Posted on September 24, 2015 at 11:39 AM
