Entries Tagged "cell phones"


SnapScouts

I sure hope this is a parody:

SnapScouts Keep America Safe!

Want to earn tons of cool badges and prizes while competing with your friends to see who can be the best American? Download the SnapScouts app for your Android phone (iPhone app coming soon) and get started patrolling your neighborhood.

It’s up to you to keep America safe! If you see something suspicious, Snap it! If you see someone who doesn’t belong, Snap it! Not sure if someone or something is suspicious? Snap it anyway!

Play with your friends and family to see who can get the best prizes. Join the SnapScouts today!

Posted on May 10, 2010 at 2:11 PM

Cory Doctorow Gets Phished

It can happen to anyone:

Here’s how I got fooled. On Monday, I unlocked my Nexus One phone, installing a new and more powerful version of the Android operating system that allowed me to do some neat tricks, like using the phone as a wireless modem on my laptop. In the process of reinstallation, I deleted all my stored passwords from the phone. I also had a couple of editorials come out that day, and did a couple of interviews, and generally emitted a pretty fair whack of information.

The next day, Tuesday, we were ten minutes late getting out of the house. My wife and I dropped my daughter off at the daycare, then hurried to our regular coffee shop to get take-outs before parting ways to go to our respective offices. Because we were a little late arriving, the line was longer than usual. My wife went off to read the free newspapers; I stood in the line. Bored, I opened up my phone, fired up my freshly reinstalled Twitter client, and saw that I had a direct message from an old friend in Seattle, someone I know through fandom. The message read “Is this you????” and was followed by one of those ubiquitous shortened URLs that consist of a domain and a short code, like this: http://owl.ly/iuefuew.

The whole story is worth reading.

Posted on May 7, 2010 at 6:56 AM

Cyber Shockwave Test

There was a big U.S. cyberattack exercise this week. We didn’t do so well:

In a press release issued today, the Bipartisan Policy Center (BPC)—which organized “Cyber Shockwave” using a group of former government officials and computer simulations—concluded the U.S. is “unprepared for cyber threats.”

[…]

…the U.S. defenders had difficulty identifying the source of the simulated attack, which in turn made it difficult to take action.

“During the exercise, a server hosting the attack appeared to be based in Russia,” said one report. “However, the developer of the malware program was actually in the Sudan. Ultimately, the source of the attack remained unclear during the event.”

The simulation envisioned an attack that unfolds during a single day in July 2011. When the council convenes to face this crisis, 20 million of the nation’s smartphones have already stopped working. The attack—the result of a malware program that had been planted in phones months earlier through a popular “March Madness” basketball bracket application—disrupts mobile service for millions. The attack escalates, shutting down an electronic energy trading platform and crippling the power grid on the Eastern seaboard.

This is, I think, an eyewitness report.

Posted on February 19, 2010 at 1:33 PM

Sprint Provides U.S. Law Enforcement with Cell Phone Customer Location Data

Wired summarizes research by Christopher Soghoian:

Sprint Nextel provided law enforcement agencies with customer location data more than 8 million times between September 2008 and October 2009, according to a company manager who disclosed the statistic at a non-public interception and wiretapping conference in October.

The manager also revealed the existence of a previously undisclosed web portal that Sprint provides law enforcement to conduct automated “pings” to track users. Through the website, authorized agents can type in a mobile phone number and obtain global positioning system (GPS) coordinates of the phone.

From Soghoian’s blog:

Sprint Nextel provided law enforcement agencies with its customers’ (GPS) location information over 8 million times between September 2008 and October 2009. This massive disclosure of sensitive customer information was made possible due to the roll-out by Sprint of a new, special web portal for law enforcement officers.

The evidence documenting this surveillance program comes in the form of an audio recording of Sprint’s Manager of Electronic Surveillance, who described it during a panel discussion at a wiretapping and interception industry conference, held in Washington DC in October of 2009.

It is unclear if Federal law enforcement agencies’ extensive collection of geolocation data should have been disclosed to Congress pursuant to a 1999 law that requires the publication of certain surveillance statistics—since the Department of Justice simply ignores the law, and has not provided the legally mandated reports to Congress since 2004.

Sprint denies this; details in the Wired article. The odds of us ever learning the truth are probably very low.

Posted on December 3, 2009 at 7:18 AM

Best Buy Sells Surveillance Tracker

Only $99.99:

Keep tabs on your child at all times with this small but sophisticated device that combines GPS and cellular technology to provide you with real-time location updates. The small and lightweight Little Buddy transmitter fits easily into a backpack, lunchbox or other receptacle, making it easy for your child to carry so you can check his or her location at any time using a smartphone or computer. Customizable safety checks allow you to establish specific times and locations where your child is supposed to be—for example, in school—causing the device to alert you with a text message if your child leaves the designated area during that time. Additional real-time alerts let you know when the device’s battery is running low so you can take steps to ensure your monitoring isn’t interrupted.

Presumably it can also be used to track people who aren’t your kids.

EDITED TO ADD (11/12): You can also use an iPhone as a tracking device.

Posted on October 28, 2009 at 1:28 PM

Inferring Friendship from Location Data

Interesting:

For nine months, Eagle’s team recorded data from the phones of 94 students and staff at MIT. By using Bluetooth technology and phone masts, they could monitor the movements of the participants, as well as their phone calls. Their main goal with this preliminary study was to compare data collected from the phones with subjective self-report data collected through traditional survey methodology.

The participants were asked to estimate their average spatial proximity to the other participants, whether they were close friends, and to indicate how satisfied they were at work.

Some intriguing findings emerged. For example, the researchers could predict with around 95 per cent accuracy who was friends with whom by looking at how much time participants spent with each other during key periods, such as Saturday nights.

According to the abstract:

Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.
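The paper’s actual model is richer than this, but the core idea (thresholding co-location counts during key periods such as Saturday nights) can be loosely sketched in Python. Everything below is invented for illustration: the names, the toy data, and the threshold.

```python
from collections import Counter

def infer_friendships(events, threshold=5):
    """Predict friend dyads from co-location counts during key periods.

    `events` is an iterable of (person_a, person_b, period) tuples,
    where `period` labels a time window such as "saturday_night".
    A dyad is predicted to be friends if it co-occurs during the key
    period at least `threshold` times. The function name, field names,
    and threshold are invented; the real study fits a statistical
    model over many temporal and spatial features.
    """
    counts = Counter()
    for a, b, period in events:
        if period == "saturday_night":  # the "key period" in this toy
            counts[frozenset((a, b))] += 1
    return {tuple(sorted(pair)) for pair, n in counts.items() if n >= threshold}

# Toy data: Alice and Bob meet most Saturday nights; Alice and Carol
# only cross paths during the week.
events = ([("alice", "bob", "saturday_night")] * 6
          + [("alice", "carol", "weekday")] * 20)
print(infer_friendships(events))
```

The point of the sketch is just that friendship is being read off behavioral regularities, not declared by the participants.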

We all leave data shadows everywhere we go, and maintaining privacy is very hard. Here’s the EFF writing about locational privacy.

EDITED TO ADD (10/12): More information.

Posted on September 21, 2009 at 1:41 PM

Building in Surveillance

China is the world’s most successful Internet censor. While the Great Firewall of China isn’t perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.

Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package. Ostensibly a pornography filter, it is government spyware that will watch every citizen on the Internet.

Green Dam has many uses. It can police a list of forbidden Web sites. It can monitor a user’s reading habits. It can even enlist the computer in some massive botnet attack, as part of a hypothetical future cyberwar.

China’s actions may be extreme, but they’re not unique. Democratic governments around the world—Sweden, Canada and the United Kingdom, for example—are rushing to pass laws giving their police new powers of Internet surveillance, in many cases requiring communications system providers to redesign products and services they sell.

Many are passing data retention laws, forcing companies to keep information on their customers. Just recently, the German government proposed giving itself the power to censor the Internet.

The United States is no exception. The 1994 CALEA law required phone companies to facilitate FBI eavesdropping, and since 2001, the NSA has built substantial eavesdropping systems in the United States. The government has repeatedly proposed Internet data retention laws, allowing surveillance into past activities as well as present.

Systems like this invite criminal appropriation and government abuse. New police powers, enacted to fight terrorism, are already used in situations of normal crime. Internet surveillance and control will be no different.

Official misuses are bad enough, but the unofficial uses worry me more. Any surveillance and control system must itself be secured. An infrastructure conducive to surveillance and control invites surveillance and control, both by the people you expect and by the people you don’t.

China’s government designed Green Dam for its own use, but it’s been subverted. Why does anyone think that criminals won’t be able to use it to steal bank account and credit card information, use it to launch other attacks, or turn it into a massive spam-sending botnet?

Why does anyone think that only authorized law enforcement will mine collected Internet data or eavesdrop on phone and IM conversations?

These risks are not theoretical. After 9/11, the National Security Agency built a surveillance infrastructure to eavesdrop on telephone calls and e-mails within the United States.

Although procedural rules stated that only non-Americans and international phone calls were to be listened to, actual practice didn’t always match those rules. NSA analysts collected more data than they were authorized to, and used the system to spy on wives, girlfriends, and famous people such as President Clinton.

But that’s not the most serious misuse of a telecommunications surveillance infrastructure. In Greece, between June 2004 and March 2005, someone wiretapped more than 100 cell phones belonging to members of the Greek government—the prime minister and the ministers of defense, foreign affairs and justice.

Ericsson built this wiretapping capability into Vodafone’s products, and enabled it only for governments that requested it. Greece wasn’t one of those governments, but someone still unknown—a rival political party? organized crime?—figured out how to surreptitiously turn the feature on.

Researchers have already found security flaws in Green Dam that would allow hackers to take over the computers. Of course there are additional flaws, and criminals are looking for them.

Surveillance infrastructure can be exported, which also aids totalitarianism around the world. Western companies like Siemens, Nokia, and Secure Computing built Iran’s surveillance infrastructure. U.S. companies helped build China’s electronic police state. Twitter’s anonymity saved the lives of Iranian dissidents—anonymity that many governments want to eliminate.

Every year brings more Internet censorship and control—not just in countries like China and Iran, but in the United States, the United Kingdom, Canada and other free countries.

The control movement is egged on by both law enforcement, trying to catch terrorists, child pornographers and other criminals, and by media companies, trying to stop file sharers.

It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state. No matter what the eavesdroppers and censors say, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in.

This essay previously appeared—albeit with fewer links—on the Minnesota Public Radio website.

Posted on August 3, 2009 at 6:43 AM

iPhone Encryption Useless

Interesting, although I want some more technical details.

…the new iPhone 3GS’ encryption feature is “broken” when it comes to protecting sensitive information such as credit card numbers and social-security digits, Zdziarski said.

Zdziarski said it’s just as easy to access a user’s private information on an iPhone 3GS as it was on the previous generation iPhone 3G or first generation iPhone, both of which didn’t feature encryption. If a thief got his hands on an iPhone, a little bit of free software is all that’s needed to tap into all of the user’s content. Live data can be extracted in as little as two minutes, and an entire raw disk image can be made in about 45 minutes, Zdziarski said.

Wondering where the encryption comes into play? It doesn’t. Strangely, once one begins extracting data from an iPhone 3GS, the iPhone begins to decrypt the data on its own, he said.

Posted on July 29, 2009 at 6:16 AM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
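As a loose illustration of the Stackelberg setup, here is a toy Python sketch: the defender commits to a randomized coverage of targets, and the attacker observes that commitment and best-responds. The targets, payoffs, and resource budget are all invented; the real ARMOR system solves a far larger optimization with the complications mentioned above.

```python
# Toy Stackelberg security game (all numbers invented for illustration).
targets = ["terminal_A", "terminal_B"]
attacker_reward = {"terminal_A": 10, "terminal_B": 4}   # payoff if target uncovered
attacker_penalty = {t: -1 for t in targets}             # payoff if caught
defender_loss = {"terminal_A": -10, "terminal_B": -4}   # loss if target hit uncovered
resources = 1.0   # total coverage probability to split across targets

def attacker_best_response(coverage):
    # The attacker observes the coverage and attacks the target
    # maximizing its expected payoff.
    def eu(t):
        c = coverage[t]
        return c * attacker_penalty[t] + (1 - c) * attacker_reward[t]
    return max(targets, key=eu)

def defender_utility(coverage):
    t = attacker_best_response(coverage)
    return (1 - coverage[t]) * defender_loss[t]   # no loss if the attack is caught

# Discretize the coverage of terminal_A; the remainder guards terminal_B.
best = max(
    ({"terminal_A": x / 100, "terminal_B": resources - x / 100}
     for x in range(101)),
    key=defender_utility,
)
print(best)
```

Notice that the defender’s optimum is a mixed strategy: putting everything on the high-value terminal just pushes a rational attacker to the other one, which is exactly the predictability problem the model is designed around.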

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were the victim of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.
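As a toy illustration of the kind of analysis involved (not Jakobsson’s actual data or method), correlating a binary fraud-victim flag against an invented risk-score column might look like this:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented toy data: 1 = was a victim of online fraud; the risk score
# might be a survey total over physical, financial, and Internet risks.
fraud_victim = [1, 1, 1, 0, 0, 0, 0, 1]
risk_score   = [7, 9, 6, 2, 3, 1, 4, 8]
print(round(pearson(fraud_victim, risk_score), 3))
```

A correlation on eight invented points proves nothing, of course, which mirrors the caveat above: more analysis, and probably more data, is required.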

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?
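Machine-style authentication, as she describes it, reduces to proving knowledge of a key. Here is a minimal challenge-response sketch of that idea; it uses an HMAC over a shared secret as a stand-in for a real public-key signature scheme, and all names are invented.

```python
import hashlib
import hmac
import os

# Machine-style authentication: identity is equated with knowledge of
# a key. This toy protocol has the verifier issue a random challenge
# and check the keyed response.
def make_challenge():
    return os.urandom(16)

def respond(secret_key, challenge):
    return hmac.new(secret_key, challenge, hashlib.sha256).digest()

def verify(secret_key, challenge, response):
    expected = respond(secret_key, challenge)
    return hmac.compare_digest(expected, response)

jakes_key = os.urandom(32)          # "Jake" is whoever holds this key
challenge = make_challenge()
assert verify(jakes_key, challenge, respond(jakes_key, challenge))
assert not verify(jakes_key, challenge, respond(os.urandom(32), challenge))
```

The contrast with human-style authentication is the point: the machine accepts anyone holding the key, while a human accepts anyone who looks and sounds like Jake, and each style fails in ways the other does not.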

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic: criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

