Entries Tagged "iPhone"


Recording the Police

I’ve written a lot on the “War on Photography,” where normal people are harassed as potential terrorists for taking pictures of things in public. This article is different; it’s about recording the police:

Allison’s predicament is an extreme example of a growing and disturbing trend. As citizens increase their scrutiny of law enforcement officials through technologies such as cell phones, miniature cameras, and devices that wirelessly connect to video-sharing sites such as YouTube and LiveLeak, the cops are increasingly fighting back with force and even jail time—and not just in Illinois. Police across the country are using decades-old wiretapping statutes that did not anticipate iPhones or Droids, combined with broadly written laws against obstructing or interfering with law enforcement, to arrest people who point microphones or video cameras at them. Even in the wake of gross injustices, state legislatures have largely neglected the issue. Meanwhile, technology is enabling the kind of widely distributed citizen documentation that until recently only spy novelists dreamed of. The result is a legal mess of outdated, loosely interpreted statutes and piecemeal court opinions that leave both cops and citizens unsure of when recording becomes a crime.

This is all important. Being able to record the police is one of the best ways to ensure that the police are held accountable for their actions. Privacy has to be viewed in the context of relative power. For example, the government has a lot more power than the people. So privacy for the government increases their power and increases the power imbalance between government and the people; it decreases liberty. Forced openness in government—open government laws, Freedom of Information Act filings, the recording of police officers and other government officials, WikiLeaks—reduces the power imbalance between government and the people, and increases liberty.

Privacy for the people increases their power. It also increases liberty, because it reduces the power imbalance between government and the people. Forced openness in the people—NSA monitoring of everyone’s phone calls and e-mails, the DOJ monitoring everyone’s credit card transactions, surveillance cameras—decreases liberty.

I think we need a law that explicitly makes it legal for people to record government officials when those officials are interacting with them in their official capacity. And this is doubly true for police officers and other law enforcement officials.

EDITED TO ADD: Anthony Graber, the Maryland motorcyclist in the article, had all the wiretapping charges cleared.

Posted on December 21, 2010 at 1:39 PM

Security in 2020

There’s really no such thing as security in the abstract. Security can only be defined in relation to something else. You’re secure from something or against something. In the next 10 years, the traditional definition of IT security—that it protects you from hackers, criminals, and other bad guys—will undergo a radical shift. Instead of protecting you from the bad guys, it will increasingly protect businesses and their business models from you.

Ten years ago, the big conceptual change in IT security was deperimeterization. A wordlike grouping of 18 letters with both a prefix and a suffix, it has to be the ugliest word our industry invented. The concept, though—the dissolution of the strict boundaries between the internal and external network—was both real and important.

There’s more deperimeterization today than there ever was. Customer and partner access, guest access, outsourced e-mail, VPNs; to the extent there is an organizational network boundary, it’s so full of holes that it’s sometimes easier to pretend it isn’t there. The most important change, though, is conceptual. We used to think of a network as a fortress, with the good guys on the inside and the bad guys on the outside, and walls and gates and guards to ensure that only the good guys got inside. Modern networks are more like cities, dynamic and complex entities with many different boundaries within them. The access, authorization, and trust relationships are even more complicated.

Today, two other conceptual changes matter. The first is consumerization. Another ponderous invented word, it’s the idea that consumers get the cool new gadgets first, and demand to do their work on them. Employees already have their laptops configured just the way they like them, and they don’t want another one just for getting through the corporate VPN. They’re already reading their mail on their BlackBerrys or iPads. They already have a home computer, and it’s cooler than the standard-issue IT department machine. Network administrators are increasingly losing control over clients.

This trend will only increase. Consumer devices will become trendier, cheaper, and more integrated; and younger people are already used to using their own stuff on their school networks. It’s a recapitulation of the PC revolution. The centralized computer center concept was shaken by people buying PCs to run VisiCalc; now it’s iPads and Android smart phones.

The second conceptual change comes from cloud computing: our increasing tendency to store our data elsewhere. Call it decentralization: our email, photos, books, music, and documents are stored somewhere, and accessible to us through our consumer devices. The younger you are, the more you expect to get your digital stuff on the closest screen available. This is an important trend, because it signals the end of the hardware and operating system battles we’ve all lived with. Windows vs. Mac doesn’t matter when all you need is a web browser. Computers become temporary; user backup becomes irrelevant. It’s all out there somewhere—and users are increasingly losing control over their data.

During the next 10 years, three new conceptual changes will emerge, two of which we can already see the beginnings of. The first I’ll call deconcentration. The general-purpose computer is dying and being replaced by special-purpose devices. Some of them, like the iPhone, seem general purpose but are strictly controlled by their providers. Others, like Internet-enabled game machines or digital cameras, are truly special purpose. In 10 years, most computers will be small, specialized, and ubiquitous.

Even on what are ostensibly general-purpose devices, we’re seeing more special-purpose applications. Sure, you could use the iPhone’s web browser to access the New York Times website, but it’s much easier to use the NYT’s special iPhone app. As computers become smaller and cheaper, this trend will only continue. It’ll be easier to use special-purpose hardware and software. And companies, wanting more control over their users’ experience, will push this trend.

The second is decustomerization—now I get to invent the really ugly words—the idea that we get more of our IT functionality without any business relationship. We’re all part of this trend: every search engine gives away its services in exchange for the ability to advertise. It’s not just Google and Bing; most webmail and social networking sites offer free basic service in exchange for advertising, possibly with premium services for money. Most websites, even useful ones that take the place of client software, are free; they are either run altruistically or to facilitate advertising.

Soon it will be hardware. In 1999, Internet startup FreePC tried to make money by giving away computers in exchange for the ability to monitor users’ surfing and purchasing habits. The company failed, but computers have only gotten cheaper since then. It won’t be long before giving away netbooks in exchange for advertising will be a viable business. Or giving away digital cameras. Already there are companies that give away long-distance minutes in exchange for advertising. Free cell phones aren’t far off. Of course, not all IT hardware will be free. Some of the new cool hardware will cost too much to be free, and there will always be a need for concentrated computing power close to the user—game systems are an obvious example—but those will be the exception. Where the hardware costs too much to just give away, however, we’ll see free or highly subsidized hardware in exchange for locked-in service; that’s already the way cell phones are sold.

This is important because it destroys what’s left of the normal business relationship between IT companies and their users. We’re not Google’s customers; we’re Google’s product that it sells to its customers. It’s a three-way relationship: us, the IT service provider, and the advertiser or data buyer. And as these noncustomer IT relationships proliferate, we’ll see more IT companies treating us as products. If I buy a Dell computer, then I’m obviously a Dell customer; but if I get a Dell computer for free in exchange for access to my life, it’s much less obvious whom I’m entering a business relationship with. Facebook’s continual ratcheting down of user privacy in order to satisfy its actual customers—the advertisers—and enhance its revenue is just a hint of what’s to come.

The third conceptual change I’ve termed depersonization: computing that removes the user, either partially or entirely. Expect to see more software agents: programs that do things on your behalf, such as prioritize your email based on your observed preferences or send you personalized sales announcements based on your past behavior. The “people who liked this also liked” feature on many retail websites is just the beginning. A website that alerts you if a plane ticket to your favorite destination drops below a certain price is simplistic but useful, and some sites already offer this functionality. Ten years won’t be enough time to solve the serious artificial intelligence problems required to fully realize intelligent agents, but the agents of that time will be both sophisticated and commonplace, and they’ll need less direct input from you.
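
That kind of agent is simple enough to sketch even today. Here is a toy fare watcher; the endpoint and its JSON shape are invented for illustration, since a real agent would use whatever API the fare site actually exposes:

```python
import json
import time
import urllib.request

# Toy fare-watching agent. FARE_URL and the "price" field are
# hypothetical, standing in for a real fare-tracking API.
FARE_URL = "https://fares.example/api/price?route=MSP-LHR"  # hypothetical
THRESHOLD = 400.0  # alert when the fare drops below this

def current_price() -> float:
    with urllib.request.urlopen(FARE_URL) as resp:
        return float(json.load(resp)["price"])

def watch(poll_seconds: int = 3600) -> None:
    while True:
        if current_price() < THRESHOLD:
            print("Fare alert: ticket below threshold.")
            return
        time.sleep(poll_seconds)  # check again in an hour
```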

Similarly, connecting objects to the Internet will soon be cheap enough to be viable. There’s already considerable research into Internet-enabled medical devices, smart power grids that communicate with smart phones, and networked automobiles. Nike sneakers can already communicate with your iPhone. Your phone already tells the network where you are. Internet-enabled appliances are already in limited use, but soon they will be the norm. Businesses will acquire smart HVAC units, smart elevators, and smart inventory systems. And, as short-range communications—like RFID and Bluetooth—become cheaper, everything becomes smart.

The “Internet of things” won’t need you in order to communicate. The smart appliances in your smart home will talk directly to the power company. Your smart car will talk to road sensors and, eventually, other cars. Your clothes will talk to your dry cleaner. Your phone will talk to vending machines; phones already do in some countries. The ramifications of this are hard to imagine; it’s likely to be weirder and less orderly than the contemporary press describes it. But certainly smart objects will be talking about you, and you probably won’t have much control over what they’re saying.

One old trend: deperimeterization. Two current trends: consumerization and decentralization. Three future trends: deconcentration, decustomerization, and depersonization. That’s IT in 2020—it’s not under your control, it’s doing things without your knowledge and consent, and it’s not necessarily acting in your best interests. And this is how things will be when they’re working as they’re intended to work; I haven’t even started talking about the bad guys yet.

That’s because IT security in 2020 will be less about protecting you from traditional bad guys, and more about protecting corporate business models from you. Deperimeterization assumes everyone is untrusted until proven otherwise. Consumerization requires networks to assume all user devices are untrustworthy until proven otherwise. Decentralization and deconcentration won’t work if you’re able to hack the devices to run unauthorized software or access unauthorized data. Decustomerization won’t be viable unless you’re unable to bypass the ads, or whatever the vendor uses to monetize you. And depersonization requires the autonomous devices to be, well, autonomous.

In 2020—10 years from now—Moore’s Law predicts that computers will be 100 times more powerful (doubling roughly every 18 months compounds to about 2^6.7, or roughly 100, over a decade). That’ll change things in ways we can’t know, but we do know that human nature never changes. Cory Doctorow rightly pointed out that all complex ecosystems have parasites. Society’s traditional parasites are criminals, but a broader definition makes more sense here. As we users lose control of those systems and IT providers gain control for their own purposes, the definition of “parasite” will shift. Whether they’re criminals trying to drain your bank account, movie watchers trying to bypass whatever copy protection studios are using to protect their profits, or Facebook users trying to use the service without giving up their privacy or being forced to watch ads, parasites will continue to try to take advantage of IT systems. They’ll exist, just as they always have existed, and, like today, security is going to have a hard time keeping up with them.

Welcome to the future. Companies will use technical security measures, backed up by legal security measures, to protect their business models. And unless you’re a model user, the parasite will be you.

This essay was originally written as a foreword to Security 2020, by Doug Howard and Kevin Prince.

Posted on December 16, 2010 at 6:27 AM

Apple JailBreakMe Vulnerability

Good information from Mikko Hyppönen.

Q: What is this all about?
A: It’s about a site called jailbreakme.com that enables you to Jailbreak your iPhones and iPads just by visiting the site.

Q: So what’s the problem?
A: The problem is that the site uses a zero-day vulnerability to execute code on the device.

Q: How does the vulnerability work?
A: Actually, it’s two vulnerabilities. First one uses a corrupted font embedded in a PDF file to execute code and the second one uses a vulnerability in the kernel to escalate the code execution to unsandboxed root.

Q: How difficult was it to create this exploit?
A: Very difficult.

Q: How difficult would it be for someone else to modify the exploit now that it’s out?
A: Quite easy.
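
The delivery vehicle, a malformed font embedded in a PDF, is at least easy to triage for. Here is a crude sketch that flags PDFs carrying embedded font programs; it’s a heuristic, not a detector for this particular exploit, and because it scans raw bytes it misses fonts hidden inside compressed object streams:

```python
import re
import sys

# Flag PDFs containing embedded font programs (/FontFile, /FontFile2,
# /FontFile3 entries in font descriptors). Naive by design.
FONT_KEY = re.compile(rb"/FontFile[23]?\b")

def has_embedded_fonts(path: str) -> bool:
    with open(path, "rb") as f:
        return bool(FONT_KEY.search(f.read()))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "embedded fonts" if has_embedded_fonts(path) else "none found"
        print(f"{path}: {verdict}")
```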

Here’s the JailBreakMe blog.

EDITED TO ADD (8/14): Apple has released a patch. It doesn’t help people with older-model iPhones and iPod Touches, nor does it work for people who’ve jailbroken their phones.

EDITED TO ADD (8/15): More info.

Posted on August 10, 2010 at 12:12 PM

Remote Printing to an E-Mail Address

This is cool technology from HP:

Each printer with the ePrint capability will be assigned its own e-mail address. If someone wants to print a document from an iPhone, the document will go to HP’s data center, where it is rendered into the correct format, and then sent to the person’s printer. The process takes about 25 seconds.
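
Mechanically, the sending half of this is nothing more than e-mail with an attachment. A minimal sketch; the printer address and SMTP relay are placeholders, not real HP endpoints:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical ePrint-style flow: the "print job" is an ordinary
# e-mail with the document attached.
PRINTER_ADDR = "my-printer@eprint.example"  # placeholder address

def print_remotely(pdf_path: str, smtp_host: str, sender: str) -> None:
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = PRINTER_ADDR
    msg["Subject"] = "print job"
    with open(pdf_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename=pdf_path)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```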

Maybe this feature was designed with robust security, but I’m not betting on it. The first people to hack the system will certainly be spammers. (For years I’ve gotten more spam on my fax machine than legitimate faxes.) And why would HP fix the spam problem when spam will just help it sell overpriced ink cartridges faster?

Any other illegitimate uses for this technology?

EDITED TO ADD (7/13): Location-sensitive advertising to your printer.

Posted on June 18, 2010 at 1:37 PM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant access with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
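
A minimal sketch of the idea (mine, not the authors’ code): the platform owns the sensor, and samples reach the application only through a widget the user can see and toggle:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SensorAccessWidget:
    sensor_name: str
    allowed: bool = False  # the user-controlled switch
    subscribers: List[Callable[[bytes], None]] = field(default_factory=list)

    def toggle(self) -> None:
        """Control point: the user grants or revokes access here."""
        self.allowed = not self.allowed

    def on_sample(self, sample: bytes) -> None:
        """Called by the platform for every raw sensor sample."""
        self.render(sample)   # perpetual disclosure: always shown on screen
        if self.allowed:      # data flows to the app only if granted
            for deliver in self.subscribers:
                deliver(sample)

    def render(self, sample: bytes) -> None:
        # Stand-in for the animated on-screen representation.
        state = "LIVE" if self.allowed else "blocked"
        print(f"[{self.sensor_name}] {state}: {len(sample)} bytes")
```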

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone 4.0 OS.

Posted on May 24, 2010 at 7:32 AM

SnapScouts

I sure hope this is a parody:

SnapScouts Keep America Safe!

Want to earn tons of cool badges and prizes while competing with your friends to see who can be the best American? Download the SnapScouts app for your Android phone (iPhone app coming soon) and get started patrolling your neighborhood.

It’s up to you to keep America safe! If you see something suspicious, Snap it! If you see someone who doesn’t belong, Snap it! Not sure if someone or something is suspicious? Snap it anyway!

Play with your friends and family to see who can get the best prizes. Join the SnapScouts today!

Posted on May 10, 2010 at 2:11 PM

Punishing Security Breaches

The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions and publish them all. They spiked the story, but I had already written my piece. So here it is.

In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.

If Powell sneaked the phone out of the Apple building in a flagrant violation of the rules—maybe he wanted to show it to a friend—he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top-secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn’t the military, of course, but if its corporate security policy is that strict, it may very well have rules like that. And the only way to ensure rules are followed is by enforcing them, and that means severe disciplinary action against those who bypass the rules.

Even if Powell had authorization to take the phone out of Apple’s labs—presumably someone has to test-drive the new toys sooner or later—the corporate rules might have required him to pay attention to it at all times. We’ve all heard of military attachés who carry briefcases chained to their wrists. It’s an extreme example, but it demonstrates how a security policy can allow for objects to move around town—or around the world—without getting lost. Apple almost certainly doesn’t have a policy as rigid as that, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple’s rules and Powell’s violation were both that clear, Apple should enforce them.

On the other hand, if Apple doesn’t have clear-cut rules, if Powell wasn’t prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and—as long as nothing bad happens—no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it’s quickly jettisoned when there’s an important job to be done. A common example is passwords: people aren’t supposed to share them, unless it’s really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.

Apple’s unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he’s a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.

Posted on April 26, 2010 at 7:20 AM

Best Buy Sells Surveillance Tracker

Only $99.99:

Keep tabs on your child at all times with this small but sophisticated device that combines GPS and cellular technology to provide you with real-time location updates. The small and lightweight Little Buddy transmitter fits easily into a backpack, lunchbox or other receptacle, making it easy for your child to carry so you can check his or her location at any time using a smartphone or computer. Customizable safety checks allow you to establish specific times and locations where your child is supposed to be—for example, in school—causing the device to alert you with a text message if your child leaves the designated area during that time. Additional real-time alerts let you know when the device’s battery is running low so you can take steps to ensure your monitoring isn’t interrupted.

Presumably it can also be used to track people who aren’t your kids.
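
The “customizable safety checks” are just a geofence plus a time window. A sketch of the core check, with made-up coordinates:

```python
import math
from datetime import datetime, time

SCHOOL = (44.9778, -93.2650)        # designated location (illustrative)
RADIUS_M = 300                      # allowed wander radius, meters
WINDOW = (time(8, 0), time(15, 0))  # when the child should be there

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(now: datetime, lat: float, lon: float) -> bool:
    # Text the parent if the tracker leaves the zone during the window.
    in_window = WINDOW[0] <= now.time() <= WINDOW[1]
    outside = haversine_m(lat, lon, *SCHOOL) > RADIUS_M
    return in_window and outside
```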

EDITED TO ADD (11/12): You can also use an iPhone as a tracking device.

Posted on October 28, 2009 at 1:28 PM

File Deletion

File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care about whether the file could be recovered or not, and a file erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.
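
There’s no magic in a file-erase program: the classic approach overwrites the file’s blocks before unlinking it. A naive sketch in that spirit; note that on SSDs and on journaling or copy-on-write file systems, old copies of the blocks can survive anyway:

```python
import os

# Overwrite-then-delete, in the spirit of tools like BCWipe.
# Illustrative only: it assumes overwriting the file in place
# actually overwrites the same disk blocks.
def wipe(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b", buffering=0) as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random data over the old contents
            os.fsync(f.fileno())        # push the overwrite to disk
    os.remove(path)
```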

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
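
The key-splitting step is easy to illustrate. Vanish proper uses threshold secret sharing, so only a subset of the shares is needed for recovery; this toy version shows the simpler all-or-nothing XOR variant, where losing any single share destroys the key:

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    # n-1 random shares, plus one chosen so all n XOR back to the key.
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))
    return shares

def recover_key(shares: list) -> bytes:
    return reduce(xor, shares)

key = os.urandom(32)               # the data-encryption key
shares = split_key(key, 10)        # these would be scattered into the DHT
assert recover_key(shares) == key  # possible only while ALL shares survive
# As churn erases any one share, the key (and so the data) vanishes.
```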

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost control of the data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM
