Entries Tagged "Amazon"
In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.
The particulars of the case are weird. Amazon’s Echo does not constantly record; it only listens for its name. So it’s unclear that there is any evidence to be turned over. But this general issue isn’t going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.
Related: A newscaster discussed Amazon’s Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products. This year, the same technology is coming to LG appliances such as refrigerators.
Kindle Unlimited is an all-you-can-read service. You pay one price and can read anything that’s in the program. Amazon pays authors out of a fixed pool, on the basis of how many people read their books. More interestingly, it pays by the page. An author makes more money if a reader gets through to page 200 than if they give up at page 50, and even more if they make it to the end. This makes sense; it doesn’t pay authors for books people download but never read, or read the first few pages of and then abandon.
This payment structure requires surveillance, and the Kindle does watch people as they read. The problem is that the Kindle doesn’t know whether the reader actually reads the book—only what page they’re on. So Kindle Unlimited records the furthest page the reader has synced, and pays based on that.
This opens up the possibility for fraud. If an author can create a thousand-page book and trick the reader into reading page 1,000, he gets paid the maximum. Scam authors are doing this through a variety of tricks.
What’s interesting is that while Amazon is definitely concerned about this kind of fraud, it doesn’t affect its bottom line. The fixed payment pool doesn’t change; just who gets how much of it does.
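Under some simplifying assumptions (a single monthly pool, payment strictly proportional to the furthest page each reader synced), the scheme described above can be sketched in a few lines. The function name and the dollar figures are hypothetical, not Amazon’s actual numbers:

```python
def payouts(pool, furthest_pages):
    """Split a fixed payment pool among authors in proportion to
    the furthest page each reader synced in that author's books."""
    total = sum(sum(pages) for pages in furthest_pages.values())
    return {author: pool * sum(pages) / total
            for author, pages in furthest_pages.items()}

# Hypothetical month: a $100 pool and two authors.
reads = {
    "alice": [200, 50],   # one reader reached p. 200, another gave up at p. 50
    "scammer": [1000],    # a padded 1,000-page book whose reader "jumped" to the end
}
print(payouts(100.0, reads))  # {'alice': 20.0, 'scammer': 80.0}
```

The example also shows why the page-1,000 trick works: a single fake “read” of a padded thousand-page book outweighs several genuine readers, without changing the size of the pool itself.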
EDITED TO ADD: John Scalzi comments.
In theory, the Internet of Things—the connected network of tiny computers inside home appliances, household objects, even clothing—promises to make your life easier and your work more efficient. These computers will communicate with each other and the Internet in homes and public spaces, collecting data about their environment and making changes based on the information they receive. In theory, connected sensors will anticipate your needs, saving you time, money, and energy.
Except when the companies that make these connected objects act in a way that runs counter to the consumer’s best interests—as the technology company Philips did recently with its smart ambient-lighting system, Hue, which consists of a central controller that can remotely communicate with light bulbs. In mid-December, the company pushed out a software update that made the system incompatible with some other manufacturers’ light bulbs, including bulbs that had previously been supported.
The complaints began rolling in almost immediately. The Hue system was supposed to be compatible with an industry standard called ZigBee, but the bulbs that Philips cut off were ZigBee-compliant. Philips backed down and restored compatibility a few days later.
But the story of the Hue debacle—the story of a company using copy protection technology to lock out competitors—isn’t a new one. Plenty of companies set up proprietary standards to ensure that their customers don’t use someone else’s products with theirs. Keurig, for example, puts codes on its single-cup coffee pods, and engineers its coffeemakers to work only with those codes. HP has done the same thing with its printers and ink cartridges.
To stop competitors from simply reverse-engineering the proprietary standard and making compatible peripherals (for example, another coffee manufacturer putting Keurig’s codes on its own pods), these companies rely on a 1998 law called the Digital Millennium Copyright Act (DMCA). The law was originally passed to prevent people from pirating music and movies; while it hasn’t done a lot of good in that regard (as anyone who uses BitTorrent can attest), it has done a lot to inhibit security and compatibility research.
Specifically, the DMCA includes an anti-circumvention provision, which prohibits companies from circumventing “technological protection measures” that “effectively control access” to copyrighted works. That means it’s illegal for someone to create a Hue-compatible light bulb without Philips’ permission, a K-cup-compatible coffee pod without Keurig’s, or an HP-printer-compatible cartridge without HP’s.
By now, we’re used to this in the computer world. In the 1990s, Microsoft used a strategy it called “embrace, extend, extinguish,” in which it gradually added proprietary capabilities to products that already adhered to widely used standards. Some more recent examples: Amazon’s e-book format doesn’t work on other companies’ readers, music purchased from Apple’s iTunes store doesn’t work with other music players, and every game console has its own proprietary game cartridge format.
Because companies can enforce anti-competitive behavior this way, there’s a litany of things that just don’t exist, even though they would make life easier for consumers in significant ways. You can’t have custom software for your cochlear implant, or your programmable thermostat, or your computer-enabled Barbie doll. An auto repair shop can’t design a better diagnostic system that interfaces with a car’s computers. And John Deere has claimed that it owns the software on all of its tractors, meaning the farmers that purchase them are prohibited from repairing or modifying their property.
As the Internet of Things becomes more prevalent, so too will this kind of anti-competitive behavior—which undercuts the purpose of having smart objects in the first place. We’ll want our light bulbs to communicate with a central controller, regardless of manufacturer. We’ll want our clothes to communicate with our washing machines and our cars to communicate with traffic signs.
We can’t have this when companies can cut off compatible products, or use the law to prevent competitors from reverse-engineering their products to ensure compatibility across brands. For the Internet of Things to provide any value, what we need is a world that looks like the automotive industry, where you can go to a store and buy replacement parts made by a wide variety of different manufacturers. Instead, the Internet of Things is on track to become a battleground of competing standards, as companies try to build monopolies by locking each other out.
This essay previously appeared on TheAtlantic.com.
EDITED TO ADD (1/5): Interesting commentary.
On Thursday, a Brazilian judge ordered the text messaging service WhatsApp shut down for 48 hours. It was a monumental action.
WhatsApp is the most popular app in Brazil, used by about 100 million people. The Brazilian telecoms hate the service because it entices people away from more expensive text messaging services, and they have been lobbying for months to convince the government that it’s unregulated and illegal. A judge finally agreed.
In Brazil’s case, WhatsApp was blocked for allegedly failing to respond to a court order. Another judge reversed the ban 12 hours later, but there is a pattern forming here. In Egypt, Vodafone has complained about the legality of WhatsApp’s free voice-calls, while India’s telecoms firms have been lobbying hard to curb messaging apps such as WhatsApp and Viber. Earlier this year, the United Arab Emirates blocked WhatsApp’s free voice call feature.
All this is part of a massive power struggle going on right now between traditional companies and new Internet companies, and we’re all in the blast radius.
It’s one aspect of a tech policy problem that has been plaguing us for at least 25 years: technologists and policymakers don’t understand each other, and they inflict damage on society because of that. But it’s worse today. The speed of technological progress makes it worse. And the types of technology—especially the current Internet of mobile devices everywhere, cloud computing, always-on connections and the Internet of Things—make it worse.
The Internet has been disrupting and destroying long-standing business models since its popularization in the mid-1990s. And traditional industries have long fought back with every tool at their disposal. The movie and music industries have tried for decades to hamstring computers in an effort to prevent illegal copying of their products. Publishers have battled with Google over whether their books could be indexed for online searching.
More recently, municipal taxi companies and large hotel chains are fighting with ride-sharing companies such as Uber and apartment-sharing companies such as Airbnb. Both the old companies and the new upstarts have tried to bend laws to their will in an effort to outmaneuver each other.
Sometimes the actions of these companies harm the users of these systems and services. And the results can seem crazy. Why would the Brazilian telecoms want to provoke the ire of almost everyone in the country? They’re trying to protect their monopoly. If they can shut down not just WhatsApp but also Telegram and all the other text-messaging services, their customers will have no choice. That’s how high the stakes are in these battles.
This isn’t just companies competing in the marketplace. These are battles between competing visions of how technology should apply to business, and traditional businesses and “disruptive” new businesses. The fundamental problem is that technology and law are in conflict, and what’s worked in the past is increasingly failing today.
First, the speeds of technology and law have reversed. Traditionally, new technologies were adopted slowly over decades. There was time for people to figure them out, and for their social repercussions to percolate through society. Legislatures and courts had time to figure out rules for these technologies and how they should integrate into the existing legal structures.
They don’t always get it right—the sad history of copyright law in the United States is an example of how they can get it badly wrong again and again—but at least they had a chance before the technologies became widely adopted.
That’s just not true anymore. A new technology can go from zero to a hundred million users in a year or less. That’s just too fast for the political or legal process. By the time they’re asked to make rules, these technologies are well-entrenched in society.
Second, the technologies have become more complicated and specialized. This means that the normal system of legislators passing laws, regulators making rules based on those laws and courts providing a second check on those rules fails. None of these people has the expertise necessary to understand these technologies, let alone the subtle and potentially pernicious ramifications of any rules they make.
We see the same gap in government, law enforcement, and the military. In the United States, we’re expecting policymakers to understand the debate between the FBI’s desire to read the encrypted e-mails and computers of crime suspects and the security researchers who maintain that giving them that capability will render everyone insecure. We’re expecting legislators to provide meaningful oversight over the National Security Agency, when they can only read highly technical documents about the agency’s activities in special rooms and without any aides who might be conversant in the issues.
The result is that we end up in situations such as the one Brazil finds itself in. WhatsApp went from zero to 100 million users in five years. The telecoms are advancing all sorts of weird legal arguments to get the service banned, and judges are ill-equipped to separate fact from fiction.
This isn’t a simple matter of needing government to get out of the way and let companies battle in the marketplace. These companies are for-profit entities, and their business models are so complicated that they regularly don’t do what’s best for their users. (For example, remember that you’re not really Facebook’s customer. You’re their product.)
The fact that people’s resumes are effectively the first 10 hits on a Google search of their name is a problem—something that the European “right to be forgotten” tried ham-fistedly to address. There’s a lot of smart writing arguing that Uber’s disruption of traditional taxis will be worse for the people who regularly use the services. And many people worry about Amazon’s increasing dominance of the publishing industry.
We need a better way of regulating new technologies.
That’s going to require bridging the gap between technologists and policymakers. Each needs to understand the other—not enough to be experts in each other’s fields, but enough to engage in meaningful conversations and debates. That’s also going to require laws that are agile and written to be as technologically invariant as possible.
It’s a tall order, I know, and one that has been on the wish list of every tech policymaker for decades. But today the stakes are higher and the issues come faster; failing to bridge this gap will become increasingly harmful for all of us.
This essay originally appeared on CNN.com.
EDITED TO ADD (12/23): Slashdot thread.
A worker in Amazon’s packaging department in India figured out how to deliver electronics to himself:
Since he was employed with the packaging department, he had easy access to order numbers. “Using the order numbers, he packed his order himself; but instead of putting pressure cookers in the box, he stuffed it with iPhones, iPads, watches, cameras, and other expensive electronics in the pressure cooker box. Before dispatching the order, the godown also has a mechanism to weigh the package. To dodge this, Bhamble stuffed equipment of equivalent weight,” an officer from Vithalwadi police station said. Bhamble confessed to the cops that he had ordered pressure cookers thrice in the last 15 days. After he placed the order, instead of, say, packing a five-kg pressure cooker, he would stuff gadgets of equivalent weight. After receiving delivery clearance, he would then deliver the goods himself and store them at his house. Speaking to mid-day, Deputy Commissioner of Police (Zone IV) Vasant Jadhav said, “Bhamble’s job profile was of goods packaging at Amazon.com’s warehouse in Bhiwandi.”
This is an interesting story of a reviewer who had her review deleted because Amazon believed she knew the author personally.
Leaving completely aside the ethics of friends reviewing friends’ books, what is Amazon doing conducting this kind of investigative surveillance? Do reviewers know that Amazon is keeping tabs on who their friends are?
Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.
If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.
Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.
Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.
Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, Chromebooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.
The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.
There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.
There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.
To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.
The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.
It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.
So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.
On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.
In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.
We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.
This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.
EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.
The Finnish phone giant Nokia has since admitted that it decrypts secure data that passes through HTTPS connections—including social networking accounts, online banking, email and other secure sessions—in order to compress the data and speed up the loading of Web pages.
The basic problem is that HTTPS sessions are opaque as they travel through the network. That’s the point—it’s more secure—but it also means the network can’t do anything with them. They can’t be compressed, cached, or otherwise optimized. They can’t be rendered remotely. They can’t be inspected for security vulnerabilities. All the network can do is transmit the data back and forth.
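The opacity claim is easy to demonstrate: well-encrypted bytes are indistinguishable from random noise, so in-network compression gains nothing, while the same content in the clear compresses dramatically. A minimal sketch, using random bytes as a stand-in for a TLS-encrypted payload:

```python
import os
import zlib

# A repetitive web page, as the network would see it over plain HTTP.
plaintext = b"<html>" + b"<p>hello world</p>" * 500 + b"</html>"

# Stand-in for the same page over HTTPS: ciphertext looks random to the network.
ciphertext = os.urandom(len(plaintext))

print(len(plaintext), len(zlib.compress(plaintext)))    # shrinks dramatically
print(len(ciphertext), len(zlib.compress(ciphertext)))  # no smaller at all
```

This is why a middlebox that wants to compress (or cache, or scan) HTTPS traffic has no option but to terminate the TLS session itself and re-encrypt—the man-in-the-middle approach described below.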
But in our cloud-centric world, it makes more and more sense to process web data in the cloud. Nokia isn’t alone here. Opera’s mobile browser performs all sorts of optimizations on web pages before they are sent over the air to your smartphone. Amazon does the same thing with browsing on the Kindle. MobileScope, a really good smartphone security application, performs the same sort of man-in-the-middle attack against HTTPS sessions to detect and prevent data leakage. I think Umbrella does as well. Nokia’s mistake was that it did all this without telling anyone. With appropriate consent, it’s perfectly reasonable for most people and organizations to give performance and security companies the ability to decrypt and re-encrypt HTTPS sessions—at least most of the time.
This is an area where security concerns are butting up against other issues. Nokia’s answer, which is basically “trust us, we’re not looking at your data,” is going to increasingly be the norm.