Entries Tagged "Uber"

NTSB Investigation of Fatal Driverless Car Accident

Autonomous systems are going to have to do much better than this.

The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week.

But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.

The details of what happened in the seconds before the collision are worth reading. They describe a cascading series of issues that led to the collision and the fatality.

As computers continue to become part of things, and affect the world in a direct physical manner, this kind of thing will become even more important.

Posted on November 13, 2019 at 6:16 AM

Uber Data Hack

Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:

The two hackers stole data about the company’s riders and drivers — including phone numbers, email addresses and names — from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.

Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a “bug bounty” — a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.

And almost certainly illegal:

While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.

By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.

The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers’ data. If the stolen data was not encrypted, Uber would have been required by California state law to disclose that driver’s license data from its drivers had been stolen in the course of the hacking.

Posted on November 27, 2017 at 9:13 AM

Uber Drivers Hacking the System to Cause Surge Pricing

Interesting story about Uber drivers who have figured out how to game the company’s algorithms to cause surge pricing:

According to the study, drivers manipulate Uber’s algorithm by logging out of the app at the same time, making it think that there is a shortage of cars.

[…]

The study said drivers have been coordinating forced surge pricing, after interviews with drivers in London and New York, and research on online forums such as Uberpeople.net. In a post on the website for drivers, seen by the researchers, one person said: “Guys, stay logged off until surge. Less supply high demand = surge.”
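
The dynamic the drivers exploit is simple supply-and-demand arithmetic, and a toy model makes it concrete. This is an illustrative sketch, not Uber’s actual pricing algorithm; the formula, the cap, and all the numbers are hypothetical:

```python
def surge_multiplier(active_drivers: int, ride_requests: int) -> float:
    """Toy surge model: the multiplier rises as demand outstrips
    supply, floored at 1.0 (no surge) and capped at 3.0.
    A hypothetical formula, not Uber's."""
    if active_drivers == 0:
        return 3.0
    return min(3.0, max(1.0, ride_requests / active_drivers))

# Supply roughly matches demand: no surge.
print(surge_multiplier(active_drivers=100, ride_requests=90))  # 1.0

# The same demand after a coordinated mass logout: surge kicks in.
print(surge_multiplier(active_drivers=40, ride_requests=90))   # 2.25
```

Drivers who log out together shrink the denominator, and the system reads the manufactured shortage as real demand pressure.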


Passengers, of course, have long had tricks to avoid surge pricing.

I expect to see more of this sort of thing as algorithms become more prominent in our lives.

Posted on August 8, 2017 at 9:35 AM

Uber Uses Ubiquitous Surveillance to Identify and Block Regulators

The New York Times reports that Uber developed apps that identified and blocked government regulators using the app to find evidence of illegal behavior:

Yet using its app to identify and sidestep authorities in places where regulators said the company was breaking the law goes further in skirting ethical lines — and potentially legal ones, too. Inside Uber, some of those who knew about the VTOS program and how the Greyball tool was being used were troubled by it.

[…]

One method involved drawing a digital perimeter, or “geofence,” around authorities’ offices on a digital map of the city that Uber monitored. The company watched which people frequently opened and closed the app — a process internally called “eyeballing” — around that location, which signified that the user might be associated with city agencies.

Other techniques included looking at the user’s credit card information and whether that card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations to catch Uber drivers also sometimes bought dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees went to that city’s local electronics stores to look up device numbers of the cheapest mobile phones on sale, which were often the ones bought by city officials, whose budgets were not sizable.

In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were new riders or very likely city officials.

If those clues were not enough to confirm a user’s identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read Greyball followed by a string of numbers.
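
The quoted techniques — geofencing government offices, checking card affiliations, flagging cheap burner phones — amount to combining weak signals into a classification. A minimal sketch of how such signals might be combined (the geofence math is standard haversine distance; the signal names and the two-signal threshold are hypothetical, not Uber’s):

```python
import math

def within_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if (lat, lon) falls inside a circular geofence of
    radius_m meters, using the haversine great-circle distance."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlam = math.radians(center_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m

def looks_like_enforcement(signals: dict) -> bool:
    """Tag a user when at least two independent signals fire.
    The signal names and threshold are illustrative."""
    return sum(1 for fired in signals.values() if fired) >= 2

signals = {
    # Repeatedly opens the app within 100 m of a regulator's office.
    "eyeballing_inside_geofence": within_geofence(
        42.3317, -83.0459, 42.3314, -83.0458, radius_m=100),
    "card_tied_to_police_credit_union": True,
    "cheap_burner_phone_model": False,
}
print(looks_like_enforcement(signals))  # True
```

No single signal is conclusive; the tagging decision comes from how many fire at once, which matches the article’s “at least a dozen or so signifiers.”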

When Edward Snowden exposed the fact that the NSA does this sort of thing, I commented that the technology would eventually become cheap enough for corporations to do the same. Now it has.

One discussion we need to have is whether or not this behavior is legal. But another, more important, discussion is whether or not it is ethical. Do we want to live in a society where corporations wield this sort of power against government? Against individuals? Because if we don’t align government against this kind of behavior, it’ll become the norm.

Posted on March 6, 2017 at 6:24 AM

Replacing Judgment with Algorithms

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control — but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.

Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data — and more algorithms — we need to ensure that we reap the benefits while avoiding the harms.

Right now, the Chinese government is watching how companies use “social credit” scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it’s much more than a financial scoring system.

Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors abound about how this system works. Various news sites are speculating that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square. Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem overblown, but they could certainly come to pass in the future.

This story has spread because it’s just the sort of behavior you’d expect from the authoritarian government in China. But there’s little about the scoring systems used by Sesame Credit that’s unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it’s often the byproduct. And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well.

Sesame Credit is largely based on a US system called FICO. That’s the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you’re offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you’ve been at repaying your debt, how long your credit history is and so on.
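
Since the real algorithm is secret, the best we can do is a toy illustration. The sketch below scores the publicly known factor categories — payment history, amounts owed, length of history — with made-up weights on the familiar 300–850 scale; it is emphatically not the FICO formula:

```python
def toy_credit_score(utilization: float, on_time_rate: float,
                     history_years: float) -> int:
    """Toy score on the familiar 300-850 scale. The weights loosely
    echo the publicly described factor categories (payment history,
    amounts owed, length of history); the remaining 20% is folded
    into a fixed base. Entirely hypothetical -- not FICO's formula."""
    payment = on_time_rate                    # fraction of bills paid on time
    owed = 1.0 - min(utilization, 1.0)        # lower utilization is better
    history = min(history_years / 20.0, 1.0)  # saturates at 20 years
    quality = 0.35 * payment + 0.30 * owed + 0.15 * history + 0.20
    return round(300 + 550 * quality)

# A long history of on-time payments and low balances scores high...
print(toy_credit_score(utilization=0.2, on_time_rate=0.98, history_years=10))

# ...while heavy utilization and missed payments drag the number down.
print(toy_credit_score(utilization=0.9, on_time_rate=0.50, history_years=1))
```

The point of the sketch is the essay’s point: a handful of behavioral inputs are collapsed into a single portable number that strangers then treat as trustworthiness.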

There’s nothing about your social network, but that might change. In August, Facebook was awarded a patent on using a borrower’s social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you’re more likely to be judged as one.

Your associations can be used to judge you in other ways as well. It’s now common for employers to use social media sites to screen job applicants. This manual process is increasingly being outsourced and automated; companies like Social Intelligence, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The dangers of this type of system — from discriminatory biases resulting from the data to an obsession with scores over more social measures — are too many.

The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.

The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist’s tweet caused the US to deny him entry into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.

All of these systems, from Sesame Credit to the NSA’s secret algorithms, are made possible by computers and data. A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn’t scale. It made no sense for a bank across the state to give you a loan, because they didn’t know you. Loans stayed local.

FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don’t need to know you; your score is all they need to decide whether you’re trustworthy.

This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don’t conform to the financial norm — having and using credit cards, for example — can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.

The secrecy of the algorithms further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you’re less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying “subversive” products or being friends with dissidents, you’re more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.

Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their ratings get too low. This weeds out bad drivers and passengers, but it also results in marginal people being blocked from the system, and in everyone else trying not to make any special requests, avoiding controversial conversation topics, and generally behaving like good corporate citizens.

Many have documented a chilling effect among American Muslims, with them avoiding certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google “pressure cooker bomb”? How many are a bit worried that I used it in this essay?

This is what social control looks like in the Internet age. The Cold War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.

It doesn’t have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard.

The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.

The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.

We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.

Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.

This essay previously appeared on CNN.com.

Posted on January 8, 2016 at 5:21 AM

Using Law against Technology

On Thursday, a Brazilian judge ordered the text messaging service WhatsApp shut down for 48 hours. It was a monumental action.

WhatsApp is the most popular app in Brazil, used by about 100 million people. The Brazilian telecoms hate the service because it entices people away from more expensive text messaging services, and they have been lobbying for months to convince the government that it’s unregulated and illegal. A judge finally agreed.

In Brazil’s case, WhatsApp was blocked for allegedly failing to respond to a court order. Another judge reversed the ban 12 hours later, but there is a pattern forming here. In Egypt, Vodafone has complained about the legality of WhatsApp’s free voice-calls, while India’s telecoms firms have been lobbying hard to curb messaging apps such as WhatsApp and Viber. Earlier this year, the United Arab Emirates blocked WhatsApp’s free voice call feature.

All this is part of a massive power struggle going on right now between traditional companies and new Internet companies, and we’re all in the blast radius.

It’s one aspect of a tech policy problem that has been plaguing us for at least 25 years: technologists and policymakers don’t understand each other, and they inflict damage on society because of that. But it’s worse today. The speed of technological progress makes it worse. And the types of technology — especially the current Internet of mobile devices everywhere, cloud computing, always-on connections and the Internet of Things — make it worse.

The Internet has been disrupting and destroying long-standing business models since its popularization in the mid-1990s. And traditional industries have long fought back with every tool at their disposal. The movie and music industries have tried for decades to hamstring computers in an effort to prevent illegal copying of their products. Publishers have battled with Google over whether their books could be indexed for online searching.

More recently, municipal taxi companies and large hotel chains are fighting with ride-sharing companies such as Uber and apartment-sharing companies such as Airbnb. Both the old companies and the new upstarts have tried to bend laws to their will in an effort to outmaneuver each other.

Sometimes the actions of these companies harm the users of these systems and services. And the results can seem crazy. Why would the Brazilian telecoms want to provoke the ire of almost everyone in the country? They’re trying to protect their monopoly. If they win in not just shutting down WhatsApp, but Telegram and all the other text-message services, their customers will have no choice. This is how high-stakes these battles can be.

This isn’t just companies competing in the marketplace. These are battles between competing visions of how technology should apply to business, and between traditional businesses and “disruptive” new businesses. The fundamental problem is that technology and law are in conflict, and what’s worked in the past is increasingly failing today.

First, the speeds of technology and law have reversed. Traditionally, new technologies were adopted slowly over decades. There was time for people to figure them out, and for their social repercussions to percolate through society. Legislatures and courts had time to figure out rules for these technologies and how they should integrate into the existing legal structures.

They don’t always get it right — the sad history of copyright law in the United States is an example of how they can get it badly wrong again and again — but at least they had a chance before the technologies became widely adopted.

That’s just not true anymore. A new technology can go from zero to a hundred million users in a year or less. That’s just too fast for the political or legal process. By the time they’re asked to make rules, these technologies are well-entrenched in society.

Second, the technologies have become more complicated and specialized. This means that the normal system of legislators passing laws, regulators making rules based on those laws and courts providing a second check on those rules fails. None of these people has the expertise necessary to understand these technologies, let alone the subtle and potentially pernicious ramifications of any rules they make.

We see the same gap between governments and their law-enforcement agencies and militaries. In the United States, we’re expecting policymakers to understand the debate between the FBI’s desire to read the encrypted e-mails and computers of crime suspects and the security researchers who maintain that giving them that capability will render everyone insecure. We’re expecting legislators to provide meaningful oversight over the National Security Agency, when they can only read highly technical documents about the agency’s activities in special rooms and without any aides who might be conversant in the issues.

The result is that we end up in situations such as the one Brazil finds itself in. WhatsApp went from zero to 100 million users in five years. The telecoms are advancing all sorts of weird legal arguments to get the service banned, and judges are ill-equipped to separate fact from fiction.

This isn’t a simple matter of needing government to get out of the way and let companies battle in the marketplace. These companies are for-profit entities, and their business models are so complicated that they regularly don’t do what’s best for their users. (For example, remember that you’re not really Facebook’s customer. You’re their product.)

The fact that people’s resumes are effectively the first 10 hits on a Google search of their name is a problem –­ something that the European “right to be forgotten” tried ham-fistedly to address. There’s a lot of smart writing that says that Uber’s disruption of traditional taxis will be worse for the people who regularly use the services. And many people worry about Amazon’s increasing dominance of the publishing industry.

We need a better way of regulating new technologies.

That’s going to require bridging the gap between technologists and policymakers. Each needs to understand the other — not enough to be experts in each other’s fields, but enough to engage in meaningful conversations and debates. That’s also going to require laws that are agile and written to be as technologically invariant as possible.

It’s a tall order, I know, and one that has been on the wish list of every tech policymaker for decades. But today, the stakes are higher and the issues come faster. Not doing so will become increasingly harmful for all of us.

This essay originally appeared on CNN.com.

EDITED TO ADD (12/23): Slashdot thread.

Posted on December 23, 2015 at 6:48 AM

Corporations Misusing Our Data

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Using an internal tool called the “god view,” some Uber employees are able to see who is using the service and where they’re going — and this was used at least once, in 2011, as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making its database of people’s rides even more “useful.”

None of us wants to be stalked — whether it’s from looking at our location data, our medical data, our emails and texts, or anything else — by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are protected only by company culture. Nothing in the license agreements you clicked “agree” to (but didn’t read) prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.
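
Such audit technology can be as simple as reconciling data-access logs against a record of legitimate work. A minimal sketch, assuming a hypothetical log format where every customer-record lookup should correspond to an open support ticket:

```python
def audit_accesses(access_log, open_tickets):
    """Return every record lookup that has no matching support
    ticket for that (employee, customer) pair. The log and ticket
    formats are illustrative, not any company's real schema."""
    ticketed = {(t["employee"], t["customer"]) for t in open_tickets}
    return [entry for entry in access_log
            if (entry["employee"], entry["customer"]) not in ticketed]

access_log = [
    {"employee": "alice", "customer": "rider42"},  # handling a ticket
    {"employee": "bob",   "customer": "rider99"},  # no ticket on file
]
open_tickets = [{"employee": "alice", "customer": "rider42"}]

for entry in audit_accesses(access_log, open_tickets):
    print("unexplained lookup:", entry)  # flags only bob's access
```

In practice you would also log timestamps and alert in near-real time, but even this simple reconciliation would catch a “god view” party trick.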

This essay previously appeared on CNN.com.

Posted on December 5, 2014 at 6:45 AM
