Entries Tagged "ISPs"


Congress Removes FCC Privacy Protections on Your Internet Usage

Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected all of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.

This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.

That this is not provoking greater outcry illustrates how thoroughly we’ve ceded the shaping of our technological future to for-profit companies, and how willing we are to let them do it for us.

There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.

What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers—and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.

They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.

They’re all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they’ll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, by criminals and other governments alike.

Telecom companies have argued that other Internet players already have these creepy powers—although they didn’t use the word “creepy”—so why should they not have them as well? It’s a valid point.

Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.

Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.

Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and—because everyone has a smartphone—who you spend time with and who you sleep with.

And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers’ computers to try to detect whether they copied music files.

Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.

Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling ad revenues, increasing use of ad blockers, and increasing click fraud, violating our privacy is still a profitable business—especially if it’s done in secret.

The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?

When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching providers, means that the decision about whether to be spied on should rest with the consumer, not with a telecom giant. That this new bill reverses that is both wrong and harmful.

Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare. Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won’t go away. If anything, the Internet of Things and the computerization of every aspect of our lives will make them worse.

In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.

It’s too late to do anything about this bill—Trump will certainly sign it—but we need to be alert to future bills that reduce our privacy and security.

This post previously appeared on the Guardian.

EDITED TO ADD: Former FCC Chairman Tom Wheeler wrote a good op-ed on the subject. And here’s an essay laying out what this all means to the average Internet user.

EDITED TO ADD (4/12): States are stepping in.

Posted on March 31, 2017 at 12:07 PM

Details on NSA/FBI Eavesdropping

We’re starting to see Internet companies talk about the mechanics of how the US government spies on their users. Here, a Utah ISP owner describes his experiences with NSA eavesdropping:

We had to facilitate them to set up a duplicate port to tap in to monitor that customer’s traffic. It was a 2U (two-unit) PC that we ran a mirrored ethernet port to.

[What we ended up with was] a little box in our systems room that was capturing all the traffic to this customer. Everything they were sending and receiving.
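The “mirrored ethernet port” he describes is a standard switch feature, often called SPAN or port mirroring: the switch copies every frame to and from the monitored customer onto a second port, and a box plugged into that port sees everything. As a rough illustration (not the actual equipment, obviously), a capture box can be as simple as a raw socket. This sketch assumes Linux and an interface name of “eth1”; a real tap would write frames to disk rather than log their sizes:

```python
# Minimal sketch of a capture box on a mirrored switch port (Linux only,
# run as root). "eth1" is an assumed interface name; ETH_P_ALL (0x0003)
# asks the kernel for every frame on the wire, whatever the protocol.
import socket

ETH_P_ALL = 0x0003
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.ntohs(ETH_P_ALL))
sniffer.bind(("eth1", 0))

while True:
    frame, _ = sniffer.recvfrom(65535)
    # A real tap would persist the raw frame; here we just log its length.
    print(len(frame))
```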

Declan McCullagh explains how the NSA coerces companies to cooperate with its surveillance efforts. Basically, they want to avoid what happened with the Utah ISP.

Some Internet companies have reluctantly agreed to work with the government to conduct legally authorized surveillance on the theory that negotiations are less objectionable than the alternative—federal agents showing up unannounced with a court order to install their own surveillance device on a sensitive internal network. Those devices, the companies fear, could disrupt operations, introduce security vulnerabilities, or intercept more than is legally permitted.

“Nobody wants it on-premises,” said a representative of a large Internet company who has negotiated surveillance requests with government officials. “Nobody wants a box in their network…[Companies often] find ways to give tools to minimize disclosures, to protect users, to keep the government off the premises, and to come to some reasonable compromise on the capabilities.”

Precedents were established a decade or so ago when the government obtained legal orders compelling companies to install custom eavesdropping hardware on their networks.

And Brewster Kahle of the Internet Archive explains how he successfully fought a National Security Letter.

Posted on July 25, 2013 at 12:27 PM

Dictators Shutting Down the Internet

Excellent article: “How to Shut Down Internets.”

First, he describes what just happened in Syria. Then:

Egypt turned off the internet by using the Border Gateway Protocol trick, and also by switching off DNS. This has a similar effect to throwing bleach over a map. The location of every street and house in the country is blotted out. All the Egyptian ISPs were, and probably still are, government licensees. It took nothing but a short series of phone calls to effect the shutdown.

There are two reasons why these shutdowns happen in this manner. The first is that these governments wish to black out activities like, say, indiscriminate slaughter. That much is obvious. The second is sometimes not so obvious. These governments intend to turn the internet back on. Deep down, they believe they will be in their seats the next month and have the power to turn it back on. They believe they will win. It is the arrogance of power: they take their future for granted, and need only hide from the world the corpses it will be built on.

Cory Doctorow asks: “Why would a basket-case dictator even allow his citizenry to access the Internet in the first place?” and “Why not shut down the Internet the instant trouble breaks out?” The reason is that the Internet is a valuable tool for social control. Dictators can use the Internet for surveillance and propaganda as well as censorship, and they only resort to extreme censorship when the value of that outweighs the value of doing all three in some sort of totalitarian balance.

Related: Two articles on the countries most vulnerable to an Internet shutdown, based on their connectivity architecture.

Posted on December 11, 2012 at 6:08 AM

Telex Anti-Censorship System

This is really clever:

Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user’s computer to a trusted proxy server located outside the censor’s network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it’s very difficult to broadcast this information to a large number of users without the censor also learning it.

Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don’t need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.

The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged.

As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and Telex station at the ISP, redirecting connections to any site on the Internet.
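The tagging mechanism in the middle paragraph is the clever part, and it’s worth sketching. One standard way to get public-key steganography is a Diffie-Hellman exchange: the tag is a fresh ephemeral public key plus a hash of the secret shared with the station. Here’s a conceptual sketch using X25519. It is not the actual Telex construction, which also has to make the tag indistinguishable from the random fields of a normal TLS handshake, and the 16-byte truncation is an arbitrary choice:

```python
# Conceptual sketch of Telex-style public-key steganographic tagging.
# Tagging needs only public information; recognition needs the private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat)

def make_tag(station_pub: X25519PublicKey) -> bytes:
    # Anyone can create a tag using the station's public key alone.
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(station_pub)          # DH shared secret
    digest = hashes.Hash(hashes.SHA256())
    digest.update(shared)
    eph_bytes = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph_bytes + digest.finalize()[:16]   # 32-byte key + 16-byte proof

def station_recognizes(tag: bytes, station_priv: X25519PrivateKey) -> bool:
    # Only the private-key holder can tell a tag from random bytes.
    eph_pub = X25519PublicKey.from_public_bytes(tag[:32])
    shared = station_priv.exchange(eph_pub)
    digest = hashes.Hash(hashes.SHA256())
    digest.update(shared)
    return digest.finalize()[:16] == tag[32:]
```

Raw X25519 public keys aren’t perfectly uniform, which is why real systems use encodings such as Elligator, but the asymmetry is the point: creating a tag needs no secrets, while recognizing one does.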

EDITED TO ADD (8/1): Another article.

EDITED TO ADD (8/13): Another article.

Posted on July 19, 2011 at 9:59 AM

Internet Quarantines

Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users’ computers so they could rejoin the greater Internet.

This isn’t a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunted to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and at corporate networks that welcome user-owned devices. They’re happy to let you log in with any device you want—this is the consumerization trend in action—as long as your security is up to snuff.
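The core decision these network-access-control products make is simple: check the machine’s security posture, then drop it onto either the real network or the remediation subnet. A minimal sketch of that logic, with invented posture fields and VLAN numbers:

```python
# Toy network-access-control decision. Real products gather posture via
# agents or 802.1X/RADIUS exchanges; the fields and VLANs here are made up.
from dataclasses import dataclass

@dataclass
class Posture:
    patches_current: bool
    antivirus_clean: bool

PRODUCTION_VLAN = 10    # normal network access
QUARANTINE_VLAN = 99    # can reach only patch and antivirus servers

def assign_vlan(posture: Posture) -> int:
    # Healthy machines join the real network; everything else is shunted
    # to the remediation subnet until it can prove it is clean.
    if posture.patches_current and posture.antivirus_clean:
        return PRODUCTION_VLAN
    return QUARANTINE_VLAN
```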

Charney’s idea is to do that on a larger scale. To implement it, we have to deal with two problems. There’s the technical problem of making the quarantine work in the face of malware designed to evade it, and the social problem of ensuring that people don’t have their computers unduly quarantined. Understanding the problems requires us to understand quarantines in general.

Quarantines have been used to contain disease for millennia. In general several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It’s easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren’t any obvious physical effects, or if those effects don’t show up while the disease is contagious, a quarantine is much less effective.

Similarly, it’s easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.

Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or—when the diseases spread via rats or mosquitoes—because the quarantine was targeted at the wrong thing.

Computer quarantines have been generally effective because the users whose computers are being quarantined aren’t sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.

Three, only a small fraction of the population can require quarantine at any time. The solution works only if it’s a minority of the population that’s affected, whether with physical diseases or computer diseases. If most people are infected, overall infection rates aren’t going to be slowed much by quarantining. Similarly, a quarantine that tries to isolate most of the Internet simply won’t work.

Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining whom to quarantine is either expensive (if it’s done correctly) or arbitrary, authoritarian, and abuse-prone (if it’s done badly). It could even be both. The value to society must be worth it.

It’s the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn’t need a societally imposed quarantine like this. But they’re damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.

That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that’s what malware is. The current crop of malware ignores quarantines, because quarantines are still too few and far between to affect its spread.

If we tried to implement Internet-wide—or even countrywide—quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don’t know how, we’d have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we’d be stuck in another computer security arms race. This isn’t a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.
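What might behavior-based monitoring look like? One classic heuristic flags hosts that suddenly contact an unusually large number of distinct destinations, the telltale behavior of a scanning worm, without needing any signature for the malware itself. A toy sketch, with an invented threshold:

```python
# Behavior-based (rather than signature-based) worm detection, sketched:
# flag hosts whose fan-out of distinct destinations in one time window
# exceeds a threshold. The threshold and window are illustrative only.
from collections import defaultdict

FANOUT_LIMIT = 500      # distinct destinations per window (assumed)

def scan_suspects(flows):
    """flows: iterable of (src_ip, dst_ip) pairs seen in one window."""
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    # Hosts touching more unique destinations than the limit look like
    # scanners, whatever code they happen to be running.
    return [src for src, dsts in fanout.items() if len(dsts) > FANOUT_LIMIT]
```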

Additionally, there’s the problem of who gets to decide which computers to quarantine. It’s easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn’t have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it’s these social and political ramifications that are the most difficult to determine and the easiest to abuse.

Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don’t have their patches up to date, even if they’re uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I’m sure we don’t think it should, but what if that chat and those queries revolved around terrorism? Where’s the line?

Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or application software. The music and movie industries will want to quarantine anyone they decide is downloading or sharing pirated media files—they’re already pushing similar proposals.

A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don’t think it will be easy to prevent; it’s an enforcement mechanism just begging to be used.

Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where’s the line?

What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.

Public health is the right way to look at this problem. This conversation—between the rights of the individual and the rights of society—is a valid one to have, and this solution is a good possibility to consider.

There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don’t vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals’ rights to maintain their own computers as they see fit is a discussion we need to start having.

This essay previously appeared on Forbes.com.

EDITED TO ADD (11/15): From an anonymous reader:

In your article you mention that for quarantines to work, you must be able to detect infected individuals. It must also be detectable quickly, before the individual has the opportunity to infect many others. Quarantining an individual after they’ve infected most of the people they regularly interact with is of little value. You must quarantine individuals when they have infected, on average, less than one other person.

Just as worm-writers would respond to the technical mechanisms to implement a quarantine by investing in ways to get around them, they would also likely invest in outpacing the quarantine. If a worm is designed to spread fast, even the best quarantine mechanisms may be unable to keep up.

Another concern with quarantining mechanisms is the damage that attackers could do if they were able to compromise the mechanism itself. This is of especially great concern if the mechanism were to include code within end-users’ TCBs to scan computers, essentially a built-in rootkit. Without a scanner in the end-user’s TCB, it’s hard to see how you could reliably detect infections.
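The reader’s first point is the epidemiological notion of the effective reproduction number: quarantine works only if each infected host is isolated before it infects, on average, one other (R < 1). A toy calculation shows how sharp that threshold is:

```python
# Toy model: each infected host creates r new infections before it is
# quarantined. Total infections after ten generations, starting from one.
def outbreak(r: float, generations: int = 10) -> float:
    infected, total = 1.0, 1.0
    for _ in range(generations):
        infected *= r          # each infected host yields r new ones
        total += infected
    return total

print(round(outbreak(0.8)))   # R < 1: dies out, about 5 hosts total
print(round(outbreak(1.5)))   # R > 1: about 170 hosts and still growing
```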

Posted on November 15, 2010 at 4:55 AM

Bulletproof Service Providers

From Brian Krebs:

Hacked and malicious sites designed to steal data from unsuspecting users via malware and phishing are a dime a dozen, often located in the United States, and are a key target for takedown by ISPs and security researchers. But when online miscreants seek stability in their Web projects, they often turn to so-called “bulletproof hosting” providers, mini-ISPs that specialize in offering services that are largely immune from takedown requests and pressure from Western law enforcement agencies.

Posted on November 11, 2010 at 12:45 PM

Internet Kill Switch

Last month, Sen. Joe Lieberman, I-Conn., introduced a bill (text here) that might—we’re not really sure—give the president the authority to shut down all or portions of the Internet in the event of an emergency. It’s not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly. So let’s talk about the idea of an Internet kill switch in general.

It’s a bad one.

Security is always a trade-off: costs versus benefits. So the first question to ask is: What are the benefits? There is only one possible use of this sort of capability, and that is in the face of a warfare-caliber enemy attack. It’s the primary reason lawmakers are considering giving the president a kill switch. They know that shutting off the Internet, or even isolating the U.S. from the rest of the world, would cause damage, but they envision a scenario where not doing so would cause even more.

That reasoning is based on several flawed assumptions.

The first flawed assumption is that cyberspace has traditional borders, and we could somehow isolate ourselves from the rest of the world using an electronic Maginot Line. We can’t.

Yes, we can cut off almost all international connectivity, but there are lots of ways to get out onto the Internet: satellite phones, obscure ISPs in Canada and Mexico, long-distance phone calls to Asia.

The Internet is the largest communications system mankind has ever created, and it works because it is distributed. There is no central authority. No nation is in charge. Plugging all the holes isn’t possible.

Even if the president ordered all U.S. Internet companies to block, say, all packets coming from China, or restrict non-military communications, or just shut down access in the greater New York area, it wouldn’t work. You can’t figure out what packets do just by looking at them; if you could, defending against worms and viruses would be much easier.

And packets that come with return addresses are easy to spoof. Remember the July 4, 2009, cyberattack that probably came from North Korea, but might have come from England, or maybe Florida? On the Internet, disguising traffic is easy. And foreign cyberattackers could always have dial-up accounts via U.S. phone numbers and make long-distance calls to do their misdeeds.
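To make the spoofing point concrete: forging a packet’s return address takes a few lines with a packet-crafting library. A sketch using scapy; it needs root, the addresses are documentation-range placeholders, and some ISPs now drop such packets via egress filtering:

```python
# Craft and send an ICMP echo request whose source address is invented.
# Requires root; both addresses come from the RFC 5737 documentation ranges.
from scapy.all import IP, ICMP, send

pkt = IP(src="203.0.113.7", dst="198.51.100.42") / ICMP()
send(pkt)   # the receiver's logs show a "return address" we simply made up
```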

The second flawed assumption is that we can predict the effects of such a shutdown. The Internet is the most complex machine mankind has ever built, and shutting down portions of it would have all sorts of unforeseen ancillary effects.

Would ATMs work? What about the stock exchanges? Which emergency services would fail? Would trucks and trains be able to route their cargo? Would airlines be able to route their passengers? How much of the military’s logistical system would fail?

That’s to say nothing of the variety of corporations that rely on the Internet to function, let alone the millions of Americans who would need to use it to communicate with their loved ones in a time of crisis.

Even worse, these effects would spill over internationally. The Internet is international in complex and surprising ways, and it would be impossible to ensure that the effects of a shutdown stayed domestic and didn’t cause similar disasters in countries we’re friendly with.

The third flawed assumption is that we could build this capability securely. We can’t.

Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.

Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.

But the main problem with an Internet kill switch is that it’s too coarse a hammer.

Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.

Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack would do far more damage than it could possibly prevent. And it would hurt others whom we don’t want to hurt.

For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.

Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.

Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.

Defending his proposal, Sen. Lieberman pointed out that China has this capability. It’s debatable whether or not it actually does, but it’s actively pursuing the capability because the country cares less about its citizens.

Here in the U.S., it is both wrong and dangerous to give the president the power and ability to commit Internet suicide and terrorize Americans in this way.

This essay was originally published on AOL.com News.

Posted on July 12, 2010 at 7:07 AM

Marc Rotenberg on Google's Italian Privacy Case

Interesting commentary:

I don’t think this is really a case about ISP liability at all. It is a case about the use of a person’s image, without their consent, that generates commercial value for someone else. That is the essence of the Italian law at issue in this case. It is also how the right of privacy was first established in the United States.

The video at the center of this case was very popular in Italy and drove lots of users to the Google Video site. This boosted advertising and support for other Google services. As a consequence, Google actually had an incentive not to respond to the many requests it received before it actually took down the video.

Back in the U.S., here is the relevant history: after Brandeis and Warren published their famous article on the right to privacy in 1890, state courts struggled with its application. In a New York state case in 1902, a court rejected the newly proposed right. In a second case, a Georgia state court in 1905 endorsed it.

What is striking is that both cases involved the use of a person’s image without their consent. In New York, it was a young girl, whose image was drawn and placed on an oatmeal box for advertising purposes. In Georgia, a man’s image was placed in a newspaper, without his consent, to sell insurance.

Also important is the fact that the New York judge who rejected the privacy claim suggested that the state assembly could simply pass a law to create the right. The New York legislature did exactly that, and in 1903 New York enacted the first privacy law in the United States, protecting a person’s “name or likeness” from unauthorized commercial use.

The whole thing is worth reading.

EDITED TO ADD (3/18): A rebuttal.

Posted on March 9, 2010 at 12:36 PM

The Problem of Vague Laws

The average American commits three felonies a day: that’s the thesis of a new book by Harvey Silverglate. More specifically, the problem is the intersection of vague laws and fast-moving technology:

Technology moves so quickly we can barely keep up, and our legal system moves so slowly it can’t keep up with itself. By design, the law is built up over time by court decisions, statutes and regulations. Sometimes even criminal laws are left vague, to be defined case by case. Technology exacerbates the problem of laws so open and vague that they are hard to abide by, to the point that we have all become potential criminals.

Boston civil-liberties lawyer Harvey Silverglate calls his new book “Three Felonies a Day,” referring to the number of crimes he estimates the average American now unwittingly commits because of vague laws. New technology adds its own complexity, making innocent activity potentially criminal.

[…]

In 2001, a man named Bradford Councilman was charged in Massachusetts with violating the wiretap laws. He worked at a company that offered an online book-listing service and also acted as an Internet service provider to book dealers. As an ISP, the company routinely intercepted and copied emails as part of the process of shuttling them through the Web to recipients.

The federal wiretap laws, Mr. Silverglate writes, were “written before the dawn of the Internet, often amended, not always clear, and frequently lagging behind the whipcrack speed of technological change.” Prosecutors chose to interpret the ISP role of momentarily copying messages as they made their way through the system as akin to impermissibly listening in on communications. The case went through several rounds of litigation, with no judge making the obvious point that this is how ISPs operate. After six years, a jury found Mr. Councilman not guilty.

Other misunderstandings of the Web criminalize the exercise of First Amendment rights. A Saudi student in Idaho was charged in 2003 with offering “material support” to terrorists. He had operated Web sites for a Muslim charity that focused on normal religious training, but was prosecuted on the theory that if a user followed enough links off his site, he would find violent, anti-American comments on other sites. The Internet is a series of links, so if there’s liability for anything in an online chain, it would be hard to avoid prosecution.

EDITED TO ADD (10/12): Audio interview with Harvey Silverglate.

Posted on September 29, 2009 at 1:08 PM
