Entries Tagged "Internet"


Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation, which affects the reproductive success of an organism’s genes even at the expense of the organism, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets; Economics, Psychology, and Sociology of Security), discussed human space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived of as separate from the physical world, something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; teens try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks—personal networks, articulated networks, and behavioral networks—and they shouldn’t be conflated.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders help escalate or de-escalate those situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important on social networking sites like Facebook. People underestimate what Facebook really is; it really is a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different, 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

Fear of Aerial Images

Time for some more fear about terrorists using maps and images on the Internet.

But the more striking images come when Portzline clicks on the “bird’s-eye” option offered by the map service. The overhead views, which come chiefly from satellites, are replaced with strikingly clear oblique-angle photos, chiefly shot from aircraft. By clicking another button, he can see the same building from all four sides.

“What we’re seeing here is a guard shack,” Portzline said, pointing to a rooftop structure. “This is a communications device for the nuclear plant.”

He added, “This particular building is the air intake for the control room. And there’s some nasty thing you could do to disable the people in the control room. So this type of information should not be available. I look at this and just say, ‘Wow.’ ”

Terror expert and author Brian Jenkins agreed that the pictures are “extraordinarily impressive.”

“If I were a terrorist planning an attack, I would want that imagery. That would facilitate that mission,” he said. “And given the choice between renting an airplane or trying some other way to get it, versus tapping in some things on my computer, I certainly want to do the latter. (It will) reduce my risk, and the first they’re going to know about my attack is when it takes place.”

Gadzooks, people, enough with the movie plots.

Joel Anderson, a member of the California Assembly, has more expansive goals. He has introduced a bill in the state Legislature that would prohibit “virtual globe” services from providing unblurred pictures of schools, churches and government or medical facilities in California. It also would prohibit those services from providing street-view photos of those buildings.

“It struck me that a person in a tent halfway around the world could target an attack like that with a laptop computer,” said Anderson, a Republican legislator who represents San Diego’s East County. Anderson said he doesn’t want to limit technology, but added, “There’s got to be some common sense.”

I wonder why he thinks that “schools, churches and government or medical facilities” are terrorist targets worth protecting, and movie theaters, stadiums, concert halls, restaurants, train stations, shopping malls, Toys-R-Us stores on the day after Thanksgiving, and theme parks are not. After all, “there’s got to be some common sense.”

Now, both have launched efforts to try to get Internet map services to remove or blur images of sensitive sites, saying the same technology that allows people to see a neighbor’s swimming pool can be used by terrorists to choose targets and plan attacks.

Yes, and the same technology that allows people to call their friends can be used by terrorists to choose targets and plan attacks. And the same technology that allows people to commute to work can be used by terrorists to plan and execute attacks. And the same technology that allows you to read this blog post…repeat until tired.

Of course, this is nothing I haven’t said before:

Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are—by and large—small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response—just as we would if we banned cars because bank robbers used them too.

You’re not going to stop terrorism by deliberately degrading our infrastructure. Refuse to be terrorized, everyone.

Posted on June 8, 2009 at 6:15 AM

Here Comes Everybody Review

In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don’t people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market’s transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. The Coase theorem implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense. But if they are too high, it makes more sense to get the job done by an organization that hires people.

Economists have long understood the corollary concept of Coase’s ceiling, a point above which organizations collapse under their own weight—where hiring someone, however competent, means more work for everyone else than the new hire contributes. Software projects often bump their heads against Coase’s ceiling: recall Frederick P. Brooks Jr.’s seminal study, The Mythical Man-Month (Addison-Wesley, 1975), which showed how adding another person onto a project can slow progress and increase errors.

What’s new is something consultant and social technologist Clay Shirky calls "Coase’s Floor," below which we find projects and activities that aren’t worth their organizational costs—things so esoteric, so frivolous, so nonsensical, or just so thoroughly unimportant that no organization, large or small, would ever bother with them. Things that you shake your head at when you see them and think, "That’s ridiculous."

Sounds a lot like the Internet, doesn’t it? And that’s precisely Shirky’s point. His new book, Here Comes Everybody: The Power of Organizing Without Organizations, explores a world where organizational costs are close to zero and where ad hoc, loosely connected groups of unpaid amateurs can create an encyclopedia larger than the Britannica and a computer operating system to challenge Microsoft’s.

Shirky teaches at New York University’s Interactive Telecommunications Program, but this is no academic book. Sacrificing rigor for readability, Here Comes Everybody is an entertaining as well as informative romp through some of the Internet’s signal moments—the Howard Dean phenomenon, Belarusian protests organized on LiveJournal, the lost cellphone of a woman named Ivanna, Meetup.com, flash mobs, Twitter, and more—which Shirky uses to illustrate his points.

The book is filled with bits of insight and common sense, explaining why young people take better advantage of social tools, how the Internet affects social change, and how most Internet discourse falls somewhere between dinnertime conversation and publishing.

Shirky notes that "most user-generated content isn’t ‘content’ at all, in the sense of being created for general consumption, any more than a phone call between you and a sibling is ‘family-generated content.’ Most of what gets created on any given day is just the ordinary stuff of life—gossip, little updates, thinking out loud—but now it’s done in the same medium as professionally produced material. Unlike professionally produced material, however, Internet content can be organized after the fact."

No one coordinates Flickr’s 6 million to 8 million users. Yet Flickr had the first photos from the 2005 London Transport bombings, beating the traditional news media. Why? People with cellphone cameras uploaded their photos to Flickr. They coordinated themselves using tools that Flickr provides. This is the sort of impromptu organization the Internet is ideally suited for. Shirky explains how these moments are harbingers of a future that can self-organize without formal hierarchies.

These nonorganizations allow for contributions from a wider group of people. A newspaper has to pay someone to take photos; it can’t be bothered to hire someone to stand around London underground stations waiting for a major event. Similarly, Microsoft has to pay a programmer full time, and Encyclopedia Britannica has to pay someone to write articles. But Flickr can make use of a person with just one photo to contribute, Linux can harness the work of a programmer with little time, and Wikipedia benefits if someone corrects just a single typo. These aggregations of millions of actions that were previously below the Coasean floor have enormous potential.

But a flash mob is still a mob. In a world where the Coasean floor is at ground level, all sorts of organizations appear, including ones you might not like: violent political organizations, hate groups, Holocaust deniers, and so on. (Shirky’s discussion of teen anorexia support groups makes for very disturbing reading.) This has considerable implications for security, both online and off.

We never realized how much our security could be attributed to distance and inconvenience—how difficult it is to recruit, organize, coordinate, and communicate without formal organizations. That inadvertent measure of security is now gone. Bad guys, from hacker groups to terrorist groups, will use the same ad hoc organizational technologies that the rest of us do. And while there has been some success in closing down individual Web pages, discussion groups, and blogs, these are just stopgap measures.

In the end, a virtual community is still a community, and it needs to be treated as such. And just as the best way to keep a neighborhood safe is for a policeman to walk around it, the best way to keep a virtual community safe is to have a virtual police presence.

Crime isn’t the only danger; there is also isolation. If people can segregate themselves into increasingly specialized groups, then they’re less likely to be exposed to alternative ideas. We see a mild form of this in the current trend of rival political parties having their own news sources, their own narratives, and their own facts. Increased radicalization is another danger lurking below the Coasean floor.

There’s no going back, though. We’ve all figured out that the Internet makes freedom of speech a much harder right to take away. As Shirky demonstrates, Web 2.0 is having the same effect on freedom of assembly. The consequences of this won’t be fully seen for years.

Here Comes Everybody covers some of the same ground as Yochai Benkler’s Wealth of Networks. But when I had to explain to one of my corporate attorneys how the Internet has changed the nature of public discourse, Shirky’s book is the one I recommended.

This essay previously appeared in IEEE Spectrum.

EDITED TO ADD (12/13): Interesting Clay Shirky podcast.

Posted on November 25, 2008 at 7:39 AM

Online Age Verification

A discussion of a security trade-off:

Child-safety activists charge that some of the age-verification firms want to help Internet companies tailor ads for children. They say these firms are substituting one exaggerated threat—the menace of online sex predators—with a far more pervasive danger from online marketers like junk food and toy companies that will rush to advertise to children if they are told revealing details about the users.

It’s an old story: protecting against the rare and spectacular by making yourself more vulnerable to the common and pedestrian.

Posted on November 21, 2008 at 11:47 AM

Most Spam Came from a Single Web Hosting Firm

Really:

Experts say the precipitous drop-off in spam comes from Internet providers unplugging McColo Corp., a hosting provider in Northern California that was the home base for machines responsible for coordinating the sending of roughly 75 percent of all spam each day.

Certainly this won’t last:

Bhandari said he expects the spam volume to recover to normal levels in about a week, as the spam operations that were previously hosted at McColo move to a new home.

“We’re seeing a slow recovery,” said Bhandari. “We fully expect this to recover completely, and to go into the highest ever spam period during the upcoming holiday season.”

But with all the talk of massive botnets sending spam, it’s interesting that most of it still comes from hosting services. You’d think this would make the job of detecting spam a lot easier.

EDITED TO ADD (12/13): I should clarify that this is not the site where most of the spam was sent from, but the site where most of the spam sending bots were controlled from.

Posted on November 17, 2008 at 5:11 AM

Censorship in Dubai

I was in Dubai last weekend for the World Economic Forum Summit on the Global Agenda. (I was on the “Future of the Internet” council; fellow council members Ethan Zuckerman and Jeff Jarvis have written about the event.)

As part of the United Arab Emirates, Dubai censors the Internet:

The government of the United Arab Emirates (UAE) pervasively filters Web sites that contain pornography or relate to alcohol and drug use, gay and lesbian issues, or online dating or gambling. Web-based applications and religious and political sites are also filtered, though less extensively. Additionally, legal controls limit free expression and behavior, restricting political discourse and dissent online.

More detail here.

What was interesting to me was how reasonable the execution of the policy was. Unlike some countries—China, for example—that simply block objectionable content, the UAE displays a screen indicating that the URL has been blocked and offers information about its appeals process.

Posted on November 12, 2008 at 12:56 PM

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

TraceBack is most useful in monitoring the activities of large masses of people. But of course, that’s why the Chinese and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

BT, Phorm, and Me

Over the past year I have gotten many requests, both public and private, to comment on the BT and Phorm incident.

I was not involved with BT and Phorm, then or now. Everything I know about Phorm and BT’s relationship with Phorm came from the same news articles you read. I have not gotten involved as an employee of BT. But I am a BT employee, and anything I say is—by definition—said by a BT executive. That’s not good.

So I’m sorry that I can’t write about Phorm. But—honestly—lots of others have been giving their views on the issue.

Posted on September 8, 2008 at 6:23 AM

