Entries Tagged "economics of security"

Page 32 of 39

For-Profit Botnet

Interesting article about someone convicted for running a for-profit botnet:

November’s 52-page indictment, along with papers filed last week, offer an unusually detailed glimpse into a shadowy world where hackers, often not old enough to vote, brag in online chat groups about their prowess in taking over vast numbers of computers and herding them into large armies of junk mail robots and arsenals for so-called denial of service attacks on Web sites.

Ancheta one-upped his hacking peers by advertising his network of “bots,” short for robots, on Internet chat channels.

A Web site Ancheta maintained included a schedule of prices he charged people who wanted to rent out the machines, along with guidelines on how many bots were required to bring down a particular type of Web site.

In July 2004, he told one chat partner he had more than 40,000 machines available, “more than I can handle,” according to the indictment. A month later, Ancheta told another person he controlled at least 100,000 bots, and that his network had added another 10,000 machines in a week and a half.

In a three-month span starting in June 2004, Ancheta rented out or sold bots to at least 10 “different nefarious computer users,” according to the plea agreement. He pocketed $3,000 in the process by accepting payments through the online PayPal service, prosecutors said.

Starting in August 2004, Ancheta turned to a new, more lucrative method to profit from his botnets, prosecutors said. Working with a juvenile in Boca Raton, Fla., whom prosecutors identified by his Internet nickname “SoBe,” Ancheta infected more than 400,000 computers.

Ancheta and SoBe signed up as affiliates in programs maintained by online advertising companies that pay people each time they get a computer user to install software that displays ads and collects information about the sites a user visits.

Posted on February 2, 2006 at 6:06 AM

Privatizing Registered Traveler

Last week the TSA announced details of its Registered Traveler program. Basically, you pay money for a background check and get a biometric ID—a fingerprint—that gets you through airline security faster. (See also this and this AP story.)

I’ve already written about why this is a bad idea for security:

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

But what the TSA is actually doing is even more bizarre. The TSA is privatizing this system. They want the companies that sell for-profit, Registered Traveler passes to do the background checks. They want the companies to use error-filled commercial databases to do this. What incentive do these companies have to not sell someone a pass? Who is liable for mistakes?

I thought airline security was important.

This essay is an excellent discussion of the problems here.

Welcome to the brave new world of “market-driven” airport security, where different private security firms run and operate different lanes at different checkpoints, offering varied levels of accelerated screening depending on how much a user paid and how deep of a background check he or she submitted to. Thus the speed at which you move through a checkpoint will theoretically depend on a multiplicity of factors, only two of which are under your control (the depth of your background check and the firm(s) with which you’ve contracted). Other factors affecting your screening time, like which private security firm is manning a checkpoint and what resources that particular firm has invested in a particular checkpoint (e.g. extra personnel, more screening equipment, and so on) at a particular time of day, are entirely out of your control.

This is certainly a good point:

What’s worse than having identity thieves impersonate you to Chase Bank? Having terrorists impersonate you to the TSA.

Posted on February 1, 2006 at 6:11 AM

The Failure of US-VISIT

US-VISIT is the program to fingerprint and otherwise keep tabs on foreign visitors to the U.S. This article talks about how the program is being rolled out, but the last paragraph is the most interesting:

Since January 2004, US-VISIT has processed more than 44 million visitors. It has spotted and apprehended nearly 1,000 people with criminal or immigration violations, according to a DHS press release.

I wrote about US-VISIT in 2004, and back then I said that it was too expensive and a bad trade-off. The price tag for “the next phase” was $15B; I’m sure the total cost is much higher.

But take that $15B number. One thousand bad guys, most of them not very bad, caught through US-VISIT. That’s $15M per bad guy caught.
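The back-of-the-envelope math is simple enough to check, using only the two figures quoted above:

```python
# Cost per apprehension for US-VISIT, using the numbers from the post:
# a $15B price tag and "nearly 1,000 people" caught per the DHS release.
program_cost = 15_000_000_000  # $15B for "the next phase"
apprehensions = 1_000          # people with criminal or immigration violations

cost_per_catch = program_cost / apprehensions
print(f"${cost_per_catch:,.0f} per person caught")  # $15,000,000 per person caught
```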

Surely there’s a more cost-effective way to catch bad guys?

Posted on January 31, 2006 at 4:07 PM

Election Machine Conflict of Interests

From EPIC:

EPIC FOIA Notes #11: No-Bid Contracts Go to Vendors with Close Ties to Election Advisory Group

Documents obtained by EPIC from the Election Assistance Commission describe two no-bid contracts for work on voting system standards given to vendors with ties to the Commission’s technical advisory committee.

From a security perspective, this seems like a really bad idea.

Posted on January 26, 2006 at 7:35 AM

REAL ID Harder Than Legislators Thought

According to the Associated Press:

State motor vehicle officials nationwide who will have to carry out the Real ID Act say its authors grossly underestimated its logistical, technological and financial demands.

In a comprehensive survey obtained by The Associated Press and in follow-up interviews, officials cast doubt on the states’ ability to comply with the law on time and fretted that it will be a budget buster.

I’ve already written about REAL ID, including the obscene costs:

REAL ID is expensive. It’s an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I’ve seen estimates that the cost to the states of complying with REAL ID will be $120 million. That’s $120 million that can’t be spent on actual security.

According to the AP, I was way off:

Pennsylvania alone estimated a hit of up to $85 million. Washington state projected at least $46 million annually in the first several years.

Separately, a December report to Virginia’s governor pegged the potential price tag for that state as high as $169 million, with $63 million annually in successive years. Of the initial cost, $33 million would be just to redesign computing systems.
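Just how far off the $120 million estimate was becomes clear if you add up the AP's figures for these three states alone (a rough comparison, since the Washington figure is annual and the others are initial costs):

```python
# Comparing the $120M nationwide compliance estimate against the
# AP's per-state figures quoted above.
nationwide_estimate = 120_000_000

state_costs = {
    "Pennsylvania": 85_000_000,   # "a hit of up to $85 million"
    "Washington": 46_000_000,     # "at least $46 million annually"
    "Virginia": 169_000_000,      # "as high as $169 million"
}

total = sum(state_costs.values())
print(f"Three states alone: ${total:,}")  # Three states alone: $300,000,000
print(f"{total / nationwide_estimate:.1f}x the nationwide estimate")  # 2.5x
```

Three of fifty states already cost two and a half times the supposed nationwide figure.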

Remember, security is a trade-off. REAL ID is a bad idea primarily because the security gained is not worth the enormous expense.

See also the ACLU’s site on REAL ID.

Posted on January 13, 2006 at 1:23 PM

Anyone Can Get Anyone’s Phone Records

Interested in who your spouse is talking to? Your boss? A celebrity? A politician?

The Chicago Police Department is warning officers their cell phone records are available to anyone—for a price. Dozens of online services are selling lists of cell phone calls, raising security concerns among law enforcement and privacy experts….

How well do the services work? The Chicago Sun-Times paid $110 to Locatecell.com to purchase a one-month record of calls for this reporter’s company cell phone. It was as simple as e-mailing the telephone number to the service along with a credit card number. The request was made Friday after the service was closed for the New Year’s holiday.

On Tuesday, when it reopened, Locatecell.com e-mailed a list of 78 telephone numbers this reporter called on his cell phone between Nov. 19 and Dec. 17. The list included calls to law enforcement sources, story subjects and other Sun-Times reporters and editors.

EDITED TO ADD (1/9): More information on BoingBoing.

EDITED TO ADD (1/9): Also see this on EPIC West.

EDITED TO ADD (1/14): Daniel Solove has some good commentary.

Posted on January 9, 2006 at 6:59 AM

Cell Phone Companies and Security

This is a fascinating story of cell phone fraud, security, economics, and externalities. Its moral is obvious, and it demonstrates how economic considerations drive security decisions.

Susan Drummond was a customer of Rogers Wireless, a large Canadian cell phone company. Her phone was cloned while she was on vacation, and she got a $12,237.60 phone bill (her typical bill was $75). Rogers maintains that there is nothing to be done, and that Drummond has to pay.

Like all cell phone companies, Rogers has automatic fraud detection systems that detect this kind of abnormal cell phone usage. They don’t turn the cell phones off, though, because they don’t want to annoy their customers.

Ms. Hopper [a manager in Rogers’ security department] said terrorist groups had identified senior cellphone company officers as perfect targets, since the company was loath to shut off their phones for reasons that included inconvenience to busy executives and, of course, the public-relations debacle that would take place if word got out.

As long as Rogers can get others to pay for the fraud, this makes perfect sense. Shutting off a phone based on an automatic fraud-detection system costs the phone company in two ways: people inconvenienced by false alarms, and bad press. But the major cost of not shutting off a phone remains an externality: the customer pays for it.

In fact, there seems to be some evidence that Rogers decides whether or not to shut off a suspicious phone based on the customer’s ability to pay:

Ms. Innes [a vice-president with Rogers Communications] said that Rogers has a policy of contacting consumers if fraud is suspected. In some cases, she admitted, phones are shut off automatically, but refused to say what criteria were used. (Ms. Drummond and Mr. Gefen believe that the company bases the decision on a customer’s creditworthiness. “If you have the financial history, they let the meter run,” Ms. Drummond said.) Ms. Drummond noted that she has a salary of more than $100,000, and a sterling credit history. “They knew something was wrong, but they thought they could get the money out of me. It’s ridiculous.”

Makes sense from Rogers’ point of view. High-paying customers are 1) more likely to pay, and 2) more damaging if pissed off in a false alarm. Again, economic considerations trump security.

Rogers is defending itself in court, and shows no signs of backing down:

In court filings, the company has made it clear that it intends to hold Ms. Drummond responsible for the calls made on her phone. “. . . the plaintiff is responsible for all calls made on her phone prior to the date of notification that her phone was stolen,” the company says. “The Plaintiff’s failure to mitigate deprived the Defendant of the opportunity to take any action to stop fraudulent calls prior to the 28th of August 2005.”

The solution here is obvious: Rogers should not be able to charge its customers for telephone calls they did not make. Ms. Drummond’s phone was cloned; there is no possible way she could notify Rogers of this before she saw calls she did not make on her bill. She is also completely powerless to affect the anti-cloning security in the Rogers phone system. To make her liable for the fraud is to ensure that the problem never gets fixed.

Rogers is the only party in a position to do something about the problem. The company can implement automatic fraud-detection software, and according to the article, it already has.

Rogers customers will pay for the fraud in any case. If they are responsible for the loss, either they’ll take their chances and pay a lot only if they are the victims, or there’ll be some insurance scheme that spreads the cost over the entire customer base. If Rogers is responsible for the loss, then the customers will pay in the form of slightly higher prices. But only if Rogers is responsible for the loss will they implement security countermeasures to limit fraud.

And if they do that, everyone benefits.
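The externality argument above can be sketched as a toy model. The numbers here are hypothetical (only the $12,237.60 bill comes from the article); the point is the structure of the incentive, not the figures:

```python
# Illustrative model (hypothetical numbers) of the liability argument:
# a carrier invests in fraud prevention only when it bears the loss itself.

fraud_loss = 12_237.60     # the cloned-phone bill from the article
fraud_probability = 0.001  # assumed chance a given customer's phone is cloned
prevention_cost = 2.00     # assumed per-customer cost of acting on fraud
                           # alerts (false-alarm handling, annoyed customers)

expected_loss = fraud_probability * fraud_loss  # ~$12.24 per customer

def carrier_invests(carrier_liable: bool) -> bool:
    """The carrier pays for prevention only if it would otherwise eat the loss."""
    loss_to_carrier = expected_loss if carrier_liable else 0.0
    return prevention_cost < loss_to_carrier

print(carrier_invests(carrier_liable=False))  # False: the customer pays, so why bother
print(carrier_invests(carrier_liable=True))   # True: prevention is cheaper than fraud
```

When the loss is an externality, the rational investment in prevention is zero, no matter how cheap prevention is relative to the fraud it stops.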

There is a Slashdot thread on the topic.

Posted on December 19, 2005 at 1:10 PM

