Entries Tagged "crime"

Page 25 of 39

Security-Theater Cameras Coming to New York

In this otherwise lopsided article about security cameras, this one quote stands out:

But Steve Swain, who served for years with the London Metropolitan Police and its counter-terror operations, doubts the power of cameras to deter crime.

“I don’t know of a single incident where CCTV has actually been used to spot, apprehend or detain offenders in the act,” he said, referring to the London system. Swain now works for Control Risk, an international security firm.

Asked about their role in possibly stopping acts of terror, he said pointedly: “The presence of CCTV is irrelevant for those who want to sacrifice their lives to carry out a terrorist act.”

[…]

Swain does believe the cameras have great value in investigation work. He also said they are necessary to reassure the public that law enforcement is being aggressive.

“You need to do this piece of theater so that if the terrorists are looking at you, they can see that you’ve got some measures in place,” he said.

Did you get that? Swain doesn’t believe that cameras deter crime, but he wants cities to spend millions on them so that the terrorists “can see that you’ve got some measures in place.”

Anyone have any idea why we’re better off doing this than other things that may actually deter crime and terrorism?

Posted on August 6, 2007 at 3:23 PM

Transporting a $1.9M Rare Coin

Excellent story of security by obscurity:

Feigenbaum put the dime, encased in a 3-inch-square block of plastic, in his pocket and, accompanied by a security guard, drove in an ordinary sedan directly to San Jose airport to catch the red-eye to Newark.

The overnight flight, he said, was the only way to make sure the dime would be in New York by the time the buyer’s bank opened in the morning. People who pay $1.9 million for dimes do not like to be kept waiting for them.

Feigenbaum had purchased a coach ticket, to avoid suspicion, but found himself upgraded to first class. That was a worry, because people in flip-flops, T-shirts and grubby jeans do not regularly ride in first class. But it would have been more suspicious to decline a free upgrade. So Feigenbaum forced himself to sit in first class, where he found himself to be the only passenger in flip-flops.

He was too nervous to sleep, he said. He did not watch the in-flight movie, which was “Firehouse Dog.” He turned down a Reuben sandwich and sensibly declined all offers of alcoholic beverages.

Shortly after boarding the plane, he transferred the dime from his pants pocket to his briefcase.

“I was worried that the dime might fall out of my pocket while I was sitting down,” Feigenbaum said.

All across the country, Feigenbaum kept checking to make sure the dime was safe by reaching into his briefcase to feel for it. Feigenbaum did not actually take the dime out of his briefcase, as it is suspicious to stare at dimes.

This isn’t the first time security through obscurity was employed to transport a very small and very valuable object. From Beyond Fear, pp 211-212:

At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.

Like all security measures, security by obscurity has its place. I wrote a lot more about the general concepts in this 2002 essay.

Posted on July 30, 2007 at 4:30 PM

Bush's Watch Stolen?

Watch this video very carefully; it’s President Bush working the crowds in Albania. At 0:50 into the clip, Bush has a watch. At 1:04 into the clip, he had a watch.

The U.S. is denying that his watch was stolen:

Photographs showed Bush, surrounded by five bodyguards, putting his hands behind his back so one of the bodyguards could remove his watch.

I simply don’t see that in the video. Bush’s arm is out in front of him during the entire nine seconds between those stills.

Another denial:

An Albanian bodyguard who accompanied Bush in the town told The Associated Press he had seen one of his U.S. colleagues close to Bush bend down and pick up the watch.

That’s certainly possible; it may have fallen off.

But possibly the pickpocket of the century. (Although would anyone actually be stupid enough to try? There must be a zillion easier-to-steal watches in that crowd, many of them nicer than Bush’s.)

EDITED TO ADD (6/12): This article says that he wears a $50 Timex. It also has some more odd denials.

EDITED TO ADD (6/13): In this video, from another angle, it seems clear that Bush removes the watch himself.

Posted on June 12, 2007 at 10:52 AM

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if they wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks, such as the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or against the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
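Traffic analysis needs nothing more than message metadata. As a toy illustration (the flow records and names here are invented for the example), counting who talks to whom, and how much, from a log of connections:

```python
from collections import Counter

# Hypothetical flow records: (source, destination, bytes) tuples -- the
# kind of metadata traffic analysis works from. No payloads are needed.
flows = [
    ("hq", "division-1", 1200),
    ("hq", "division-2", 400),
    ("hq", "division-1", 2200),
    ("division-1", "depot", 150),
]

# Count conversations per (source, destination) pair.
pair_counts = Counter((src, dst) for src, dst, _ in flows)

# Total volume per pair.
pair_bytes = Counter()
for src, dst, size in flows:
    pair_bytes[(src, dst)] += size

# The busiest link hints at the command relationship, even though no
# message content was ever read.
busiest = pair_bytes.most_common(1)[0]
print(busiest)  # (('hq', 'division-1'), 3400)
```

Even this crude tally shows why eavesdropping on a network can be worth more than destroying it: the pattern of communication is itself intelligence.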

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike other forms of warfare, misdirection is likely to be a feature of a cyberattack. It’s possible to have damage being done but not know where it’s coming from. This is a significant difference: there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. In the previous paragraph, I said that warnings of an electronic Pearl Harbor are largely hyperbole today. That’s true, but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM

1933 Anti-Spam Doorbell

Here’s a great description of an anti-spam doorbell from 1933. A visitor had to deposit a dime into a slot to make the doorbell ring. If the homeowner appreciated the visit, he would return the dime. Otherwise, the dime became the cost of disturbing the homeowner.

This kind of system has been proposed for e-mail as well: the sender has to pay the receiver—or someone else in the system—a nominal amount for each e-mail sent. This money is returned if the e-mail is wanted, and forfeited if it is spam. The result would be to raise the cost of sending spam to the point where it is uneconomical.

I think it’s worth comparing the two systems—the doorbell system and the e-mail system—to demonstrate why it won’t work for spam.

The doorbell system fails for three reasons: the percentage of annoying visitors is small enough to make the system largely unnecessary, visitors don’t generally have dimes on them (presumably fixable if the system becomes ubiquitous), and it’s too easy to successfully bypass the system by knocking (not true for an apartment building).

The anti-spam system doesn’t suffer from the first two problems: spam is an enormous percentage of total e-mail, and an automated accounting system makes the financial mechanics easy. But the anti-spam system is too easy to bypass, and it’s too easy to hack. And once you set up a financial system, you’re simply inviting hacks.

The anti-spam system fails because spammers don’t have to send e-mail directly—they can take over innocent computers and send it from them. So it’s the people whose computers have been hacked into, victims in their own right, who will end up paying for spam. This risk can be limited by letting people put an upper limit on the money in their accounts, but it is still serious.

And criminals can exploit the system in the other direction, too. They could hack into innocent computers and have them send “spam” to e-mail addresses the criminals themselves control, collecting the money in the process.

Trying to impose some sort of economic penalty on unwanted e-mail is a good idea, but it won’t work unless the endpoints are trusted. And we’re nowhere near that trust today.
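The failure mode is easy to simulate. Here is a minimal sketch of a sender-pays escrow (the class, account names, and amounts are all invented for illustration), showing how a hijacked machine shifts the cost of spam onto its innocent owner:

```python
class EmailBank:
    """Toy sender-pays escrow: a deposit (in cents) rides with each
    message and is refunded only if the recipient marks it as wanted."""

    DEPOSIT_CENTS = 5

    def __init__(self):
        self.balances = {}  # account -> balance in cents

    def open_account(self, owner, cents):
        self.balances[owner] = cents

    def send(self, sender, wanted):
        # The deposit comes out of the *sending* account -- which is the
        # owner's account even when a spammer controls the machine.
        self.balances[sender] -= self.DEPOSIT_CENTS
        if wanted:
            self.balances[sender] += self.DEPOSIT_CENTS  # refunded


bank = EmailBank()
bank.open_account("victim-pc", 1000)  # the owner funds $10.00

# A spammer who has hijacked victim-pc sends 100 unwanted messages.
for _ in range(100):
    bank.send("victim-pc", wanted=False)

print(bank.balances["victim-pc"])  # 500 -- the hijacked owner paid $5.00
```

The economics work exactly as designed; they just punish the wrong party. Capping account balances limits the damage, as noted above, but doesn’t change who pays.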

Posted on May 10, 2007 at 5:57 AM

Weird Lottery Hack

This is a weird story:

On January 4, 2005 Dr Lee and Ms Day presented their Lotto ticket at the World Square Newsagency Bookshop. A friend took their photo with the ticket before they handed it in and filled in a claim form.

After the transaction, the employee who had served them, Chrishartato Ongkoputra, known as Chris Ong, substituted their claim form for one of his own. He then sent his form, and their winning ticket, to NSW Lotteries.

“The stars really aligned for him,” said the barrister James Stevenson, SC, who is representing newsagents Michael Pavellis and his partner Sheila Urech-Tan.

Mr Ong knew that NSW Lotteries would not pay out for 14 days. He told his boss he was having visa problems and needed to return temporarily to Indonesia. He gambled that the backpackers would not chase up their win until after he had left the country.

Gutsy.

Posted on May 7, 2007 at 11:07 AM

Attackers Exploiting Security Procedures

In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block.

I’ve written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my “five-step process” from Beyond Fear (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common.

Moral: you can’t just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.

Posted on April 30, 2007 at 12:27 PM

Top 10 Internet Crimes of 2006

According to the Internet Crime Complaint Center and reported in U.S. News and World Report, auction fraud and non-delivery of items purchased are far and away the most common Internet crimes. Identity theft is way down near the bottom.

Although the number of complaints last year (207,492) fell by 10 percent, the overall losses hit a record $198 million. By far the most reported crime: Internet auction fraud, garnering 45 percent of all complaints. Also big was nondelivery of merchandise or payment, which notched second at 19 percent. The biggest money losers: those omnipresent Nigerian scam letters, which fleeced victims of $5,100 on average, followed by check fraud at $3,744 and investment fraud at $2,694.

[…]

The feds caution that these figures don’t represent a scientific sample of just how much Net crime is out there. They note, for example, that the high number of auction fraud complaints is due, in part, to eBay and other big E-commerce outfits offering customers direct links to the IC3 website. And it’s tough to measure what may be the Web’s biggest scourge, child porn, simply by complaints. Still, the survey is a useful snapshot, even if it tells us what we already know: that the Internet, like the rest of life, is full of bad guys. Caveat emptor.

Posted on April 24, 2007 at 12:25 PM

Cameras "Predict" Crimes

New developments from surveillance-camera-happy England:

The £7,000 device, nicknamed “the Bug”, consists of a ring of eight cameras scanning in all directions. It uses software to detect whether anybody is walking or loitering in a way that marks them out from the crowd. A ninth camera then zooms in to follow them if it thinks they are behaving suspiciously.

[…]

“The camera picks up on unusual movement, zooms in on someone and gathers evidence from a face and clothing, acting as a 24-hour operator without someone having to be there,” said Jason Butler, head of CCTV at Luton borough council. “We have kids with Asbos telling us they hate the thing because it follows them wherever they go.”

This is interesting. It moves us further along the continuum toward thoughtcrime, but as near as I can tell, the system just collects evidence on people it thinks are suspicious, just in case. Assuming the data is erased immediately afterward, it’s much less invasive than actually accosting someone for thoughtcrime; the cost of false alarms is minimal.

I doubt it works nearly as well as the article claims, but that’s likely to change in 5 to 10 years. For example, there’s a lot of research being done in the area of microfacial expressions to detect lying and other thoughts. This is the sort of technological advance that we need to be talking about in terms of security, privacy, and liberty.
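For a sense of what “detecting loitering” might mean in practice, here is a toy sketch of one plausible heuristic (the thresholds, track format, and example tracks are all invented, not anything from the actual system): flag anyone whose recent movement stays within a small radius of where they started.

```python
import math

# Invented thresholds for illustration.
LOITER_SECONDS = 60   # how long someone must linger
LOITER_RADIUS = 3.0   # meters from their starting point

def is_loitering(track):
    """Flag a track -- a list of (seconds, x, y) positions from the
    camera's tracker -- whose last LOITER_SECONDS of movement stays
    within LOITER_RADIUS of its position at the start of that window."""
    end_t = track[-1][0]
    window = [(t, x, y) for t, x, y in track if t >= end_t - LOITER_SECONDS]
    if window[-1][0] - window[0][0] < LOITER_SECONDS:
        return False  # not observed long enough to judge
    _, x0, y0 = window[0]
    return all(math.hypot(x - x0, y - y0) <= LOITER_RADIUS
               for _, x, y in window)

# A pedestrian who keeps moving, and one who stays put for two minutes.
walker = [(t, 1.5 * t, 0.0) for t in range(0, 120, 5)]
lurker = [(t, 0.1 * math.sin(t), 0.0) for t in range(0, 120, 5)]

print(is_loitering(walker), is_loitering(lurker))  # False True
```

Even this crude version illustrates the false-alarm problem: anyone waiting for a bus inside the radius for a minute gets flagged, which is why what happens to the collected footage matters more than the detection itself.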

Posted on April 19, 2007 at 6:20 AM

