Entries Tagged "law enforcement"


The Problems with Unscientific Security

From the Open Access Journal of Forensic Psychology, by a whole list of authors: “A Call for Evidence-Based Security Tools”:

Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.

We recommend two important changes to improve the (cost) effectiveness of security policy. To begin with, the emphasis of deception research should shift from technological to behavioural sciences. Secondly, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.

One excerpt:

In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one’s preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens’ safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
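The selective-feedback effect the authors describe is easy to demonstrate numerically. Here's a toy simulation (the rates are invented for illustration, not taken from the paper) of a "detector" that performs at chance level. Because the only feedback investigators ever receive is confessions from correctly flagged liars, every case they learn about is a hit, and the tool looks perfect:

```python
import random

random.seed(42)

N = 10_000
LIAR_RATE = 0.3        # fraction of subjects who actually lie (assumed)
CONFESSION_RATE = 0.4  # chance a correctly flagged liar confesses (assumed)

correct = 0
feedback_total = 0
feedback_correct = 0

for _ in range(N):
    is_liar = random.random() < LIAR_RATE
    flagged = random.random() < 0.5   # chance-level "detector": a coin flip

    if flagged == is_liar:
        correct += 1

    # Selective feedback: investigators only learn the truth when a
    # flagged liar confesses. Misses and false alarms stay invisible.
    if flagged and is_liar and random.random() < CONFESSION_RATE:
        feedback_total += 1
        feedback_correct += 1

true_accuracy = correct / N                    # ~0.5: no better than chance
perceived = feedback_correct / feedback_total  # every observed case is a "hit"
print(f"true accuracy: {true_accuracy:.2f}, perceived accuracy: {perceived:.2f}")
```

Without systematic ground truth — feedback on the liars judged truthful — the field data can't distinguish this coin flip from a working tool, which is exactly the paper's point.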

Article on the paper.

Posted on November 5, 2009 at 6:11 AM

The FBI and Wiretaps

To aid its Wall Street investigations, the FBI is using DCSNet, its massive surveillance system.

Prosecutors are using the FBI’s massive surveillance system, DCSNet, which stands for Digital Collection System Network. According to Wired magazine, this system connects FBI wiretapping rooms to switches controlled by traditional land-line operators, internet-telephony providers and cellular companies. It can be used to instantly wiretap almost any communications device in the U.S.—wireless or tethered. In other words, you and I have no privacy. The government can listen in on any call made in the continental U.S. (This is all well and good if you trust every government employee. But what if an attorney general running for higher office will do anything to finger a high-profile target? Or what if a prosecutor has a personal grudge he’d like to fulfill? It seems to me it would be easy for this power to fall into the wrong hands.)

Posted on November 2, 2009 at 8:57 AM

Helpful Hint for Fugitives: Don't Update Your Location on Facebook

Fugitive caught after updating his status on Facebook.

Investigators scoured social networking sites such as Facebook and MySpace but initially could find no trace of him and were unable to pin down his location in Mexico.

Several months later, a Secret Service agent, Seth Reeg, checked Facebook again and up popped Maxi Sopo. His photo showed him partying in front of a backdrop featuring logos of BMW and Courvoisier cognac, sporting a black jacket adorned with a not-so-subtle white lion.

Although Sopo’s profile was set to private, his list of friends was not. Scoville started combing through it and was surprised to see that one friend listed an affiliation with the justice department. He sent a message requesting a phone call.

“We figured this was a person we could probably trust to keep our inquiry discreet,” Scoville said.

Proving the 2.0 adage that a friend on Facebook is rarely a friend indeed, the former official said he had met Sopo in Cancun’s nightclubs a few times, but did not really know him and had no idea he was a fugitive. The official learned where Sopo was living and passed that information back to Scoville, who provided it to Mexican authorities. They arrested Sopo last month.

It’s easy to say “so dumb,” and it would be true, but what’s interesting is how people just don’t think through the privacy implications of putting their information on the Internet. Facebook is how we interact with friends, and we think of it in the frame of interacting with friends. We don’t think that our employers might be looking—they’re not our friends!—that the information will be around forever, or that it might be abused. Privacy isn’t salient; chatting with friends is.

Posted on October 19, 2009 at 7:55 AM

Computer-Assisted Witness Identification

Witnesses are much more accurate at identifying criminals when computers, rather than police officers, guide the identification process.

A major cause of miscarriages of justice could be avoided if computers, rather than detectives, guided witnesses through the identification of suspects. That’s according to Brent Daugherty at the University of North Carolina in Charlotte and colleagues, who say that too often officers influence witnesses’ choices.

The problem was highlighted in 2003 when the Innocence Project in New York analysed the case histories of 130 wrongly imprisoned people later freed by DNA evidence. Mistaken eyewitness identification was a factor in 77 per cent of the cases examined.

Makes sense to me.
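The fix the researchers propose is essentially procedural, and simple enough to sketch. This function (the names and interface are my own invention, not from the study) captures the core idea of a computer-administered sequential lineup: the program randomizes presentation order and records answers without reacting, removing the channel through which an officer could cue the witness:

```python
import random

def run_lineup(photo_ids, ask_witness):
    """Administer a sequential lineup with no human in the loop.

    Photos are shown one at a time in random order; the witness gives a
    yes/no answer for each, and the program neither hints nor reacts --
    the officer influence the researchers want to eliminate.
    """
    order = list(photo_ids)
    random.shuffle(order)          # presentation order is randomized
    for photo in order:
        if ask_witness(photo):
            return photo           # first positive identification
    return None                    # witness identified no one

# Example: a witness who recognizes photo "B" and no one else.
picked = run_lineup(["A", "B", "C", "D"], lambda p: p == "B")
```

A detective running the same procedure can leak expectations through tone, pauses, or follow-up questions; software can't.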

Posted on October 7, 2009 at 7:12 AM

Subpoenas as a Security Threat

Blog post from Ed Felten:

Usually when the threat model mentions subpoenas, the bigger threats in reality come from malicious intruders or insiders. The biggest risk in storing my documents on CloudCorp’s servers is probably that somebody working at CloudCorp, or a contractor hired by them, will mess up or misbehave.

So why talk about subpoenas rather than intruders or insiders? Perhaps this kind of talk is more diplomatic than the alternative. If I’m talking about the risks of Gmail, I might prefer not to point out that my friends at Google could hire someone who is less than diligent, or less than honest. If I talk about subpoenas as the threat, nobody in the room is offended, and the security measures I recommend might still be useful against intruders and insiders. It’s more polite to talk about data losses that are compelled by a mysterious, powerful Other—in this case an Anonymous Lawyer.

Politeness aside, overemphasizing subpoena threats can be harmful in at least two ways. First, we can easily forget that enforcement of subpoenas is often, though not always, in society’s interest. Our legal system works better when fact-finders have access to a broader range of truthful evidence. That’s why we have subpoenas in the first place. Not all subpoenas are good—and in some places with corrupt or evil legal systems, subpoenas deserve no legitimacy at all—but we mustn’t lose sight of society’s desire to balance the very real cost imposed on the subpoena’s target and affected third parties, against the usefulness of the resulting evidence in administering justice.

The second harm is to security. To the extent that we focus on the subpoena threat, rather than the larger threats of intruders and insiders, we risk finding “solutions” that fail to solve our biggest problems. We might get lucky and end up with a solution that happens to address the bigger threats too. We might even design a solution for the bigger threats, and simply use subpoenas as a rhetorical device in explaining our solution—though it seems risky to mislead our audience about our motivations. If our solution flows from our threat model, as it should, then we need to be very careful to get our threat model right.

Posted on September 4, 2009 at 6:18 AM

Matthew Weigman

Fascinating story of a 16-year-old blind phone phreaker.

One afternoon, not long after Proulx was swatted, Weigman came home to find his mother talking to what sounded like a middle-aged male. The man introduced himself as Special Agent Allyn Lynd of the FBI’s cyber squad in Dallas, which investigates hacking and other computer crimes. A West Point grad, Lynd had spent 10 years combating phreaks and hackers. Now, with Proulx’s cooperation, he was aiming to take down Stuart Rosoff and the Wrecking Crew—and he wanted Weigman’s help.

Lynd explained that Rosoff, Roberson and other party-liners were being investigated in a swatting conspiracy. Because Weigman was a minor, however, he would not be charged—as long as he cooperated with the authorities. Realizing that this was a chance to turn his life around, Weigman confessed his role in the phone assaults.

Weigman’s auditory skills had always been central to his exploits, the means by which he manipulated the phone system. Now he gave Lynd a first-hand display of his powers. At one point during the visit, Lynd’s cellphone rang. “I can’t talk to you right now,” the agent told the caller. “I’m out doing something.” When he hung up, Weigman turned to him from across the room. “Oh,” the kid asked, “is that Billy Smith from Verizon?”

Lynd was stunned. William Smith was a fraud investigator with Verizon who had been working with him on the swatting case. Weigman not only knew all about the man and his role in the investigation, but he had identified Smith simply by hearing his Southern-accented voice on the cellphone—a sound which would have been inaudible to anyone else in the room. Weigman then shocked Lynd again, rattling off the names of a host of investigators working for other phone companies. Matt, it turned out, had spent weeks identifying phone-company employees, gaining their trust and obtaining confidential information about the FBI investigation against him. Even the phone account in his house, he revealed to Lynd, had been opened under the name of a telephone-company investigator. Lynd had rarely seen anything like it—even from cyber gangs who tried to hack into systems at the White House and the FBI. “Weigman flabbergasted me,” he later testified.

Posted on September 1, 2009 at 6:21 AM

Actual Security Theater

As part of their training, federal agents engage in mock exercises in public places. Sometimes, innocent civilians get involved.

Every day, as Washingtonians go about their overt lives, the FBI, CIA, Capitol Police, Secret Service and U.S. Marshals Service stage covert dramas in and around the capital where they train. Officials say the scenarios help agents and officers integrate the intellectual, physical and emotional aspects of classroom instruction. Most exercises are performed inside restricted compounds. But they also unfold in public parks, suburban golf clubs and downtown transit stations.

Curtain up on threat theater—a growing, clandestine art form. Joseph Persichini, Jr., assistant director of the FBI’s Washington field office, says, “What better way to adapt agents or analysts to cultural idiosyncrasies than role play?”

For the public, there are rare, startling peeks: At a Holiday Inn, a boy in water wings steps out of his seventh floor room into a stampede of federal agents; at a Bowie retirement home, an elderly woman panics as a role-player collapses, believing his seizure is real; at a county museum, a father sweeps his daughter into his arms, running for the exit, while a raving, bearded man resists arrest.

EDITED TO ADD (9/11): It happened in D.C., in the Potomac River, with the Coast Guard.

Posted on August 25, 2009 at 6:43 AM

Cybercrime Paper

“Distributed Security: A New Model of Law Enforcement,” Susan W. Brenner and Leo L. Clarke.

Abstract:
Cybercrime, which is rapidly increasing in frequency and in severity, requires us to rethink how we should enforce our criminal laws. The current model of reactive, police-based enforcement, with its origins in real-world urbanization, does not and cannot protect society from criminals using computer technology. This article proposes a new model of distributed security that can supplement the traditional model and allow us to deal effectively with cybercrime. The new model employs criminal sanctions, primarily fines, to induce computer users and those who provide access to cyberspace to employ reasonable security measures as deterrents. We argue that criminal sanctions are preferable in this context to civil liability, and we suggest a system of administrative regulation backed by criminal sanctions that will provide the incentives necessary to create a workable deterrent to cybercrime.

It’s from 2005, but I’ve never seen it before.

Posted on July 20, 2009 at 6:43 AM

