Entries Tagged "laws"


Scaring the Senate Intelligence Committee

This is unconscionable:

At Tuesday’s hearing, Senator Dianne Feinstein, Democrat of California and chairwoman of the Senate Intelligence Committee, asked Mr. Blair [the Director of National Intelligence] to assess the possibility of an attempted attack in the United States in the next three to six months.

He replied, “The priority is certain, I would say”—a response that was reaffirmed by the top officials of the C.I.A. and the F.B.I.

I don’t know what “the priority is certain” actually means, but now everyone is reporting that these agencies claim there will be a terrorist attack in the U.S. during the next six months.

Posted on February 5, 2010 at 11:59 AM

Online Credit/Debit Card Security Failure

Ross Anderson reports:

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Posted on February 1, 2010 at 6:26 AM

Decertifying "Terrorist" Pilots

This article reads like something written by the company’s PR team.

When it comes to sleuthing these days, knowing your way within a database is as valued a skill as the classic, Sherlock Holmes-styled powers of detection.

Safe Banking Systems Software proved this very point in a demonstration of its algorithm acumen—one that resulted in a disclosure that convicted terrorists actually maintained working licenses with the U.S. Federal Aviation Administration.

The algorithm seems to be little more than matching up names and other basic info:

It used its algorithm-detection software to sift out uncommon names such as Abdelbaset Ali Elmegrahi, aka the Lockerbie bomber. It found that a number of licensed airmen all had the same P.O. box as their listed address—one that happened to be in Tripoli, Libya. These men all had working FAA certificates. And while the FAA database information investigated didn’t contain date-of-birth information, Safe Banking was able to use content on the FAA Website to determine these key details as well, to further gain a positive and clear identification of the men in question.
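The matching described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Safe Banking Systems' actual code; apart from the Elmegrahi name quoted above, all names and addresses here are made up:

```python
# Sketch of watch-list matching: flag certificate records whose
# normalized name appears on a watch list, then flag any other
# records that share a mailing address with a flagged record.

watch_list = {"abdelbaset ali elmegrahi"}

faa_records = [
    {"name": "Abdelbaset Ali Elmegrahi", "address": "PO Box 312, Tripoli"},
    {"name": "John Smith", "address": "PO Box 312, Tripoli"},
    {"name": "Jane Doe", "address": "12 Main St, Omaha"},
]

def normalize(name):
    # Lowercase and collapse whitespace so trivial formatting
    # differences don't defeat the match.
    return " ".join(name.lower().split())

# First pass: direct name matches against the watch list.
flagged = [r for r in faa_records if normalize(r["name"]) in watch_list]

# Second pass: records sharing an address with a flagged record.
addresses = {r["address"] for r in flagged}
also_flagged = [r for r in faa_records
                if r["address"] in addresses and r not in flagged]
```

Run against the toy data, the first pass flags the watch-listed name and the second pass flags the other record at the same Tripoli P.O. box, which is essentially the chain of inference the article describes.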

In any case, they found these three people with pilot’s licenses:

Elmegrahi, who had been posted on the FBI Most Wanted list for a decade and was convicted of blowing up Pan Am Flight 103, killing 259 people in 1988 over Lockerbie, Scotland. Elmegrahi was an FAA-certified aircraft dispatcher.

Re Tabib, a California resident who was convicted in 2007 for illegally exporting U.S. military aircraft parts—specifically export maintenance kits for F-14 fighter jets—to Iran. Tabib received three FAA licenses after his conviction, qualifying to be a flight instructor, ground instructor and transport pilot.

Myron Tereshchuk, who pleaded guilty to possession of a biological weapon after the FBI caught him with a brew of ricin, explosive powder and other essentials in Maryland in 2004. Tereshchuk was a licensed mechanic and student pilot.

And the article concludes with:

Suffice to say, after the FAA was made aware of these criminal histories, all three men have since been decertified.

Although I’m all for annoying international arms dealers, does anyone know the procedures for FAA decertification? Did the FAA have the legal right to do this, after being “made aware” of some information by a third party?

Of course, they don’t talk about all the false positives their system also found. How many innocents were also decertified? And they don’t mention the fact that, in the 9/11 attacks, FAA certification wasn’t really an issue. “Excuse me, young man. You can’t hijack and fly this aircraft. It says right here that the FAA decertified you.”

Posted on November 23, 2009 at 2:36 PM

FBI/CIA/NSA Information Sharing Before 9/11

It’s conventional wisdom that the legal “wall” between intelligence and law enforcement was one of the reasons we failed to prevent 9/11. The 9/11 Commission evaluated that claim, and published a classified report in 2004. The report was released, with a few redactions, over the summer: “Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations,” 9/11 Commission Staff Monograph by Barbara A. Grewe, Senior Counsel for Special Projects, August 20, 2004.

The report concludes otherwise:

“The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand,” the 35-page monograph concludes. “Simply put, there was no legal reason why the information could not have been shared.”

The prevailing confusion was exacerbated by numerous complicating circumstances, the monograph explains. The Foreign Intelligence Surveillance Court was growing impatient with the FBI because of repeated errors in applications for surveillance. Justice Department officials were uncomfortable requesting intelligence surveillance of persons and facilities related to Osama bin Laden since there was already a criminal investigation against bin Laden underway, which normally would have preempted FISA surveillance. Officials were reluctant to turn to the FISA Court of Review for clarification of their concerns since one of the judges on the court had expressed doubts about the constitutionality of FISA in the first place. And so on. Although not mentioned in the monograph, it probably didn’t help that public interest critics in the 1990s (myself included) were accusing the FISA Court of serving as a “rubber stamp” and indiscriminately approving requests for intelligence surveillance.

In the end, the monograph implicitly suggests that if the law was not the problem, then changing the law may not be the solution.

James Bamford comes to much the same conclusion in his book, The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America: there was no legal wall that prevented intelligence and law enforcement from sharing the information necessary to prevent 9/11; it was inter-agency rivalries and turf battles.

Posted on November 12, 2009 at 2:26 PM

The Problem of Vague Laws

The average American commits three felonies a day: the title of a new book by Harvey Silverglate. More specifically, the problem is the intersection of vague laws and fast-moving technology:

Technology moves so quickly we can barely keep up, and our legal system moves so slowly it can’t keep up with itself. By design, the law is built up over time by court decisions, statutes and regulations. Sometimes even criminal laws are left vague, to be defined case by case. Technology exacerbates the problem of laws so open and vague that they are hard to abide by, to the point that we have all become potential criminals.

Boston civil-liberties lawyer Harvey Silverglate calls his new book “Three Felonies a Day,” referring to the number of crimes he estimates the average American now unwittingly commits because of vague laws. New technology adds its own complexity, making innocent activity potentially criminal.

[…]

In 2001, a man named Bradford Councilman was charged in Massachusetts with violating the wiretap laws. He worked at a company that offered an online book-listing service and also acted as an Internet service provider to book dealers. As an ISP, the company routinely intercepted and copied emails as part of the process of shuttling them through the Web to recipients.

The federal wiretap laws, Mr. Silverglate writes, were “written before the dawn of the Internet, often amended, not always clear, and frequently lagging behind the whipcrack speed of technological change.” Prosecutors chose to interpret the ISP role of momentarily copying messages as they made their way through the system as akin to impermissibly listening in on communications. The case went through several rounds of litigation, with no judge making the obvious point that this is how ISPs operate. After six years, a jury found Mr. Councilman not guilty.

Other misunderstandings of the Web criminalize the exercise of First Amendment rights. A Saudi student in Idaho was charged in 2003 with offering “material support” to terrorists. He had operated Web sites for a Muslim charity that focused on normal religious training, but was prosecuted on the theory that if a user followed enough links off his site, he would find violent, anti-American comments on other sites. The Internet is a series of links, so if there’s liability for anything in an online chain, it would be hard to avoid prosecution.

EDITED TO ADD (10/12): Audio interview with Harvey Silverglate.

Posted on September 29, 2009 at 1:08 PM

The "Hidden Cost" of Privacy

Forbes ran an article talking about the “hidden” cost of privacy. Basically, the point was that privacy regulations are expensive to comply with, and a lot of that expense gets eaten up by the mechanisms of compliance and doesn’t go toward improving anyone’s actual privacy. This is a valid point, and one that I make in talks about privacy all the time. It’s particularly bad in the United States, because we have a patchwork of different privacy laws covering different types of information and different situations and not a single comprehensive privacy law.

The meta-problem is simple to describe: those entrusted with our privacy often don’t have much incentive to respect it. Examples include: credit bureaus such as TransUnion and Experian, who don’t have any business relationship at all with the people whose data they collect and sell; companies such as Google who give away services—and collect personal data as a part of that—as an incentive to view ads, and make money by selling those ads to other companies; medical insurance companies, who are chosen by a person’s employer; and computer software vendors, who can have monopoly powers over the market. Even worse, it can be impossible to connect an effect of a privacy violation with the violation itself—if someone opens a bank account in your name, how do you know who was to blame for the privacy violation?—so even when there is a business relationship, there’s no clear cause-and-effect relationship.

What this all means is that protecting individual privacy remains an externality for many companies, and that basic market dynamics won’t work to solve the problem. Because the efficient market solution won’t work, we’re left with inefficient regulatory solutions. So now the question becomes: how do we make regulation as efficient as possible? I have some suggestions:

  1. Broad privacy regulations are better than narrow ones.
  2. Simple and clear regulations are better than complex and confusing ones.
  3. It’s far better to regulate results than methodology.
  4. Penalties for bad behavior need to be expensive enough to make good behavior the rational choice.

We’ll never get rid of the inefficiencies of regulation—that’s the nature of the beast, and why regulation only makes sense when the market fails—but we can reduce them.

Posted on June 15, 2009 at 6:45 AM

Fear of Aerial Images

Time for some more fear about terrorists using maps and images on the Internet.

But the more striking images come when Portzline clicks on the “bird’s-eye” option offered by the map service. The overhead views, which come chiefly from satellites, are replaced with strikingly clear oblique-angle photos, chiefly shot from aircraft. By clicking another button, he can see the same building from all four sides.

“What we’re seeing here is a guard shack,” Portzline said, pointing to a rooftop structure. “This is a communications device for the nuclear plant.”

He added, “This particular building is the air intake for the control room. And there’s some nasty thing you could do to disable the people in the control room. So this type of information should not be available. I look at this and just say, ‘Wow.’ ”

Terror expert and author Brian Jenkins agreed that the pictures are “extraordinarily impressive.”

“If I were a terrorist planning an attack, I would want that imagery. That would facilitate that mission,” he said. “And given the choice between renting an airplane or trying some other way to get it, versus tapping in some things on my computer, I certainly want to do the latter. (It will) reduce my risk, and the first they’re going to know about my attack is when it takes place.”

Gadzooks, people, enough with the movie plots.

Joel Anderson, a member of the California Assembly, has more expansive goals. He has introduced a bill in the state Legislature that would prohibit “virtual globe” services from providing unblurred pictures of schools, churches and government or medical facilities in California. It also would prohibit those services from providing street-view photos of those buildings.

“It struck me that a person in a tent halfway around the world could target an attack like that with a laptop computer,” said Anderson, a Republican legislator who represents San Diego’s East County. Anderson said he doesn’t want to limit technology, but added, “There’s got to be some common sense.”

I wonder why he thinks that “schools, churches and government or medical facilities” are terrorist targets worth protecting, and movie theaters, stadiums, concert halls, restaurants, train stations, shopping malls, Toys-R-Us stores on the day after Thanksgiving, and theme parks are not. After all, “there’s got to be some common sense.”

Now, both have launched efforts to try to get Internet map services to remove or blur images of sensitive sites, saying the same technology that allows people to see a neighbor’s swimming pool can be used by terrorists to choose targets and plan attacks.

Yes, and the same technology that allows people to call their friends can be used by terrorists to choose targets and plan attacks. And the same technology that allows people to commute to work can be used by terrorists to plan and execute attacks. And the same technology that allows you to read this blog post…repeat until tired.

Of course, this is nothing I haven’t said before:

Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are—by and large—small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response—just as we would if we banned cars because bank robbers used them too.

You’re not going to stop terrorism by deliberately degrading our infrastructure. Refuse to be terrorized, everyone.

Posted on June 8, 2009 at 6:15 AM

IEDs Are Now Weapons of Mass Destruction

In an article on the recent arrests in New York:

On Wednesday night, they planted one of the mock improvised explosive devices in a trunk of a car outside the temple and two mock bombs in the back seat of a car outside the Jewish center, the authorities said. Shortly thereafter, police officers swooped in and broke the windows on the suspects’ black sport utility vehicle and charged them with conspiracy to use weapons of mass destruction within the United States and conspiracy to acquire and use antiaircraft missiles.

I’ve covered this before. According to the law, almost any weapon is a weapon of mass destruction.

From the complaint:

… knowingly did combine, conspire, confederate and agree together and with each other to use a weapon of mass destruction, to wit, a surface-to-air guided missile system and an improvised explosive device (“IED”) containing over 30 pounds of Composition 4 (“C-4”) military grade plastic explosive material against persons and property within the United States.

Posted on May 21, 2009 at 3:54 PM

Software Problems with a Breath Alcohol Detector

This is an excellent lesson in the security problems inherent in trusting proprietary software:

After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.

Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.

3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

4. Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

Basically, the system was designed to return some sort of result regardless.
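The first two flaws are easy to reproduce. Here's a minimal sketch (with made-up reading values, not Draeger's actual code or data) of the averaging scheme and the quantization described above:

```python
def running_average(readings):
    # "Average of averages" as described in item 2: average the
    # first two readings, then fold each later reading into the
    # running average, halving everything that came before.
    avg = (readings[0] + readings[1]) / 2.0
    for r in readings[2:]:
        avg = (avg + r) / 2.0
    return avg

# The scheme does not compute a true arithmetic mean: a reading's
# weight depends on its position in the sequence, so identical data
# in a different order can yield a different "average."
readings = [1.0, 0.0, 0.0, 0.0]
print(running_average(readings))      # 0.125
print(sum(readings) / len(readings))  # 0.25 (the true mean)

# Item 3: dividing a 12-bit A/D value (0..4095) by 256 leaves only
# 16 distinct result values; the extra halving in the IR path
# leaves only 8.
fuel_cell_values = {raw // 256 for raw in range(4096)}
ir_values = {(raw // 256) // 2 for raw in range(4096)}
print(len(fuel_cell_values), len(ir_values))  # 16 8
```

Eight bits of the converter's twelve are simply discarded before the comparison is made, which is exactly the loss of precision the analysis complains about.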

This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. “You can’t look at our code because we don’t want you to” simply isn’t good enough.

Posted on May 13, 2009 at 2:07 PM

