Entries Tagged "laws"

Page 25 of 35

Securing Wireless Networks with Stickers

Does anyone think this California almost-law (it’s awaiting the governor’s signature) will do any good at all?

From 1 October 2007, manufacturers must place warning labels on all equipment capable of receiving Wi-Fi signals, according to the new state law. These can take the form of box stickers, special notification in setup software, notification during the router setup, or through automatic securing of the connection. One warning sticker must be positioned so that it must be removed by a consumer before the product can be used.

Posted on September 5, 2006 at 1:56 PM

Terrorists as Pirates

“The Dread Pirate Bin Laden” argues that, legally, terrorists should be treated as pirates under international law:

More than 2,000 years ago, Marcus Tullius Cicero defined pirates in Roman law as hostis humani generis, “enemies of the human race.” From that day until now, pirates have held a unique status in the law as international criminals subject to universal jurisdiction—meaning that they may be captured wherever they are found, by any person who finds them. The ongoing war against pirates is the only known example of state vs. nonstate conflict until the advent of the war on terror, and its history is long and notable. More important, there are enormous potential benefits of applying this legal definition to contemporary terrorism.

[…]

President Bush and others persist in depicting this new form of state vs. nonstate warfare in traditional terms, as with the president’s declaration of June 2, 2004, that “like the Second World War, our present conflict began with a ruthless surprise attack on the United States.” He went on: “We will not forget that treachery and we will accept nothing less than victory over the enemy.” What constitutes ultimate victory against an enemy that lacks territorial boundaries and governmental structures, in a war without fields of battle or codes of conduct? We can’t capture the enemy’s capital and hoist our flag in triumph. The possibility of perpetual embattlement looms before us.

If the war on terror becomes akin to war against the pirates, however, the situation would change. First, the crime of terrorism would be defined and proscribed internationally, and terrorists would be properly understood as enemies of all states. This legal status carries significant advantages, chief among them the possibility of universal jurisdiction. Terrorists, as hostis humani generis, could be captured wherever they were found, by anyone who found them. Pirates are currently the only form of criminals subject to this special jurisdiction.

Second, this definition would deter states from harboring terrorists on the grounds that they are “freedom fighters” by providing an objective distinction in law between legitimate insurgency and outright terrorism. This same objective definition could, conversely, also deter states from cracking down on political dissidents as “terrorists,” as both Russia and China have done against their dissidents.

Recall the U.N. definition of piracy as acts of “depredation [committed] for private ends.” Just as international piracy is viewed as transcending domestic criminal law, so too must the crime of international terrorism be defined as distinct from domestic homicide or, alternately, revolutionary activities. If a group directs its attacks on military or civilian targets within its own state, it may still fall within domestic criminal law. Yet once it directs those attacks on property or civilians belonging to another state, it exceeds both domestic law and the traditional right of self-determination, and becomes akin to a pirate band.

Third, and perhaps most important, nations that now balk at assisting the United States in the war on terror might have fewer reservations if terrorism were defined as an international crime that could be prosecuted before the International Criminal Court.

Ross Anderson recognized the parallels between terrorism and piracy back in 2001.

Posted on August 30, 2006 at 7:57 AM

Broadening CALEA

In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA). Basically, this is the law that forces the phone companies to make your telephone calls—including cell phone calls—available for government wiretapping.

But now the government wants access to VoIP calls, and SMS messages, and everything else. They’re doing their best to interpret CALEA as broadly as possible, but they’re also pursuing a legal angle. Ars Technica has the story:

The government hopes to shore up the legal basis for the program by passing amended legislation. The EFF took a look at the amendments and didn’t like what it found.

According to the Administration, the proposal would “confirm [CALEA’s] coverage of push-to-talk, short message service, voice mail service and other communications services offered on a commercial basis to the public,” along with “confirm[ing] CALEA’s application to providers of broadband Internet access, and certain types of ‘Voice-Over-Internet-Protocol’ (VOIP).” Many of CALEA’s express exceptions and limitations are also removed. Most importantly, while CALEA’s applicability currently depends on whether broadband and VOIP can be considered “substantial replacements” for existing telephone services, the new proposal would remove this limit.

Posted on July 28, 2006 at 11:09 AM

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
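The perverse incentive the quote describes can be made concrete with a toy sketch. Everything below (the function names, the timing-based trigger) is invented for illustration and is not from the linked essay: the point is only that a defect driven by hidden, time-varying state cannot be demonstrated by replaying the input, while a deterministic defect can.

```python
import time

def parse_deterministic(data):
    """A reproducibly failing function: it crashes on the same input
    every time, so the defect can be demonstrated from the input alone."""
    if data == "":
        raise ValueError("empty input")
    return data.upper()

def parse_unreproducible(data, hidden_state=None):
    """An unreproducibly failing function: the crash depends on hidden,
    time-varying state rather than on the input, so replaying the same
    input in court proves nothing about the defect."""
    if hidden_state is None:
        hidden_state = time.perf_counter_ns()  # varies on every call
    if hidden_state % 7919 == 0:  # rare, state-dependent trigger
        raise RuntimeError("corrupted internal state")
    return data.upper()

# The same input almost always succeeds; only a specific hidden state,
# which the user has no way to capture or replay, causes the crash.
print(parse_unreproducible("hello", hidden_state=1))  # prints HELLO
```

A hypothetical plaintiff can hand `""` to `parse_deterministic` and crash it on demand; against `parse_unreproducible`, the same input yields no evidence, which is exactly the liability shield the quote warns about.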

Posted on July 11, 2006 at 7:47 AM

Congress Learns How Little Privacy We Have

Reuters story:

Almost every piece of personal information that Americans try to keep secret—including bank account statements, e-mail messages and telephone records—is semi-public and available for sale.

That was the lesson Congress learned over the last week during a series of hearings aimed at exposing peddlers of personal data, from whom banks, car dealers, jealous lovers and even some law enforcement officers have covertly purchased information to use as they wish.

And:

The committee subpoenaed representatives from 11 companies that use the Internet and phone calls to obtain, market, and sell personal data, but they refused to talk.

All invoked their constitutional right to not incriminate themselves when asked whether they sold “personal, non-public information” that had been obtained by lying or impersonating someone.

Posted on June 28, 2006 at 7:39 AM

Privacy as Contextual Integrity

Interesting law review article by Helen Nissenbaum:

Abstract: The practices of public surveillance, which include the monitoring of individuals in public through a variety of media (e.g., video, data, online), are among the least understood and controversial challenges to privacy in an age of information technologies. The fragmentary nature of privacy policy in the United States reflects not only the oppositional pulls of diverse vested interests, but also the ambivalence of unsettled intuitions on mundane phenomena such as shopper cards, closed-circuit television, and biometrics. This Article, which extends earlier work on the problem of privacy in public, explains why some of the prominent theoretical approaches to privacy, which were developed over time to meet traditional privacy challenges, yield unsatisfactory conclusions in the case of public surveillance. It posits a new construct, ‘contextual integrity’ as an alternative benchmark for privacy, to capture the nature of challenges posed by information technologies. Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it. Building on the idea of ‘spheres of justice’ developed by political philosopher Michael Walzer, this Article argues that public surveillance violates a right to privacy because it violates contextual integrity; as such, it constitutes injustice and even tyranny.

Posted on June 9, 2006 at 7:11 AM

Lying to Government Agents

“How to Avoid Going to Jail under 18 U.S.C. Section 1001 for Lying to Government Agents”

Title 18, United States Code, Section 1001 makes it a crime to: 1) knowingly and willfully; 2) make any materially false, fictitious or fraudulent statement or representation; 3) in any matter within the jurisdiction of the executive, legislative or judicial branch of the United States. Your lie does not even have to be made directly to an employee of the national government as long as it is “within the jurisdiction” of the ever expanding federal bureaucracy. Though the falsehood must be “material” this requirement is met if the statement has the “natural tendency to influence or [is] capable of influencing, the decision of the decisionmaking body to which it is addressed.” United States v. Gaudin, 515 U.S. 506, 510 (1995). (In other words, it is not necessary to show that your particular lie ever really influenced anyone.) Although you must know that your statement is false at the time you make it in order to be guilty of this crime, you do not have to know that lying to the government is a crime or even that the matter you are lying about is “within the jurisdiction” of a government agency. United States v. Yermian, 468 U.S. 63, 69 (1984). For example, if you lie to your employer on your time and attendance records and, unbeknownst to you, he submits your records, along with those of other employees, to the federal government pursuant to some regulatory duty, you could be criminally liable.

Posted on June 5, 2006 at 1:24 PM

Dangers of Reporting a Computer Vulnerability

This essay makes the case that there is no way to safely report a computer vulnerability.

The first reason is that whenever you do something “unnecessary,” such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes it very uninteresting to report any problems.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.

Interesting essay, and interesting comments. And here’s an article on the essay.

Remember, full disclosure is the best tool we have to improve security. It’s an old argument, and I wrote about it way back in 2001. If people can’t report security vulnerabilities, then vendors won’t fix them.

EDITED TO ADD (5/26): Robert Lemos on “Ethics and the Eric McCarty Case.”

Posted on May 26, 2006 at 7:35 AM

Man Sues Compaq for False Advertising

Convicted felon Michael Crooker is suing Compaq (now HP) for false advertising. He bought a computer that was advertised as secure, but the FBI got his data anyway:

He bought it in September 2002, expressly because it had a feature called DriveLock, which freezes up the hard drive if you don’t have the proper password.

The computer’s manual claims that “if one were to lose his Master Password and his User Password, then the hard drive is useless and the data cannot be resurrected even by Compaq’s headquarters staff,” Crooker wrote in the suit.

Crooker has a copy of an ATF search warrant for files on the computer, which includes a handwritten notation: “Computer lock not able to be broken/disabled. Computer forwarded to FBI lab.” Crooker says he refused to give investigators the password, and was told the computer would be broken into “through a backdoor provided by Compaq,” which is now part of HP.

It’s unclear what was done with the laptop, but Crooker says a subsequent search warrant for his e-mail account, issued in January 2005, showed investigators had somehow gained access to his 40 gigabyte hard drive. The FBI had broken through DriveLock and accessed his e-mails (both deleted and not) as well as lists of websites he’d visited and other information. The only files they couldn’t read were ones he’d encrypted using Wexcrypt, a software program freely available on the Internet.

I think this is great. It’s about time that computer companies were held liable for their advertising claims.

But his lawsuit against HP may be a long shot. Crooker appears to face strong counterarguments to his claim that HP is guilty of breach of contract, especially if the FBI made the company provide a backdoor.

“If they had a warrant, then I don’t see how his case has any merit at all,” said Steven Certilman, a Stamford attorney who heads the Technology Law section of the Connecticut Bar Association. “Whatever means they used, if it’s covered by the warrant, it’s legitimate.”

If HP claimed DriveLock was unbreakable when the company knew it was not, that might be a kind of false advertising.

But while documents on HP’s web site do claim that without the correct passwords, a DriveLock’ed hard drive is “permanently unusable,” such warnings may not constitute actual legal guarantees.

According to Certilman and other computer security experts, hardware and software makers are careful not to make themselves liable for the performance of their products.

“I haven’t heard of manufacturers, at least for the consumer market, making a promise of computer security. Usually you buy naked hardware and you’re on your own,” Certilman said. In general, computer warrantees are “limited only to replacement and repair of the component, and not to incidental consequential damages such as the exposure of the underlying data to snooping third parties,” he said. “So I would be quite surprised if there were a gaping hole in their warranty that would allow that kind of claim.”

That point meets with agreement from the noted computer security skeptic Bruce Schneier, the chief technology officer at Counterpane Internet Security in Mountain View, Calif.

“I mean, the computer industry promises nothing,” he said last week. “Did you ever read a shrink-wrapped license agreement? You should read one. It basically says, if this product deliberately kills your children, and we knew it would, and we decided not to tell you because it might harm sales, we’re not liable. I mean, it says stuff like that. They’re absurd documents. You have no rights.”

My final quote in the article:

“Unfortunately, this probably isn’t a great case,” Schneier said. “Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”

Posted on May 3, 2006 at 9:26 AM

NSA Warrantless Wiretapping and Total Information Awareness

Technology Review has an interesting article discussing some of the technologies used by the NSA in its warrantless wiretapping program, some of them from the killed Total Information Awareness (TIA) program.

Washington’s lawmakers ostensibly killed the TIA project in Section 8131 of the Department of Defense Appropriations Act for fiscal 2004. But legislators wrote a classified annex to that document which preserved funding for TIA’s component technologies, if they were transferred to other government agencies, say sources who have seen the document, according to reports first published in The National Journal. Congress did stipulate that those technologies should only be used for military or foreign intelligence purposes against non-U.S. citizens. Still, while those component projects’ names were changed, their funding remained intact, sometimes under the same contracts.

Thus, two principal components of the overall TIA project have migrated to the Advanced Research and Development Activity (ARDA), which is housed somewhere among the 60-odd buildings of “Crypto City,” as NSA headquarters in Fort Meade, MD, is nicknamed. One of the TIA components that ARDA acquired, the Information Awareness Prototype System, was the core architecture that would have integrated all the information extraction, analysis, and dissemination tools developed under TIA. According to The National Journal, it was renamed “Basketball.” The other, Genoa II, used information technologies to help analysts and decision makers anticipate and pre-empt terrorist attacks. It was renamed “Topsail.”

Posted on April 28, 2006 at 8:01 AM

