Entries Tagged "children"


UK Government to Launch PR Campaign Undermining End-to-End Encryption

Rolling Stone is reporting that the UK government has hired the M&C Saatchi advertising agency to launch an anti-encryption advertising campaign. Presumably they’ll lean heavily on the “think of the children!” rhetoric we’re seeing in this current wave of the crypto wars. The technical eavesdropping mechanisms have shifted to client-side scanning, which won’t actually help—but since that’s not really the point, it’s not argued on its merits.
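
For those who haven’t followed the technical side of the debate: client-side scanning moves the inspection onto the user’s device, before any encryption happens. Here’s a minimal sketch of the idea (the blocklist is hypothetical, and an exact hash stands in for the perceptual hashing and server-assisted matching that real proposals use):

```python
# Minimal sketch of client-side scanning, for illustration only. The scan
# runs on the device *before* encryption, so the conversation can still be
# end-to-end encrypted while flagged content gets reported anyway.
import hashlib

# Hypothetical blocklist of known-bad content hashes.
BLOCKLIST = {hashlib.sha256(b"known-bad content").hexdigest()}

def send_message(plaintext: bytes) -> None:
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        print("flagged: reported to provider before encryption")  # stand-in hook
    # ...only now would the message be encrypted end-to-end and transmitted...

send_message(b"hello")              # passes silently
send_message(b"known-bad content")  # flagged, despite the E2E encryption
```

The point is architectural: the encryption stays mathematically intact, but the scanner sits on the plaintext side of it.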

Posted on January 18, 2022 at 6:05 AM

Teaching Cybersecurity to Children

A new draft of an Australian educational curriculum proposes teaching cybersecurity to children as young as five:

The proposed curriculum aims to teach five-year-old children—an age at which Australian kids first attend school—not to share information such as date of birth or full names with strangers, and that they should consult parents or guardians before entering personal information online.

Six- and seven-year-olds will be taught how to use usernames and passwords, and the pitfalls of clicking on pop-up links to competitions.

By the time kids are in third and fourth grade, they’ll be taught how to identify the personal data that may be stored by online services, and how that can reveal their location or identity. Teachers will also discuss “the use of nicknames and why these are important when playing online games.”

By late primary school, kids will be taught to be respectful online, including “responding respectfully to other people’s opinions even if they are different from personal opinions.”

I have mixed feelings about this. Norms around these things are changing so fast that it’s unlikely we in the older generation will get to dictate what the younger generation does. But these sorts of online privacy conversations are worth having around the same time children learn about privacy in other contexts.

Posted on May 7, 2021 at 8:36 AM

Facebook Helped Develop a Tails Exploit

This is a weird story:

Hernandez was able to evade capture for so long because he used Tails, a version of Linux designed for users at high risk of surveillance and which routes all inbound and outbound connections through the open-source Tor network to anonymize it. According to Vice, the FBI had tried to hack into Hernandez’s computer but failed, as the approach they used “was not tailored for Tails.” Hernandez then proceeded to mock the FBI in subsequent messages, two Facebook employees told Vice.

Facebook had tasked a dedicated employee to unmasking Hernandez, developed an automated system to flag recently created accounts that messaged minors, and made catching Hernandez a priority for its security teams, according to Vice. They also paid a third party contractor “six figures” to help develop a zero-day exploit in Tails: a bug in its video player that enabled them to retrieve the real I.P. address of a person viewing a clip. Three sources told Vice that an intermediary passed the tool onto the FBI, who then obtained a search warrant to have one of the victims send a modified video file to Hernandez (a tactic the agency has used before).

[…]

Facebook also never notified the Tails team of the flaw—breaking with a long industry tradition of disclosure in which the relevant developers are notified of vulnerabilities in advance of them becoming public so they have a chance at implementing a fix. Sources told Vice that since an upcoming Tails update was slated to strip the vulnerable code, Facebook didn’t bother to do so, though the social media company had no reason to believe Tails developers had ever discovered the bug.

[…]

“The only acceptable outcome to us was Buster Hernandez facing accountability for his abuse of young girls,” a Facebook spokesperson told Vice. “This was a unique case, because he was using such sophisticated methods to hide his identity, that we took the extraordinary steps of working with security experts to help the FBI bring him to justice.”

I agree with that last paragraph. I’m fine with the FBI using vulnerabilities: lawful hacking, it’s called. I’m less okay with Facebook paying for a Tails exploit, giving it to the FBI, and then keeping its existence secret.
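
Vice’s reporting doesn’t describe the exploit’s internals, but the general class of flaw is easy to sketch: on a Tor-based system, any component that makes even one network request outside the Tor proxy hands the user’s real IP address to whoever runs the server. A rough Python illustration (the hostname is made up, the sketch ignores Tails’s firewall-level protections, and it requires `pip install requests[socks]`):

```python
# Illustrative only: this shows the *class* of leak, not the actual exploit.
import requests

# Tor's SOCKS proxy, on its default local port.
TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
             "https": "socks5h://127.0.0.1:9050"}

URL = "https://video-host.example/clip.mp4"  # hypothetical booby-trapped file

# A well-behaved component: the server only ever sees a Tor exit node's IP.
requests.get(URL, proxies=TOR_PROXY)

# A buggy component that ignores the proxy settings: the same server now
# logs the user's real IP address.
requests.get(URL)
```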

Another article.

EDITED TO ADD: This post has been translated into Portuguese.

Posted on June 12, 2020 at 6:23 AM

The EARN-IT Act

Prepare for another attack on encryption in the U.S. The EARN-IT Act purports to be about protecting children from predation, but it’s really about forcing the tech companies to break their encryption schemes:

The EARN IT Act would create a “National Commission on Online Child Sexual Exploitation Prevention” tasked with developing “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements: if a platform failed to adhere to them, it would lose essential legal protections for free speech.

It’s easy to predict how Attorney General William Barr would use that power: to break encryption. He’s said over and over that he thinks the “best practice” is to force encrypted messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face serious repercussions, including both civil and criminal liability. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their users’ security, making all of us more vulnerable to online criminals.

Matthew Green has a long explanation of the bill and its effects:

The new bill, out of Lindsey Graham’s Judiciary committee, is designed to force providers to either solve the encryption-while-scanning problem, or stop using encryption entirely. And given that we don’t yet know how to solve the problem—and the techniques to do it are basically at the research stage of R&D—it’s likely that “stop using encryption” is really the preferred goal.

EARN IT works by revoking a type of liability called Section 230 that makes it possible for providers to operate on the Internet, by preventing the provider from being held responsible for what their customers do on a platform like Facebook. The new bill would make it financially impossible for providers like WhatsApp and Apple to operate services unless they conduct “best practices” for scanning their systems for CSAM.

Since there are no “best practices” in existence, and the techniques for doing this while preserving privacy are completely unknown, the bill creates a government-appointed committee that will tell technology providers what technology they have to use. The specific nature of the committee is byzantine and described within the bill itself. Needless to say, the makeup of the committee, which can include as few as zero data security experts, ensures that end-to-end encryption will almost certainly not be considered a best practice.

So in short: this bill is a backdoor way to allow the government to ban encryption on commercial services. And even more beautifully: it doesn’t come out and actually ban the use of encryption, it just makes encryption commercially infeasible for major providers to deploy, ensuring that they’ll go bankrupt if they try to disobey this committee’s recommendations.

It’s the kind of bill you’d come up with if you knew the thing you wanted to do was unconstitutional and highly unpopular, and you basically didn’t care.

Another criticism of the bill. Commentary by EPIC. Kinder analysis.

Sign a petition against this act.

Posted on March 13, 2020 at 6:20 AM

The Whisper Secret-Sharing App Exposed Locations

This is a big deal:

Whisper, the secret-sharing app that called itself the “safest place on the Internet,” left years of users’ most intimate confessions exposed on the Web tied to their age, location and other details, raising alarm among cybersecurity researchers that users could have been unmasked or blackmailed.

[…]

The records were viewable on a non-password-protected database open to the public Web. A Post reporter was able to freely browse and search through the records, many of which involved children: A search of users who had listed their age as 15 returned 1.3 million results.

[…]

The exposed records did not include real names but did include a user’s stated age, ethnicity, gender, hometown, nickname and any membership in groups, many of which are devoted to sexual confessions and discussion of sexual orientation and desires.

The data also included the location coordinates of the users’ last submitted post, many of which pointed back to specific schools, workplaces and residential neighborhoods.

Or homes. I hope people didn’t confess things from their bedrooms.
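
The Post doesn’t say which database technology Whisper used, but “non-password-protected” exposures like this are typically search services that answer unauthenticated HTTP queries. A hypothetical sketch of how little “freely browse and search” can require (the host, port, and index name are stand-ins):

```python
# Hypothetical sketch of querying an exposed, unauthenticated search index.
import requests

resp = requests.get(
    "http://exposed-db.example:9200/whispers/_search",  # no credentials at all
    params={"q": "age:15", "size": 10},
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])  # stated age, nickname, groups, last-post coordinates...
```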

Posted on March 12, 2020 at 6:30 AM

Worst-Case Thinking Breeds Fear and Irrationality

Here’s a crazy story from the UK. Basically, someone sees a man and a little girl leaving a shopping center. Instead of thinking “it must be a father and daughter, which happens millions of times a day and is perfectly normal,” he thinks “this is obviously a case of child abduction and I must alert the authorities immediately.” And the police, instead of thinking “why in the world would this be a kidnapping and not a normal parental activity,” think “oh my god, we must all panic immediately.” And they do: scrambling helicopters, searching cars leaving the shopping center, and going door-to-door looking for clues. Seven hours later, the police finally realized that the girl was safe asleep in bed.

Lenore Skenazy writes further:

Can we agree that something is wrong when we leap to the worst possible conclusion upon seeing something that is actually nice? In an email Furedi added that now, “Some fathers told me that they think and look around before they kiss their kids in public. Society is all too ready to interpret the most innocent of gestures as a prelude to abusing a child.”

So our job is to try to push the re-set button.

If you see an adult with a child in plain daylight, it is not irresponsible to assume they are caregiver and child. Remember the stat from David Finkelhor, head of the Crimes Against Children Research Center at the University of New Hampshire. He has heard of NO CASE of a child kidnapped from its parents in public and sold into sex trafficking.

We are wired to see “Taken” when we’re actually witnessing something far less exciting called Everyday Life. Let’s tune in to reality.

This is the problem with the “see something, say something” mentality. As I wrote back in 2007:

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

And the police need to understand the base-rate fallacy better.
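
To make that concrete, here’s a back-of-the-envelope Bayes calculation; all of the numbers are hypothetical, but the shape of the result isn’t:

```python
# Base-rate fallacy, worked through: when the event is vanishingly rare,
# even a fairly reliable observer is almost always wrong.
base_rate = 1e-7     # P(abduction) for a random adult-child pair (hypothetical)
sensitivity = 0.99   # P(bystander reports | actual abduction) (hypothetical)
false_alarm = 0.01   # P(bystander reports | ordinary parent and child) (hypothetical)

p_report = sensitivity * base_rate + false_alarm * (1 - base_rate)
p_abduction_given_report = sensitivity * base_rate / p_report
print(f"P(abduction | report) = {p_abduction_given_report:.5%}")
# ~0.00099%: on these numbers, roughly one report in 100,000 is a real abduction.
```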

Posted on November 18, 2018 at 1:12 PM

COPPA Compliance

Interesting research: “‘Won’t Somebody Think of the Children?’ Examining COPPA Compliance at Scale”:

Abstract: We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps’ compliance with the Children’s Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children’s apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children’s apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.
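
That last finding is worth unpacking. A resettable advertising ID only protects privacy if it’s the only identifier a tracker ever sees. Here’s an illustrative sketch (all identifiers hypothetical) of why sending a persistent identifier alongside it negates the reset:

```python
# What a tracking SDK's backend can do when apps send a persistent hardware
# identifier next to the "resettable" advertising ID.
from collections import defaultdict

tracker_db: dict[str, list[str]] = defaultdict(list)

def ingest(persistent_id: str, ad_id: str) -> None:
    tracker_db[persistent_id].append(ad_id)

ingest("IMEI-356938035643809", "ad-id-AAAA")  # before the user resets the ad ID
ingest("IMEI-356938035643809", "ad-id-BBBB")  # after the reset
print(tracker_db["IMEI-356938035643809"])     # ['ad-id-AAAA', 'ad-id-BBBB']
# Both ad IDs resolve to the same device, so the reset accomplished nothing.
```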

Posted on April 13, 2018 at 6:43 AM

