Entries Tagged "Google"


Google Releases Crypto Test Suite

Google has released Project Wycheproof, a test suite that checks cryptographic libraries against a series of known attacks. From a blog post:

In cryptography, subtle mistakes can have catastrophic consequences, and mistakes in open source cryptographic software libraries repeat too often and remain undiscovered for too long. Good implementation guidelines, however, are hard to come by: understanding how to implement cryptography securely requires digesting decades’ worth of academic literature. We recognize that software engineers fix and prevent bugs with unit testing, and we found that many cryptographic issues can be resolved by the same means.

The tool has already found over 40 security bugs in cryptographic libraries, which are (all? mostly?) currently being fixed.
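The approach is easy to illustrate: each Wycheproof test is essentially a table of inputs with known expected verdicts, run against a library to see whether it agrees. Here is a minimal sketch in that spirit, not Wycheproof's actual harness; it uses the pyca/cryptography package, and the handful of edge cases are my own stand-ins for the kinds of malformed-signature vectors the suite ships.

```python
# A toy Wycheproof-style check: run a primitive against a table of inputs
# with known expected verdicts and flag any disagreement.
# Requires the pyca/cryptography package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verifies(public_key, message, signature):
    """Return True if the library accepts the signature, False otherwise."""
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception:
        # For this sketch, any verification error (bad signature or
        # unparsable encoding) counts as a rejection.
        return False

key = ec.generate_private_key(ec.SECP256R1())
pub = key.public_key()
msg = b"attack at dawn"
sig = key.sign(msg, ec.ECDSA(hashes.SHA256()))

# (description, message, signature, expected verdict) -- illustrative
# encoding and malleability edge cases, not Wycheproof's real vectors.
cases = [
    ("valid signature",                    msg,               sig,           True),
    ("signature over a different message", b"attack at noon", sig,           False),
    ("empty signature",                    msg,               b"",           False),
    ("truncated signature",                msg,               sig[:-1],      False),
    ("signature with trailing garbage",    msg,               sig + b"\x00", False),
]

for name, m, s, expected in cases:
    got = verifies(pub, m, s)
    print(f"{'ok ' if got == expected else 'BUG'}: {name}")
```

A library that accepts the trailing-garbage or truncated signatures would fail the last two cases; that kind of encoding laxness is exactly the class of bug such vectors are meant to surface.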

News article. Slashdot thread.

Posted on December 20, 2016 at 6:12 AM

Google's Post-Quantum Cryptography

News has been bubbling about an announcement by Google that it’s starting to experiment with public-key cryptography that’s resistant to cryptanalysis by a quantum computer. Specifically, it’s experimenting with the New Hope algorithm.

It’s certainly interesting that Google is thinking about this, and probably okay that it’s available in the Canary version of Chrome, but this algorithm is by no means ready for operational use. Secure public-key algorithms are very hard to create, and this one has not had nearly enough analysis to be trusted. Lattice-based public-key cryptosystems such as New Hope are particularly subtle—and we cryptographers are still learning a lot about how they can be broken.
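For a sense of what these schemes are doing under the hood: in a Ring-LWE key exchange like New Hope, the two parties end up with values that are only approximately equal, and a reconciliation step turns that near-agreement into an identical shared key. The sketch below is a toy with tiny, insecure parameters chosen purely for illustration; it is not New Hope, but it shows the approximate agreement and the small error gap that reconciliation has to absorb.

```python
# Toy Ring-LWE key agreement in Z_q[x]/(x^n + 1) with tiny, insecure parameters.
# It illustrates the structure schemes like New Hope build on; it is NOT New Hope.
import random

n, q = 8, 12289   # toy ring degree; 12289 happens to be New Hope's modulus

def polymul(f, g):
    """Multiply two polynomials modulo (x^n + 1) and q."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                res[k] = (res[k] + fi * gj) % q
            else:                  # x^n = -1, so wrapped-around terms flip sign
                res[k - n] = (res[k - n] - fi * gj) % q
    return res

def polyadd(f, g):
    return [(x + y) % q for x, y in zip(f, g)]

def small_poly():
    """A 'small' polynomial; secrets and errors must have small coefficients."""
    return [random.randint(-4, 4) for _ in range(n)]

a = [random.randrange(q) for _ in range(n)]        # public uniform polynomial

s_alice, e_alice = small_poly(), small_poly()
s_bob, e_bob = small_poly(), small_poly()

b_alice = polyadd(polymul(a, s_alice), e_alice)    # Alice sends this to Bob
b_bob = polyadd(polymul(a, s_bob), e_bob)          # Bob sends this to Alice

v_alice = polymul(b_bob, s_alice)    # = a*s_alice*s_bob + e_bob*s_alice
v_bob = polymul(b_alice, s_bob)      # = a*s_alice*s_bob + e_alice*s_bob

# The two values differ only by e_bob*s_alice - e_alice*s_bob, which is tiny
# compared to q; a real scheme's reconciliation step absorbs this gap so both
# sides extract the same shared key.
def centered(x):
    return x - q if x > q // 2 else x

gap = max(abs(centered((x - y) % q)) for x, y in zip(v_alice, v_bob))
print(f"max coefficient gap: {gap} (q = {q})")
```

The subtlety mentioned above lives in the details this toy ignores: how the errors are sampled, how reconciliation is done, and how the public polynomial is generated all affect security.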

Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.

Posted on July 12, 2016 at 12:53 PM

Comparing Messaging Apps

Micah Lee has a nice comparison among Signal, WhatsApp, and Allo.

In this article, I’m going to compare WhatsApp, Signal, and Allo from a privacy perspective.

While all three apps use the same secure-messaging protocol, they differ on exactly what information is encrypted, what metadata is collected, and what, precisely, is stored in the cloud, and therefore available, in theory at least, to government snoops and wily hackers.

In the end, I’m going to advocate you use Signal whenever you can, which actually may not end up being as often as you would like.

EDITED TO ADD (6/25): Don’t use Telegram.

Posted on June 23, 2016 at 6:54 AM

Google Moving Forward on Automatic Logins

Google is trying to bring Project Abacus, its trust-score-based replacement for passwords, to Android developers by the end of the year:

Today, secure logins—like those used by banks or in the enterprise environment—often require more than just a username and password. They tend to also require the entry of a unique PIN, which is generally sent to your phone via SMS or emailed. This is commonly referred to as two-factor authentication, as it combines something you know (your password) with something you have in your possession, like your phone.

With Project Abacus, users would instead unlock devices or sign into applications based on a cumulative “Trust Score.” This score would be calculated using a variety of factors, including your typing patterns, current location, speed and voice patterns, facial recognition, and other things.

Basically, the system replaces traditional authentication—something you know, have, or are—with surveillance. So maybe this is a good idea, and maybe it isn’t. The devil is in the details.
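For concreteness, a cumulative score of this kind might look something like the sketch below: several continuously collected signals are normalized, weighted, and summed, and different apps demand different thresholds. The signal names, weights, and thresholds are entirely invented; none of this comes from Google.

```python
# Hypothetical "trust score" sketch: combine several continuously collected
# signals into one number and gate access on a per-app threshold.
# Signal names, weights, and thresholds are invented for illustration only.

# Each signal is a similarity to the enrolled user, normalized to [0.0, 1.0].
signals = {
    "typing_pattern": 0.92,
    "location_familiarity": 0.80,
    "voice_match": 0.75,
    "face_match": 0.60,
}

# Weights sum to 1.0; higher-assurance signals count for more (made-up numbers).
weights = {
    "typing_pattern": 0.2,
    "location_familiarity": 0.1,
    "voice_match": 0.3,
    "face_match": 0.4,
}

trust_score = sum(signals[name] * weights[name] for name in signals)

# A banking app could demand a higher score than, say, a mail client.
thresholds = {"banking_app": 0.85, "mail_app": 0.60}

print(f"trust score: {trust_score:.2f}")
for app, needed in thresholds.items():
    print(f"{app}: {'unlocked' if trust_score >= needed else 'ask for a password'}")
```

Note that every input to that sum is a continuous measurement of the user's behavior and body, which is the surveillance trade-off described above.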

EDITED TO ADD: It’s being called creepy. But, as we’ve repeatedly learned, creepy is subjective. What’s creepy now is perfectly normal two years later.

Posted on May 24, 2016 at 8:35 AM

FTC Investigating Android Patching Practices

It’s a known truth that most Android vulnerabilities don’t get patched. It’s not Google’s fault. It releases the patches, but the phone carriers don’t push them down to their smartphone users.

Now the Federal Communications Commission and the Federal Trade Commission are investigating, sending letters to major carriers and device makers.

I think this is a good thing. This is a long-existing market failure, and a place where we need government regulation to make us all more secure.

Posted on May 11, 2016 at 2:37 PM

Exploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on its maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What’s interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.

And that’s Google’s best option, really. It’s not the one losing money from these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Posted on February 8, 2016 at 6:52 AM

Should We Allow Bulk Searching of Cloud Archives?

Jonathan Zittrain proposes a very interesting hypothetical:

Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.

The discovery would surely help in the prosecution of the laptop’s owner, tying him to the crime. But a junior prosecutor has a further idea. The private document was likely shared among other conspirators, some of whom are still on the run or unknown entirely. Surely Google has the ability to run a search of all Gmail inboxes, outboxes, and message drafts folders, plus Google Drive cloud storage, to see if any of its 900 million users are currently in possession of that exact document. If Google could be persuaded or ordered to run the search, it could generate a list of only those Google accounts possessing the precise file, and all other Google users would remain undisturbed, except for the briefest of computerized “touches” on their accounts to see if the file reposed there.
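Mechanically, that brief computerized "touch" would most plausibly be a hash comparison: fingerprint the seized document, then check whether any stored file has the same fingerprint, so that only exact copies match and nothing else is examined. A minimal sketch of that idea, which is purely illustrative and says nothing about how Google's storage actually works:

```python
# Exact-match scanning via cryptographic hashes: only byte-for-byte copies of
# the target document match, and nothing about non-matching files is revealed
# beyond the fact that they didn't match. Purely illustrative.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Fingerprint of the document recovered from the seized laptop (stand-in bytes).
target = fingerprint(b"...the planning document found on the laptop...")

# Stand-in for files stored across user accounts: account id -> file contents.
stored_files = {
    "user-001": b"grocery list: kale, coffee, bread",
    "user-002": b"...the planning document found on the laptop...",
    "user-003": b"draft email to mom",
}

matches = [account for account, blob in stored_files.items()
           if fingerprint(blob) == target]
print("accounts holding an exact copy:", matches)
```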

He then goes through the reasons why Google should run the search and the reasons why it shouldn’t, and finally says what he would do.

I think it’s important to think through hypotheticals like this before they happen. We’re better able to reason about them now, when they are just hypothetical.

Posted on January 16, 2016 at 5:26 AM

A History of Privacy

This New Yorker article traces the history of privacy from the mid-1800s to today:

As a matter of historical analysis, the relationship between secrecy and privacy can be stated in an axiom: the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late. The horse is out of the barn. The post office has opened your mail. Your photograph is on Facebook. Google already knows that, notwithstanding your demographic, you hate kale.

Posted on November 30, 2015 at 12:47 PM

How GCHQ Tracks Internet Users

The Intercept has a new story from the Snowden documents about surveillance of the Internet by the UK’s GCHQ:

The mass surveillance operation, code-named KARMA POLICE, was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.

[…]

One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps.

[…]

As of March 2009, the largest slice of data Black Hole held—41 percent—was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.

Lots more in the article. The Intercept also published 28 new top secret NSA and GCHQ documents.

Posted on September 29, 2015 at 6:16 AM

