Protecting Against Google Phishing in Chrome

Google has a new Chrome extension called "Password Alert":

To help keep your account safe, today we're launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you've installed it, Password Alert will show you a warning if you type your Google password into a site that isn't a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here's how it works for consumer accounts. Once you've installed and initialized Password Alert, Chrome will remember a "scrambled" version of your Google Account password. It only remembers this information for security purposes and doesn't share it with anyone. If you type your password into a site that isn't a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you're at risk of being phished so you can update your password and protect yourself.
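The announcement doesn't spell out the mechanism, but the basic idea can be sketched as a local salted-hash comparison. A minimal sketch, assuming SHA-256 and a per-install random salt — the function names and scheme below are illustrative assumptions, not Google's actual code:

```python
import hashlib
import os

# Hypothetical per-install random salt; Password Alert's real scheme
# is not described in the announcement, so treat all of this as a sketch.
SALT = os.urandom(16)

def scramble(password: str) -> bytes:
    """The locally stored 'scrambled' form of the password."""
    return hashlib.sha256(SALT + password.encode()).digest()

# Remembered once at setup time, only on the local machine.
stored = scramble("correct horse battery staple")

def typed_into_non_google_page(text: str) -> bool:
    """True means: warn the user -- they just typed their Google password."""
    return scramble(text) == stored
```

Because only the scrambled form is kept, a match can be detected locally without the plaintext password ever being stored or sent anywhere.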

It's a clever idea. Of course it's not perfect, and doesn't completely solve the problem. But it's an easy security improvement, and one that should be generalized to non-Google sites. (Although it's not uncommon for the security of many passwords to be tied to the security of the e-mail account.) It reminds me somewhat of cert pinning; in both cases, the browser uses independent information to verify what the network is telling it.

Slashdot thread.

EDITED TO ADD: It's not even a day old, and there's an attack.

Posted on April 30, 2015 at 9:11 AM • 27 Comments

Comments

Keith • April 30, 2015 9:17 AM

Is this similar to Rapport, which checks whether passwords being entered match those of 'monitored' pages, so you can catch phishing (etc.) on your banking pages rather than just Google?

readerrrr • April 30, 2015 9:50 AM

@Jim

By executing Chrome you implicitly trust Google with your passwords. This is not the problem. They would never collect your personal information in such an intrusive and illegal way. The problem is legal spying using third party scripts and information you enter into the search engine.

David Leppik • April 30, 2015 9:58 AM

@Jim: no, the plug-in runs locally and compares your password to a hash. It's just one more computation it does when you enter a password in Chrome.

If you don't trust Google to keep local storage separate from cloud storage, you shouldn't run Chrome or own an Android device. This doesn't change the equation. But if you do-- especially if (like me) you mainly use Chrome to access Google sites-- this seems like a reasonable security option.

Alfonso • April 30, 2015 10:11 AM

As an interesting side-effect, if you re-use your Google password on a non-Google account you will get harassed by this plug-in and will hopefully switch to using unique passwords, ultimately making Google less likely to have to deal with your account getting "hacked".

Steven Grimm • April 30, 2015 10:12 AM

But Jim's comment does demonstrate a broader (though not at all new) problem. If one of the readers of a specialized blog on security sees this news and thinks it means Google is collecting passwords, what hope do members of the public who *don't* follow security news have of making sense of any of this? I don't know if there's a good answer even in theory.

Ben Schenker • April 30, 2015 10:13 AM

Lastpass could probably make this feature work for all websites. Very cool!

ramriot • April 30, 2015 10:22 AM

To set the scene:-
I've seen a couple of versions of this, some saying 'fingerprint', some saying 'scrambled', as what the extension does to your password field entries. Let's assume for now they mean a hash function, and that those hashes are protected and never shared outside of the specific machine.

Here are a couple of problems / possible advantages:

1/ If you reuse a Google password for another site, the app will complain, thinking you are being phished. This should be reason enough not to reuse passwords.

2/ If you have a shared machine, then EVERY password another user types into ANY account will be compared to your Google password, i.e. it becomes a password self-brute-forcing machine.

3/ You cannot use it on a public machine, where the likelihood of a man-in-the-browser (MITB) attack is highest.

4/ Should there be a way of extracting the hash list from the extension (with perhaps salt values, if used), then this would allow an attacker to attack the hash directly, instead of attacking Google's (hopefully well protected) user database.

Finally, static proofs of identity like passwords, fingerprints etc. must die as a primary form of authentication. We really need to concentrate on a replacement primary identity-proof system that is:
commonly available,
dynamic,
zero-knowledge,
challenge-response.
For example, SQRL or some second generation of FIDO.

Philip Raymond • April 30, 2015 10:45 AM

What about the improper use of your data by Google itself? It's not that they cannot be trusted. The threat is that data can be diverted by court order, a national security letter, a rogue employee, or accident. So, the question is how can we voluntarily give personal data to Google for one purpose, but have the confidence that it will not be used for any purpose beyond that to which we have agreed?

The answer is Blind Signaling and Response. It has the potential to restrict the conditions under which data shared with a trusted party can be used...Not by policy, but by math! With BSR, user data is unintelligible (and unusable, even if decrypted) when piped into any process that is not authorized by the EULA, privacy statement and Terms & Conditions.

It sounds like magic, but it can work. I presented a logistical flowchart in Mountain View (2013), and a more polished presentation at the Montreal Privacy Conference last May. What is needed now are people of vision to turn the logic into math, a test platform, and eventually the hooks that will embed heightened privacy into every relationship.

Philip Raymond
CRYPSA Co-Chair
(Formerly Vanquish Labs CEO)

Nick P • April 30, 2015 11:07 AM

@ Philip Raymond

"What about the improper use of your data by Google itself? It's not that they cannot be trusted."

Oh no, we CAN'T trust Google. AT ALL. It's about incentives. They're a publicly-traded company expected to make more money for shareholders each year while doing that primarily via advertising (aka selling out customers' data). They used to limit their actions in accordance with a Don't Be Evil policy. They threw most of that out the window and are giving advertisers more data than ever.

Any for-profit or ad-driven organization should be untrusted by default.

winter • April 30, 2015 12:14 PM

According to the GitHub link above, Google only stores 37 bits of the password hash. So you just have to make your password 37 bits stronger if you think this would be a problem.
;-)

But if you do not trust Chrome, then this feature should be the least of your worries.
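Keeping only 37 bits is a deliberate trade-off: enough to recognize your own password with very few false alarms, but too little for an attacker to invert uniquely, since many passwords collide on the same value. A sketch of such truncation (the 37-bit figure is from the comment above; the choice of SHA-256 here is an assumption for illustration):

```python
import hashlib

BITS = 37  # the figure quoted from the GitHub source above

def partial_hash(password: str) -> int:
    """Keep only the top 37 bits of a SHA-256 digest (hash choice assumed)."""
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - BITS)
```

An unrelated password matches by accident with probability about 2^-37, while anyone who extracts the stored value still faces an enormous set of colliding candidates rather than one recoverable password.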

winter • April 30, 2015 12:27 PM

@Nick P
"Oh no, we CAN'T trust Google. AT ALL. It's about incentives. "

Replace "Google" by "nobody" (double negation as stress).

What you write says we cannot trust other people as there are always incentives that make breaking your trust worthwhile.

But we do need to trust some people and institutions to survive.

Btw, I do as David Leppik and use Chrome mainly for accessing Google services. And that is not for hidden nefariousness, but because they do not let me circumvent tracking by ad agencies (if this can be done I would welcome suggestions).

Jayson • April 30, 2015 12:35 PM

@Nick P

Any for-profit or ad-driven organization should be untrusted by default.

Because non-profits or organizations that don't use ads are much more worthy of trust.

Nick P • April 30, 2015 1:44 PM

@ Winter

Good substitution. We certainly have to draw the line somewhere. The question is: "Is it in Google's interest to sell my data? And is there an alternative product/service that has different incentives?" For email, I still have Yahoo and Gmail accounts for stuff that isn't sensitive or that can be found easily, but where *availability* is the most important concern (eg receipts). Confidential stuff goes to a paid email provider whose policy and TOS are not to sell me out, and who resides in a country that will enforce it. I eliminate the web issues by accessing it with an open source email client. Very confidential information is encrypted with GPG. That's where I draw the line with email. See how much better that setup is than every message in Gmail being thoroughly scrutinized, profiled, sent to NSA/FBI, and maybe sold to advertisers?

The good news is that, post-Snowden, we're seeing a lot of startups focused on offering private alternatives to existing services. It's one of his greatest accomplishments. Hopefully, I'll be able to ditch more scheming companies' products for affordable, private ones in the near future.

@ Jayson

Ok, let me be more clear. The shareholder-value focus of a U.S. publicly traded company makes it much more likely to screw over its customers if that makes more profit. A new Google CEO sacrificing a few hundred million in profit to enhance user privacy for ethical reasons will be unemployed in no time. A private company can act however its owner and management want. There are a number of private companies differentiating themselves with increased quality, privacy, service, and so on. Nonprofits are designed to accomplish something rather than maximize profit. They might still act corrupt. Yet, their incentives are aligned with that cause, and they have the benefit that they can't sell out to a company that will cancel their product. My desktop is supported by one, and it has *far* fewer privacy/legal gripes than the commercial competition. Finally, each can use warranties, TOS's, EULA's, TLA's, policies, etc. to enforce baselines of trustworthy behavior. Rarely the case, but the exceptions to the rule show its potential.

Let's test my theory. A while back, I had two options for a mainstream browsing experience. Option A is made by a publicly-traded company whose profit primarily comes from pervasively spying on users and selling that to advertisers. Option B is made by a non-profit whose income comes primarily from ad revenue on the default search engine (easily changed) and donations. Your comment suggests they're equivalent in trust: you flipped a coin and maybe use A. Thinking otherwise, I went with Option B plus NoScript. Over time, I collected a lot more disturbing news articles on one than the other. You might be surprised what motivated each of those actions: making more money for shareholders on ads.

Incentives, organizational structure, and legal agreements matter. The more they're aligned with users' needs, the more likely the organization will do what's in the interest of users. Sounds like common sense to me, so I didn't take time to explain it in detail in the original post. There it is, though, for anyone who found my lack of trust in Wall St.-directed firms confusing.

Jayson • April 30, 2015 3:18 PM

@Nick P
Apologies for my previous snark, I see you have a solid rationale and meant more of a probability view. Thanks for taking the time to clarify, you've forced me to rethink my position although I'd still go more with a statement like "trust in an organization is a case-by-case basis". Heuristics are hard.

Public companies may be more susceptible to embracing the dark side. Going public is an exit strategy; frequently the visionary founders leave after cashing out, and quality inevitably declines (RIP: Shake Shack) as a more soulless, formulaic approach (ad revenue!) takes root. Indeed, you are also correct that shareholder value drives the company, and the biggest shareholders are the terrified, nail-biting executives whose entire net worths are tied to market ticks and who thereby favor myopic strategies.

Nick P • April 30, 2015 6:07 PM

@ Jayson

"Apologies for my previous snark"

I was more of an ass in the reply. I take the apology and extend my own. All good to me. :)

"I see you have a solid rationale and meant more of a probability view. Thanks for taking the time to clarify, you've forced me to rethink my position although I'd still go more with a statement like "trust in an organization is a case-by-case basis". Heuristics are hard."

Appreciate it, and always glad to demonstrate such issues. Your characterization of it being a case-by-case basis with difficult heuristics is spot on. That makes me wonder if someone has done research that cataloged many heuristics about organizations' trustworthiness, assessed their effectiveness, and published them all in one place. Might be something worth doing for discussions like these.

" Indeed, you are also correct that the shareholder value drive the company and the biggest shareholders are the terrified, nail-biting executives whose entire net worths are tied to market ticks and are thereby favor myopic strategies."

Indeed. Directors try to ensure management will do what's good for the company by connecting management's compensation to the company's earnings. Yet, that same incentive can backfire on the company's earnings if the best long-term decision requires sacrifices in the short term. This is doubly true for anything related to safety, security, or ethics. It's why I'd prefer to partner with an organization whose owners and management bought into the importance of such things now and hopefully far into the future. That's typically the private companies or nonprofits in practice.

@ AndrewJ

Within 24 hours, too. That's hilarious.

Jim • May 1, 2015 7:57 AM

We are led to trust the plug-in because "it runs locally" using a "hash of the password". As security professionals, we understand how the implementation of a security feature means more than the intention. "Is the hash salted?", "Is the hash harvested?" etc.

My intention in including the link in my original post was to point out that every Google security misdeed was introduced as a harmless feature. "We simply want to drive down every street to give you a view of your street with your map searches," led to capturing not only every WiFi MAC ID (with the dubious yet "harmless" feature of improved location resolution), but also capturing the unencrypted network data to read usernames/passwords (why would anyone think that's justified?). With Google, it's not what they say they are doing, it's what they think they can do beyond what they say.

Philip Raymond • May 1, 2015 9:55 AM

I want to clarify a statement that triggered debate after it was posted. I said...

> What about the improper use of your data by Google itself? It's not that
> they cannot be trusted. The threat is that data can be diverted by court order,
> national security letter, a rogue employee, or accident. So, the question
> is how can we voluntarily give personal data to Google for one purpose,
> but have the confidence that it will not be used for any purpose beyond
> that to which we have agreed?

1. Much feedback took issue with sentence #2. Readers claimed: "Oh no... they cannot be trusted" (i.e. because they are driven by ads and have an incentive to sell out user data).

I hate spam and unsolicited messaging that is not specifically targeted and tailored to my personal desires. But I do not think that all advertising is evil, not even all unsolicited advertising. Nor, do I feel that commercial messaging or capitalist incentives are at odds with serving my needs.

2. In the last sentence, when I suggest that there exists an encryption technology (BSR) that can restrict Google's use of personal data (search, navigation, Gmail content, docs, photos, etc.) to just the explicitly agreed-upon uses, I was not referring to just the direct user benefit of the search, email or nav-destination. If that were the only thing that Google could do with your data, it would be a pretty one-sided relationship, and Google would have no reason to build all of that value!

Rather, I meant that with Blind Signaling and Response, it is possible to render your data functional for ONLY the obvious user benefit and for matching up the most relevant ads when visiting sites that allow AdWords to control some of the screen real estate. (They might as well be personally relevant, right?!) The beauty of BSR is that even though future ads are more personally relevant, no one -- not even Google -- can audit and determine who saw which ads (magic, eh?!). And the user data is useless for any other purpose (both to Google and to any interloper), because it cannot be associated with any user account or with any past activity. Those bindings and relationships can only be discerned when the data is being accessed by specific user-approved processes.

I described the process at a privacy forum at University of Montreal in May 2014. I also described it to Google privacy and privacy infrastructure officers in Mountain View in 2013. I got the impression that 4 of 5 managers and officers were keen on the idea, but that they would not consider shifting resources or investment until the public demanded a wholesale improvement to privacy.

Philip Raymond
Blind Signaling and Response
cypsa.org, vanquish.com

David Leppik • May 1, 2015 10:04 AM

@winter: The EFF has the Privacy Badger plug-in for Chrome and Firefox. It's not perfect, but it does a reasonable first approximation.

Beyond that, the oldest trick in the book still works. On Mac/Linux, add these lines to the file /etc/hosts (or create that file with these lines):

127.0.0.1       googleads.g.doubleclick.net
127.0.0.1       ads.doubleclick.net
127.0.0.1       ad.doubleclick.net
127.0.0.1       doubleclick.net
127.0.0.1       ads.yahoo.com

Works amazingly well. Windows also has a hosts file (at C:\Windows\System32\drivers\etc\hosts) that works identically.

Jim • May 1, 2015 10:25 AM

This is why I like things like Lastpass. If I end up on a phishing site, I'm tipped off because my password isn't filled in.

Mr. Happy • May 1, 2015 10:41 AM

A Linux hosts file that can handle wildcards. That's all I'm asking for.

(Well, that and maybe a big red button that causes every "marketer" in the world to die of eyeball and/or testicle cancer.)

65535 • May 1, 2015 12:58 PM

@ Winter

I have to agree with Nick P [April 30, ’15 1:44PM]. It's not only G@@gle's track record but the fact that it does have a relationship with the NSA. Its top officers have security clearances and apparently are fairly “tight” with the NSA.

‘Speaking in a private session at the Guardian, Schmidt, 58, said: "I have the necessary clearances to have been told, as do other executives in the company, but none of us were briefed. Had we been briefed, we probably couldn't have acted on it, because we'd have known about it. I've declined briefings [from the US government] about this because I don't want to be constrained."’ – Guardian

http://www.theguardian.com/technology/2014/jan/21/google-eric-schmidt-nsa-tapping-knowledge

I don't know if turning a blind eye to security briefings yet holding a high security clearance makes much sense for Mr Schmidt. The above statement is somewhat contradictory.

Because of this security clearance issue with the top executives I don’t use Chrome and I rarely use G@@gle search. There are plenty of other options [Duckduckgo and Ixquick https are some]. In short, I have to concur with Nick P’s overall statement.

Otto • May 2, 2015 9:07 PM

Chrome extensions are literally just JavaScript injected into pages. Other JavaScript can disable or bypass it.

Until Chrome has a better way of modifying browser behavior, this sort of thing will always be bypassable.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.