Risk-Based Authentication

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user’s typical log-in and usage profile, taking into account their device and geographic location, the system they’re trying to access, the time of day they typically log in, their device’s IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user’s access attempt exceeds the system’s risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, the attempt may be rejected outright.
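
To make the mechanics concrete, here is a minimal sketch of how such a score-and-threshold gate might be wired together. The features, weights, and threshold values are illustrative assumptions of mine, not anything specified in the article:

```python
# Hypothetical sketch of risk-based authentication gating. All features,
# weights, and thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool    # device seen on previous successful logins
    usual_location: bool  # geolocation matches the user's profile
    usual_hours: bool     # time of day matches typical usage
    known_ip: bool        # IP address seen before

def risk_score(attempt: LoginAttempt) -> float:
    """Sum a penalty weight for every deviation from the user's profile."""
    score = 0.0
    score += 0.0 if attempt.known_device else 0.3
    score += 0.0 if attempt.usual_location else 0.3
    score += 0.0 if attempt.usual_hours else 0.1
    score += 0.0 if attempt.known_ip else 0.3
    return score

def required_auth(attempt: LoginAttempt, threshold: float) -> str:
    """Escalate authentication as the score climbs past the system's threshold."""
    score = risk_score(attempt)
    if score > threshold + 0.4:
        return "reject"        # far too risky: refuse outright
    if score > threshold:
        return "password+otp"  # elevated: demand a second factor
    return "password"          # routine: normal credentials suffice

# A finance system stores confidential data, so it gets a low threshold.
tablet_in_bali = LoginAttempt(known_device=False, usual_location=False,
                              usual_hours=False, known_ip=False)
print(required_auth(tablet_in_bali, threshold=0.2))  # -> "reject"
```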

Posted on November 7, 2013 at 7:06 AM • 46 Comments

Comments

Leigh November 7, 2013 7:43 AM

For there to be some real-world benefit to this, instances where the risk profile is in line with expected parameters should confer a reduced authentication burden on the user, just as unexpected risk profiles confer an increased one.

Tom November 7, 2013 7:47 AM

Three points here:

  1. They count the type of access requested twice when assessing whether access should be granted. Trying to access a financial system both raises the attempt’s risk score and lowers the system’s risk threshold – this is just shoddy methodology. The factor should be accounted for once.
  2. I don’t see the point. This does not increase the security of any system; it will only reduce security on low-risk, low-value access requests. Today organisations enable every security option available. Under this scheme, the more rigorous of these options would only be applied to high-risk, high-value access. So what have you gained? Why not just enable all options for all accesses?
  3. The examples cited show the problems. When a financial director is on holiday in Bali is exactly when he is least likely to be able to satisfy any more rigorous checks, because he left his fob at home. To give another plausible situation, someone trying to access their internet banking from rural China because they’re on holiday and have just had all their luggage stolen is both at their most desperate to gain access and least likely to be able to complete whatever extra authentication steps are required.

Google already offers this type of scheme for Google Accounts. When you enable two-factor authentication, you need to have your phone etc with you to generate a one-time key to access your account. But you can disable this extra check on specific computers. I have to say it worries me a bit.

Stephane November 7, 2013 7:56 AM

This is basically what Blizzard does when you authenticate in one of their games or their battle.net web site.

If you have one of their OTP tokens linked to your account, it will prompt you for a PIN the first time you authenticate from a specific IP address to a specific service, but then it will only ask you again if you use a different IP or after a period of time (not sure if it’s random or fixed).

This is a good balance between security and usability for gamers, who don’t have to fish their token out of their pocket every time they get into a game but are still provided some decent level of protection from brute-force attacks.
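
(For the curious, the trusted-IP behavior Stephane describes could look roughly like the sketch below. This is my guess at the mechanics, not Blizzard’s actual implementation; the 30-day lifetime in particular is assumed.)

```python
# Sketch of "ask for the OTP only from new IPs": remember each
# (account, IP, service) triple after a successful OTP check, and skip
# the prompt until that memory expires.

import time

TRUST_LIFETIME = 30 * 24 * 3600  # seconds; invented, not Blizzard's value
trusted: dict[tuple[str, str, str], float] = {}  # triple -> expiry timestamp

def needs_otp(account: str, ip: str, service: str) -> bool:
    expiry = trusted.get((account, ip, service))
    return expiry is None or expiry < time.time()

def record_otp_success(account: str, ip: str, service: str) -> None:
    trusted[(account, ip, service)] = time.time() + TRUST_LIFETIME

if needs_otp("alice", "203.0.113.7", "battle.net"):
    # ... prompt for and verify the token code and PIN here ...
    record_otp_success("alice", "203.0.113.7", "battle.net")
```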

Micke November 7, 2013 8:00 AM

Facebook is also doing this. If my router has changed IP, it thinks my login with perfectly valid ID and psw is fishy in some way. I have to identify (sometimes incorrectly) tagged friends in stamp-sized photos and get thrown out if my guess isn’t right. Bummer.

Birch Thompson November 7, 2013 8:10 AM

Nice idea, but the evil side of me can’t help thinking it’s an interesting vector for a DoS attack.

Now, instead of X failed retries causing a lockout, with some creativity it could be x, where x is less than X.

Pete November 7, 2013 8:34 AM

I think such a system has the same problems as profiling at security checkpoints. The bad guys can figure out the selection criteria and then game the system, since most of the signals can be easily spoofed (IP location, browser ID, …).

Ryan November 7, 2013 8:47 AM

Yup. Same as the credit card companies evaluating every transaction based on the location, the merchant, time of day, your spending patterns, etc. They stopped depending on merchants checking ID a long time ago. Authenticate the transaction, not the person!

grisu November 7, 2013 8:53 AM

Thoughts of [enter name of appropriate secret service of your country] while reading the article above:

“Great idea. This is using ‘big data’ at its best. Think about all the new conclusions we can draw from THIS great data (combined metadata on internet and system behaviour leads to much better user profiling). There should be connected databases storing MAC addresses for all services all over the world. Let’s go to our government and insist on a new bill to achieve this goal, for security reasons related to terrorism and child porn.”

Hopefully, the “like” had some sarcasm in it.

qwertyuiop November 7, 2013 8:56 AM

Yahoo! mail does this too. I was in Paris at the weekend (watching Bruce at the ISF World Congress!) and logged in to check mail. It recognised I was in an unusual location and gave me some options as to how I could prove I was me. I went for the authentication code sent to my mobile phone by SMS. Interestingly when I got into my account Yahoo had sent me an email saying what had happened and inviting follow-up if it wasn’t really me.

Trevor November 7, 2013 9:14 AM

My bank has something similar. I have to request that they send an OTP to a registered phone or email address when I log in. Once I succeed I can authorize that IP to log in without the OTP in the future. It’s a step in the right direction but not quite far enough. Banks have fairly sophisticated algorithms to detect high-risk credit card transactions based on past behavior, and require a phone call to authorize high-risk transactions. Why not apply those same algorithms and heuristics to website logins, but with additional levels? Basic is username/password; the next level requires recognition of a chosen image or a security question. Next, they require a feedback system like the OTP email/text message. Finally, they would require a phone call to customer service to unlock the account for that condition.
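
(Trevor’s ladder maps naturally onto a simple lookup from risk score to challenge. A minimal sketch, with tier boundaries invented purely for illustration:)

```python
# Sketch of a tiered challenge ladder in the spirit of Trevor's comment.
# The score ranges are arbitrary assumptions.

def challenge_for(risk: float) -> str:
    if risk < 0.25:
        return "username/password"
    if risk < 0.50:
        return "chosen-image recognition or security question"
    if risk < 0.75:
        return "one-time code via email/SMS"
    return "phone call to customer service to unlock the account"
```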

Vincent November 7, 2013 9:25 AM

“Why not just enable all options for all accesses?”

Because high-security authentication systems are usually also highly cumbersome, and requiring the highest of standards for common, frequent access will, in the real world, result in active opposition.

If, to open your calendar from your company-issued desktop in your office after 15 minutes of inactivity, you require people to enter a one-time password supplied by an external token that replies to a challenge displayed on the calendar login, soon people will be passing around “that tool from Tom’s son in accounting that refreshes the calendar every 5 minutes”.

Once your users are actively fighting your security in “stupid cases”, they will also start fighting your security in cases where the extra security is necessary, because, in their mind, you’re that guy who’s an asshole, and therefore, the enemy.

There are applications where the risk threshold is low enough that you will require an additional authentication every time. But not every access to every application must use the highest authentication available.

Alex November 7, 2013 10:12 AM

Banks do this already. Ever noticed how you get asked an additional security question when you log in from a new location? At RSA 2013 they were also talking about typing patterns and action patterns (what does the user usually click first when they log in) as means of detecting anomalies.

Nick P November 7, 2013 11:57 AM

@ Kevinv & Vincent

One thing to remember when using banks in security debates is that they expect and accept a certain loss rate. The last time someone quoted it to me, losses to fraud were a single-digit percentage of total revenue. This is different from many other asset-protection decisions in a few ways:

  1. The loss is essentially just money, the amount/effect is predictable, the amount is tiny, and their tradeoffs mean accepting it is a good thing.
  2. They intentionally choose certain weak schemes because they’re cheaper to implement, convenient for the user, and the bank doesn’t care about the losses so long as it’s within their expected percentage.

Now, try to translate this to other confidential assets: marketing plans; strategic merger/buyout info on asset value; trade secrets in manufacturing or tech; bank account of a small to midsize firm.

The list goes on. The thing to know here is that losing any of these just once can do the company in. More likely, it will result in a loss serious enough that they’ll wish they had better security in place. Example: Brian Krebs regularly reports on ACH fraud wiping out small businesses and hitting others for amounts up to the million-dollar range. The banks’ perspective led to pushing ACH onto Windows machines with little to no security in the process. The banks make more money that way. Also, there’s zero risk for them in their assessment because the customer is responsible for their own losses. Many different people (myself included) have developed ridiculously cheap solutions to this huge problem but there is no uptake. The banks don’t care.

Another problem is the risk scoring itself. It does have some sense or value that I can readily see. The problem is that most quantitative risk scoring uses models that assume a passive environment that occasionally causes certain problems; failure is incidental. INFOSEC deals with combinations of: environmental faults; human error; passive/predictable attackers; intelligent, adaptable, active attackers; and creative, motivated, well-funded attackers. It’s hard to see how a numerical methodology tells you anything beyond the odds of some well-understood cases.

So, lessons from my post:

  1. Many security goals aren’t meant to tolerate a consistent rate of compromise, which tiered authentication can make easier to accept.

  2. Banks don’t give a s***, routinely accept insecure operations, and push plenty of the cost onto their customers. Hence, their choices shouldn’t be supporting evidence for anything except other bankers’ security decisions.

  3. Risk-scoring mechanisms are often designed under models where the attacker is passive or the events are accidental/statistical. INFOSEC has active, adapting opponents as well as passive, predictable ones. So such risk metrics are unreliable in the INFOSEC field.

Lamont November 7, 2013 12:01 PM

So when I typo my password 6 times before I’ve had my coffee the system would go “oh, yes, this has to be lamont” rather than locking me out due to a misguided attempt to block a brute-force attacker?

Tom L November 7, 2013 12:24 PM

I can see risk-based big data handling a load of ACL-based problems in the future. Not only to prevent hackers, viruses, etc. but also whistleblowers and corporate thieves.

For example:
When you log in at a workstation, checks are made to see whether CCTV facial recognition saw you entering the building, and whether it’s your normal working hours.

Patterns of use built up over time; transfer more data than usual, you get flagged.
Suddenly send more email to external users, you get flagged.

The riskier you seem the tighter the controls. Maybe no access from home, or unable to work on sensitive projects.

Impossibly Stupid November 7, 2013 12:46 PM

I do similar things, but I use the word “continuity” rather than “risk”. That is to say, it isn’t inherently more risky to accept a login attempt from a new IP, but it does break with expectation, so it may be perfectly reasonable to take extra measures to establish some continuity. That could mean asking for extra credentials for the new IP, invalidating or warning about access from the old IP, or all sorts of other things that might make the most sense for the system being implemented.

To my way of thinking, trust is mainly about continuity.

aboniks November 7, 2013 1:04 PM

Tom says:

“2. I don’t see the point. This does not increase the security of any system; it will only reduce security on low-risk, low-value access requests. Today organisations enable every security option available. Under this scheme, the more rigorous of these options would only be applied to high-risk, high-value access. So what have you gained? Why not just enable all options for all accesses?”

And I tend to agree. All that effort spent developing and deploying variable-strength security systems is a total waste. Changing the complexity of the login process is just a band-aid fix for the fact that the authentication factors we routinely use are intentionally undermined in the name of user-friendliness. (And the average user doesn’t bother taking even obvious steps to protect them.)

Users are not going to abandon platforms they’re invested in just because the passphrases get longer or they need two or more factors to access them; they’re just going to bitch for a minute and then get with the program.

Take Facebook as your model of how to change anything in-system without giving a damn what your users think.

If you want a more secure system, stop catering to lazy users. If your higher-ups don’t like the idea, show them just how easy it is to exploit the system they’re defending…and do it by exploiting the existing system to access THEIR data, while they watch you do it.

Access control is, at its root, a people problem, not a technology problem. Bending over backwards to make minimal security gains while letting lazy users stay lazy isn’t good security…it’s just a PR win of questionable systemic value.

John November 7, 2013 1:37 PM

I was thinking the same as Pete above. How is this different from profiling, which you are on record as opposing? Is it because this is profiling an actual action, and not the person/entity behind the action? That seems like an awfully fine line to draw.

aboniks November 7, 2013 2:06 PM

John, re profiling

I’d say this is different.

Profiling is denying or degrading a service that would be otherwise available because the user is X, where X is a condition that has no actual systemic impact if you make the service available to the user.

“You can’t vote because you’re a felon.”

Access control is denying or degrading a service that would be otherwise available because the user is NOT X, where X is a condition that does have a systemic impact if you make the service available to the user.

“You can’t vote because you’re not a citizen.”

In this case, “You can’t log in to John Smith’s network because you can’t prove you’re John Smith.”

Making it more difficult to prove that you’re John Smith because you’ve never logged in from Pakistan before isn’t profiling Pakistanis…it’s profiling you, which is the root of access control.

John November 7, 2013 2:47 PM

@aboniks

Thanks for the response, and for your descriptions of profiling vs. access control. But it seems that access control relies on profiling to determine how hard the system should make it for the user to access the system.
“If there is a reasonably high probability that the user is X, reduce the barrier to entry; ELSE throw some additional authentication methods into the mix.”

i.e. IF the login request comes from an IP that has previously logged in to this account, and there is a cookie on the device from a previous successful login, and the user has a history of logging in at this time of day, (and the password is correct), just let him in.

or IF the login request comes from an unknown IP, and the device is not one that the user has used before, make him enter the prime odd numbers in his mother’s SSN.

In neither case are we denying access outright, but we are raising the bar for entry. i.e. determining if the attempt is high/low risk based on what we can observe about the attempt.
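
(John’s rule renders almost directly into code. A sketch using his example factors, with everything hypothetical:)

```python
# Literal rendering of John's hypothetical decision rule. The factors and
# the fallback challenge are his examples, not a real implementation.

def login_decision(password_ok: bool, known_ip: bool,
                   has_cookie: bool, usual_time: bool) -> str:
    if not password_ok:
        return "deny"
    if known_ip and has_cookie and usual_time:
        return "allow"           # low observed risk: just let him in
    return "extra-challenge"     # unknown IP/device: raise the bar
```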

From a non-IT perspective:
“IF the person looks ‘American’, speaks without an accent, bought a round-trip ticket, and didn’t set off the metal detector, don’t select him for additional screening.” vs “IF the person wears robes, claims he has a cardiac condition that exempts him from all scanners, and was seen mumbling to himself in the bathroom before reaching the checkpoint, require him to undergo a patdown, psych eval, and full carryon luggage search.”

Again, neither individual is banned from passing the security checkpoint. But one is flagged as higher risk (and therefore required to pass a higher barrier to entry) because of what we can observe about him.

Thoughts?

Nick P November 7, 2013 2:54 PM

@ John

“i.e. IF the login request comes from an IP that has previously logged in to this account, and there is a cookie on the device from a previous successful login, and the user has a history of logging in at this time of day, (and the password is correct), just let him in.”

Those conditions are usually present in an infected Windows machine. Matter of fact, real world malware writers have gone much farther than this in making malware that defeats two factor authentication. So, such an approach is not good enough if the endpoint running the login is in the TCB. (Usually the case.)

John November 7, 2013 3:30 PM

@Nick

Sorry, Nick – I was just making up a simplistic example, not trying to lay out a real authentication scenario. Sorry for not making that clear!

Ebay November 7, 2013 3:48 PM

Ebay did not feel comfortable when I used my laptop connected to the wifi hotspot of our accommodation during holidays: it asked me to receive a call AT MY HOME PHONE LINE, because it detected I was abroad.

Hence this risk assessment creates geographical segregation of the internet, like Blu-rays are geographically segregated.

This is very dangerous for the consumer.

What’s more, this is undocumented, so I do not know which details about my Ebay account will be needed when travelling: I need to keep a local file with absolutely all account and transaction details, and I need to keep them updated.

In my particular case, my home phone number was not up to date because I had moved some years ago, so my Ebay account now requires me to call an expensive Ebay customer service line (looping on long “hold on” messages).

What if it was my main social account? My mail?

Mailman November 7, 2013 4:36 PM

“So when I typo my password 6 times before I’ve had my coffee the system would go “oh, yes, this has to be lamont” rather than locking me out due to a misguided attempt to block a brute-force attacker?”

I had the same thought but this wouldn’t be as simple to implement as it looks. Passwords are typically stored in hashed form; change one character in your password and the hash will be completely different. So it’s not easy to determine that the invalid entry was just a typo and not a completely bogus entry.

However, what could be done is: if the user always makes the same typo, those typos could also be recorded as valid passwords. For instance, if you keep mistyping D as F, then “password” and “passworf” would both be accepted as valid passwords in low-risk settings.
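
(Mailman’s scheme can be sketched without ever storing the typo in the clear: keep the hash of a recurring failed entry and promote it once it has repeatedly preceded a successful login. Everything here, the promotion threshold especially, is an illustrative assumption:)

```python
# Sketch of typo-tolerant password checking against hashes only. The server
# can't reverse a hash to "see" a one-character typo, but it can remember
# the hashes of entries that habitually fail right before a real login.

import hashlib
from collections import Counter

def h(pw: str) -> str:
    # Illustration only; a real system would use a slow, salted hash.
    return hashlib.sha256(pw.encode()).hexdigest()

accepted = {h("password")}   # hash of the real password
typo_counts = Counter()      # hash of a failed entry -> occurrences
failures: list[str] = []     # failed hashes in the current login attempt
PROMOTE_AFTER = 5            # arbitrary threshold for "habitual" typos

def try_login(typed: str, low_risk: bool) -> bool:
    digest = h(typed)
    if digest in accepted:
        # Success: count each typo that preceded it; once a typo has
        # recurred often enough, accept it too (only in low-risk settings).
        for f in failures:
            typo_counts[f] += 1
            if low_risk and typo_counts[f] >= PROMOTE_AFTER:
                accepted.add(f)
        failures.clear()
        return True
    failures.append(digest)
    return False
```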

Bob Monroe November 7, 2013 5:41 PM

Aboniks, one of the underlying principles of security is to make it invisible to the user.

You wrote: “If you want a more secure system, stop catering to lazy users. If your higher-ups don’t like the idea, show them just how easy it is to exploit the system they’re defending…and do it by exploiting the existing system to access THEIR data, while they watch you do it.

Access control is, at its root, a people problem, not a technology problem. Bending over backwards to make minimal security gains while letting lazy users stay lazy isn’t good security…it’s just a PR win of questionable systemic value.”

This mindset does not take into consideration that security impedes production. Our job should be to enable production, not hinder it. Users aren’t lazy, they just want to get their work done without jumping through endless hoops.

Security is not for security’s sake. It is rather a required action to allow data to maintain CIA (confidentiality, integrity, availability). As soon as you start thinking that security is an end in itself, you will find yourself in very lonely company. Don’t read all those white papers and fear-tactic advertising by vendors.

Talk to a few average users and see what they have to say about security. Most of them understand the importance of such protection but would rather have it built into the entire process.

Both safety and risk management have made amazing progress towards building themselves into everyday processes. Yet security still burdens users with so many additional steps that it slows down work performance. This is not the direction we need to go.

Security should be so seamlessly integrated into every system that the end user barely notices it. This is not the path we currently have; however, we should be striving for it, as safety and risk management already have.

Wael November 7, 2013 5:55 PM

I agree with @Tom

The proposal, in a nutshell, is:

  1. Use single-factor authentication for low-risk login attempts
  2. Use more factors for elevated-risk login attempts
  3. Outright rejection of high-risk login attempts

I say: Just adopt two decisions:

  1. Use multifactor authentication for all login attempts
  2. Reject high-risk attempts, typically based on velocity checks and analytics

An analogy to that is the HW architecture that started with several rings of privilege, for example Ring 0, Ring 1, Ring 2, and Ring 3. Most modern OSs function fine with two rings or levels of privilege. The paper proposes nothing new, and packages the “idea” as a novel one. The paper also has some subtle mistakes, but I’ll not split hairs on those…

jimbo1qaz November 7, 2013 7:07 PM

I don’t like it. Smells very similar to the “nothing to hide” argument and “tor’s only for bad guys”.

Chris L November 7, 2013 7:10 PM

There have been login systems since around 2007 at financial institutions, using profiling (geolocation, typing speed, etc.) to add risk scores to logins and prompt the user for a second login. Implemented correctly, it doesn’t deny service to the user and the user experience is barely impacted. Implemented incorrectly, users can’t log in from outside their home country.

nonoptimal November 7, 2013 10:37 PM

I don’t like it either.

I’m all for bank security but there is no way this would be implemented intelligently. I’ll be on my vpn or in ipv6 land choosing a different address each time, and this ‘geo-ip’ bullshit will just wind up screwing me over.

I don’t know if you’ve heard, but we’re on the Internet. I should be able to access a website’s utility from any part of the planet. It shouldn’t matter what device I use, where it is located, and so on. It defeats the concept of general-purpose computing. Like the others have said, it is exactly when it is most awkward for you to be unable to move money that you’d become an edge case.

We already have this shit when we’re trying to buy something with our credit card, and then it bounces and somebody from the company rings to check on us. It’s medieval finance with an extra helping of paternalism.

Bob M November 8, 2013 1:54 AM

I had a similar idea that GPS might be a good secondary factor of authentication, because it would allow you to set safe zones where the computer could be accessed, so that your password wouldn’t work if your computer were taken off your property or intercepted while crossing a border.

I like the idea of the other factors too and obviously it would work for most people, but I would hate to be that one guy who broke his finger or something and was never able to log in again.
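
(Bob M’s safe zones amount to a geofence check. A minimal sketch using the haversine distance, with coordinates and radii invented:)

```python
# Sketch of a GPS "safe zone" factor: a login looks lower-risk only when
# the reported coordinates fall inside a registered zone.

import math

SAFE_ZONES = [                   # (latitude, longitude, radius in km)
    (40.7128, -74.0060, 1.0),    # e.g. home
    (40.7580, -73.9855, 0.5),    # e.g. office
]

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def in_safe_zone(lat: float, lon: float) -> bool:
    return any(km_between(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in SAFE_ZONES)
```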

Mike the goat November 8, 2013 2:49 AM

I ran into the failings of these types of systems in the past. I formerly travelled quite frequently and would often find that access to my PayPal account had been unpredictably suspended due to “suspicious activity” (logins from multiple geographical areas).

I telephoned, I emailed, and I complained for perhaps six months, and yet the situation continued. It appeared that they couldn’t simply put in an exception for my account and whitelist the three countries I was travelling between. “No, it can’t be done” was the response of the PayPal representative.

I ended up using an ssh tunnel back home to get around this crazy behavior until I unexpectedly got an email telling me that if I signed up for 2FA then they could “reduce the likelihood of flagging”. Presumably this meant that unusual activity could still cause a suspension, but nonetheless they sent me a Verisign “inCard” and it hasn’t recurred.

So I would suggest that risk-based authentication might have its applications, but when failure results in you being unexpectedly cut off from your funds (on a weekend!), don’t expect me to be happy about financial institutions minimizing their risk.

Adam November 8, 2013 3:45 AM

I used to work at an investment firm which did this. All authentication was done by a single security team (app teams sat behind this), and they implemented safeguards to protect against suspicious login activity based on geographic location and other metrics.

aboniks November 8, 2013 1:53 PM

Bob Monroe,

“This mindset does not take into consideration that security impedes production. Our job should be to enable production, not hinder it. Users aren’t lazy, they just want to get their work done without jumping through endless hoops.”

We have a philosophical difference of opinion. 🙂

Most public-facing access control that I’ve encountered is just poorly thought out. Adding new hoops to jump through that are based on easily found information (mother’s maiden name, previous addresses, etc.), and requiring short passwords containing non-alpha characters…this is all catering to laziness. And it really is laziness, and it feeds users’ tendency to practice poor physical security by writing things down.

Use a physical token and a long pass-phrase and you can stop installing gigantic meaningless hoops that don’t actually add to security in any meaningful way.

The argument that people find it inconvenient to come up with long pass-phrases can be discarded entirely just by adding a character counter to the process of creating the pass phrase. It’s annoying for users not because it’s long, but because they have to count.

The argument that physical tokens are clumsy, etc, can also be discarded…people have keys for their houses. Nobody whines about that anymore, but someone probably thought “who’s going to want this…people will just keep forgetting them and end up locked out”.

People problems. Focus on training the people, and designing simpler processes for them to set up meaningfully individualized credentials. Make the credentials robust to begin with and you have to ask for fewer of them, less often.

If you’re a business person having trouble with employees failing to follow security procedures, incentivize them or fire them, but don’t mollycoddle them. It’s pretty simple. If they can’t remember a 40-character alpha pass-phrase and a key card after you’ve made it easy for them, just fire them already. They can remember a driver’s license and where they live…this isn’t rocket science.

Facilitating productivity, if that’s your goal, is better served by hiring employees who aren’t lazy and/or unwilling to take security seriously, and then utilizing fewer security procedures that are actually robust and unique, rather than trying to make up for oversimplified pseudo-credentials by stacking a lot of them together. It doesn’t matter how much publicly available or spoofable data you use to identify someone…it’s still just theater.

aboniks November 8, 2013 2:58 PM

@ John

“From a non-IT perspective:
“IF the person looks ‘American’, speaks without an accent, bought a round-trip ticket, and didn’t set off the metal detector, don’t select him for additional screening.” vs “IF the person wears robes, claims he has a cardiac condition that exempts him from all scanners, and was seen mumbling to himself in the bathroom before reaching the checkpoint, require him to undergo a patdown, psych eval, and full carryon luggage search.”

Again, neither individual is banned from passing the security checkpoint. But one is flagged as higher risk (and therefore required to pass a higher barrier to entry) because of what we can observe about him.”

I see what you’re driving at, but I think we’ve got a semantic problem. Profiling in the sense that it’s currently in use by the media is really just a second generation euphemism for “bigotry”. (“Discrimination” being the first-gen euphemism, which used to actually be something you’d really want to be rather good at.) Profiling in the access control sense, I would argue, is concerned with data points of real value to a given security system. Build a “profile” of the user and use it to ensure that the user is actually who s/he claims to be.

So, when our man asserts that he can’t be scanned, that ticks a box. “Passengers” are people who go through security, including scanning. This guy can’t be scanned, so you have to do something else. That, imo, is really profiling. A passenger is X, Y, and Z, and gets treatment #1. If this guy is not X, then he gets treatment #2.

The fact that he mumbles, isn’t pink, and wears a robe…those data points shouldn’t be relevant if the rest of the system is properly designed and overseen. Giving him treatment #2 based on criteria that other passengers don’t have to meet isn’t profiling, no matter how much the conversation-shaping pundits say it is…it’s just bigotry.

Ethical questions aside, it’s really not a very good idea, either. I’m pink, with dreadlocks and a long beard; I travel with my stuff in military-issue containers, never check bags, don’t buy my own tickets, and have all my carry-on items in neat little ziplocs per TSA rules. I also hand-roll my own smokes in the terminal for later use…and yet I never get treatment #2, because I have my retired military ID in a pouch on my chest, use my passport for identification, and make nice-face and say polite things to people in uniforms.

I’m not any kind of threat, but the number of “mixed signals” that my appearance and behavior give out to staid southern pink people would tie the system up in knots if, for instance, I was also brown, or sounded “unamerican” when I spoke. I don’t fit the picture most air travelers carry in their heads of a “normal” American, but that picture isn’t relevant to my actual risk to their security, only to their prejudices.

Back to the technical end though…I’d say that in the technical sense of the word, you’re correct, variable-strength security as described above does rely on “profiling”. But only in the technical sense of the word, not the way the word is commonly used in the media.

Impossibly Stupid November 9, 2013 11:00 AM

@aboniks
“Profiling in the access control sense, I would argue, is concerned with data points of real value to a given security system. Build a “profile” of the user and use it to ensure that the user is actually who s/he claims to be.”

That is why I prefer to use the word “continuity” rather than “risk” or “profile”. If, for example, I give you a random number (or whatever) at the end of one session and require it as part of the challenge-response at the start of the next session, that helps establish a continuity of trust, which is what the real intent is. Using other, loaded words really miscommunicates that intent.
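
(The session-chaining idea sketches out simply. This is one possible reading, with all names hypothetical:)

```python
# Sketch of continuity-of-trust: hand the client a random secret at the end
# of each session and require it at the start of the next, chaining
# sessions together.

import secrets

expected_token: dict[str, str] = {}   # username -> token issued last session

def end_session(user: str) -> str:
    """Issue a fresh token for the client to present next time."""
    token = secrets.token_hex(16)
    expected_token[user] = token
    return token

def start_session(user: str, presented: str) -> bool:
    """Continuity holds only if the client returns the last issued token."""
    expected = expected_token.get(user)
    # compare_digest resists timing attacks on the comparison
    return expected is not None and secrets.compare_digest(expected, presented)
```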

Dirk Praet November 9, 2013 8:53 PM

@ Wael

“Reject high-risk attempts, typically based on velocity checks and analytics”

I think an additional consideration also ought to be what kind of service/data is being logged into/accessed.

Personally, I’m not too keen on having geolocation as a deciding factor, because everyone using Tor, VPNs or other anonymizing services would be seriously screwed.

Wael November 11, 2013 10:59 AM

@ Dirk Praet

“Personally, I’m not too keen on having geolocation as a deciding factor, because everyone using Tor, VPNs or other anonymizing services would be seriously screwed.”

Neither am I. But geolocation does have its specialized use cases. It’s not a generic control. Some have used it loosely because it was, and maybe still is, a buzzword.

Jean Camp November 11, 2013 1:25 PM

“Access control is, at its root, a people problem, not a technology problem. Bending over backwards to make minimal security gains while letting lazy users stay lazy isn’t good security…it’s just a PR win of questionable systemic value.”

Let me restate this argument in context. So if there are brain surgeons in your organization, you should have them spend their training time learning your relatively unusable secure software instead of, say, brain surgery. Because having secure designs that work for humans is hard. Since the system security designers want to be lazy, fire the brain surgeon. Don’t mollycoddle him, because all expertise but security is useless.

Good luck with that argument. Personally, I would rather my medical providers (who do handle medical data) have systems that serve them so that they can provide better medical care. To a lesser degree, I think the same about mechanics, contractors, service providers, and a host of other people who handle financial and corporate data.

aboniks November 11, 2013 11:09 PM

Jean,

Interesting attempt to re-frame what I was saying, although that wasn’t actually what I was saying at all. Are you in PR?

My point is that complex security and effective security are not equivalent. Go ahead and use multiple bits of data to create new hoops for your users to jump through, but don’t make the mistake of thinking that the number of hoops is going to be relevant to the end result, security-wise.

Your neurosurgeons (and your organization) will be far better served if they are identifying themselves with a physical token and a passphrase of significant length (30-40 characters) than they will be with eight characters of !33tspeal<, their mother’s maiden name, previous street name, and neck diameter.

The conventional wisdom argues against physical tokens and long passphrases because “users will whine,” and then, in an attempt to placate them, we end up with short passwords that are nearly impossible to remember in order to keep them barely secure against brute-force attacks, with a layer of “security through obscurity” public data on top…and it’s obviously not even obscure.

This notion that asking for more data in order to be “more sure” that the end user really is John Smith could be completely discarded if the baseline factors weren’t being undermined in an attempt to be more “user friendly”.

So no. In your scenario, I’d fire you, as the security professional putting forth that argument, and hire someone who could grasp the idea that escalating authentication barriers based on publicly available data are a waste of resources that don’t actually make anything more secure, and simply encourage poor security practices by the neurosurgeons in question.

Mark Eastman July 25, 2014 1:37 PM

Interesting idea. My team developed a commercial product about 12 years ago that did just this (and a lot more), although we called it ‘score-based authentication and authorization’, and it went a couple of steps further. One of our key principles was that the system would never ‘automatically’ lower the barrier to authenticate (or authorize); it would only raise the barrier if the perceived risk was elevated due to unusual behavior. Another benefit was that administrators only needed to assign a ‘minimum passing score’ to an asset. They didn’t need to specify what mechanism would be used (e.g., password, voice, fingerprint, smart card, etc.). The potential cost savings for a large corporation were huge. Too bad the product never really took off and the company closed.

Mike Cichon July 20, 2017 12:55 PM

Under the category of what is old is new again… risk-based authentication has come a long way since this blog post was published. Ironically, of all that is written on this subject, on a Google search for “Risk Based Authentication” this blog is one of the first articles to surface. This is probably due to all the great content here, so kudos to you, Bruce.

As it happens, Forrester Research just this week released a Wave report on RBA. It’s available on my company website for complimentary download, so I won’t post a link here, but the net of it is that RBA is now viewed by Security and Risk professionals as a critical element of their fraud prevention tool kit.

The sophistication and intelligence of RBA, and its integration into the enterprise security infrastructure represents a lot of what is new here. RBA has moved way past IP and device recognition with a focus on pinpointing fraud to a high degree of accuracy, delivering a personalized and frictionless digital experience to legitimate users, and eliminating a great deal of operational overhead related to false positives. There are also toolsets available now to help resolve transactions that do require manual review.

The future appears quite bright for RBA as it delivers on all fronts – fraud reduction, good digital UX, fewer manual reviews, and more efficient management of suspicious transactions. For more check out the Forrester Wave on RBA.
