Schneier on Security
A blog covering security and security technology.
August 26, 2009
Small Business Identity Theft and Fraud
The sorts of crimes we've been seeing perpetrated against individuals are starting to be perpetrated against small businesses:
In July, a school district near Pittsburgh sued to recover $700,000 taken from it. In May, a Texas company was robbed of $1.2 million. An electronics testing firm in Baton Rouge, La., said it was bilked of nearly $100,000.
In many cases, the advisory warned, the scammers infiltrate companies in a similar fashion: They send a targeted e-mail to the company's controller or treasurer, a message that contains either a virus-laden attachment or a link that -- when opened -- surreptitiously installs malicious software designed to steal passwords. Armed with those credentials, the crooks then initiate a series of wire transfers, usually in increments of less than $10,000 to avoid banks' anti-money-laundering reporting requirements.
The alert states that these scams typically rely on help from "money mules" -- willing or unwitting individuals in the United States -- often hired by the criminals via popular Internet job boards. Once enlisted, the mules are instructed to set up bank accounts, withdraw the fraudulent deposits and then wire the money to fraudsters, the majority of which are in Eastern Europe, according to the advisory.
This has the potential to grow into a very big problem. Even worse:
Businesses do not enjoy the same legal protections as consumers when banking online. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.
In contrast, companies that bank online are regulated under the Uniform Commercial Code, which holds that commercial banking customers have roughly two business days to spot and dispute unauthorized activity if they want to hold out any hope of recovering unauthorized transfers from their accounts.
And, of course, the security externality means that the banks care much less:
"The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money," Litan said. "But the banks don't spend the same resources on the corporate accounts because they don't have to refund the corporate losses."
Posted on August 26, 2009 at 5:46 AM
It surprises me it took them so long to target companies. There's got to be more money to be had from a company than from grandma's savings account. Or is it that they are only now noticing, or that there's more reporting? Seems like all three. And just WHO is the litigant in the Pittsburgh school district case?
"School board vs. Some Guy in Romania"
Even suing the bank? And 74 withdrawals over two days maybe should have raised a flag. Aren't we liable for people using our passwords?
I thought the reporting bar was lowered to $5,000 per transaction?
Regarding identity risk for business, I have a doozy for you:
When you change the publicly registered address for an Austrian business, you can do so via unvalidated e-mail, and I'm not even talking cryptographically unvalidated: any From: header will do.
You then get a confirmation snail-mail -- only to the new address!
That's the mail address that will be used for all government mailings (not the mandatory associations and social insurance, though).
This is a risk for any kind of business, but I guess they'll take a closer look if Austrian Airways changes its head office address. Small businesses, not so much.
"We're from the government and we're here to help."
Of course the banks will give their usual reply to businesses about using their secure ID tokens etc.
The only problem is they no longer work; see this NYTimes article on malware "real-time sending" of "SecurID" numbers so that bank accounts etc. can be emptied.
Sometimes I just wish the banks etc. would wake up to the fact that each and every transaction and its contents need to be properly authenticated to both parties, not just user authentication at sign-on, but also through an out-of-band channel that is not mutable.
This slow incremental step by step approach to increasing security is not going to work with these sorts of hackers. They now have the back-end processes in place to launder the money relatively safely, and technical resources and finance to react orders of magnitude faster than the banks.
Sadly I have been predicting this since the last century, and yet the banks just continue externalising the risk one way or another and dragging their heels.
" And, of course, the security externality means that the banks care much less"
maybe this will change when the banks themselves are targeted instead of small businesses
That was an interesting article, but it seems like it would be very easy to fix the SecurID system. What I infer from the article is that the token is good for one minute; if they changed the system so that the token was no good after it was used, that would eliminate the problem, correct?
The passcode from a SecurID token is only valid for one transaction. The SecurID system doesn't work the way the article describes; they seem to be talking about another system entirely.
Who moves more money via ACH, a business or your grandma? The bad guys will go where the money is.
It's good for one minute or one transaction, whichever comes first.
"And, of course, the security externality means that the banks care much less"
But on the other hand, the larger the business, the smaller the asymmetry between the bank and the customer (as long as adequate competition exists).
Banks care a lot more about customers who have $2 million on hand than $2 thousand.
At a certain point, it's no longer an "externality". In these cases, the externality only exists because of asymmetry. It's not an inherent externality, where the victim is in no way a negotiator in the relationship (say, issues of pollution), but one due to the fact that one of the ostensible "negotiators" in reality has very little influence on the relationship.
The solution would then seem to be for small business to bank with small banks -- in essence, not a direct security issue at all, but a smart business issue.
SecurID token codes are already only usable once. The problem is that this malware will typically intercept your token code entry, send it from their server and send you a duplicate of the bank's output. It's a full man-in-the-middle attack that can work now that the malware is real time.
In essence, the token code is only usable once, but the malware is the one that uses it first.
@ Nevins, anonymous coward, Rr,
Whichever way it works does not unduly matter; the simple fact is that if it's fixed, the attackers will simply move to the next "low hanging fruit" way faster than the banks can roll out the fix.
The real problem is piecemeal upgrades of a system that has been known to be broken for well over 10 years.
I have suggested on this and other blogs in the past ways we could start thinking about solving the issue.
In essence it boils down to,
0, Establishing a secure authentication channel
1, setting up a communications channel
2, two way authentication of the two entities
3, two way authentication of each transaction
The important part is step 0 establishing a secure authentication channel. This would typically be an external token with a keypad and LCD.
Importantly the only connection to the secure token should be through the user, and importantly the token must be immutable to prevent it being attacked.
The immutable requirement does not preclude the token being updated, but requires a physical interlock to be removed to enable any upgrade. However, it does rule out using things like mobile phones, PDAs, or any other normal user device which can be updated (albeit via signed code).
The recent issues with hashes should indicate just why it needs to be a physical, not a software, interlock.
Unfortunately this has a number of cost considerations and the banks are probably not going to go for it until forced to. And even then if it is via legislation they will fight tooth and nail to delay it as long as possible. It is only by losing significant market share that they will wake up and deliver, unfortunately as has been seen they will try a "fluff" approach first.
Doing proper secure two way authentication on all transactions with minimal hassle for the user is not an easy task and if not done properly will ultimately fail...
Thanks for the clarification.
I wonder if you could build into the token something like the IP address of the machine it was generated on, so that you could reject any attempts that did not come from the same IP.
I know it's easy to spoof IP addresses, but I think it would create a noticeable lag between when the malware got the token and when it was able to send the user output from the bank.
unless the malware created its own session locally
@Clive "Some times I just wish the banks etc would wake up to the fact that each and every transaction and it's contents needs to be properly authenticated to both parties not just user authentication at sign on etc but also through an out of band channel that is not mutable."
Some times? Clive you've said this so many times I think you've got it mapped to a hot key. Could it be that it keeps coming up?
(Skinner's times-5 rule: if something comes up 5 times, it's something to take action on)
And likely it belongs on a Tshirt....hmmmm either xxxxll size or 10 pt font.
"unless the malware created its own session locally"
Bingo give that man a prize 8)
This is the point: no matter what you do, you cannot trust the PC you are using. It can lie to you in various ways via low-level drivers or software that is loaded before the OS. Even signed, trusted code has been subverted in various ways, and this will only get worse with time.
Now starting from that point where do you go...
Effectively you have to work out how to build a secure channel on top of the insecure channel that is immune to attack but is also usable by the user.
The simplest way is that you enter a transaction ID into the token along with other details, and the token produces a data string that you then send off to the bank. The bank sends back a reply that you then type into the token; if it's OK, the token puts out another string that you send back to the bank as confirmation.
The problem here is how to gain the required level of security to avoid man-in-the-middle attacks with the minimum of typing by the user.
Also, you need to find some way of stopping the attack being automated, which means using a kind of random picture text that a human can correctly read with minimal effort but an automated process either cannot, or would take too long to.
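A rough sketch of the first leg of that exchange, assuming (purely for illustration) that the token computes a truncated HMAC over the transaction details with a per-account secret shared with the bank; the actual encoding is left unspecified in the comment:

```python
import hmac
import hashlib

def token_code(shared_secret: bytes, txn_id: str, amount: str, recipient: str) -> str:
    """Token side: produce a short, human-typeable code bound to one transaction."""
    message = "|".join([txn_id, amount, recipient]).encode()
    digest = hmac.new(shared_secret, message, hashlib.sha256).hexdigest()
    return digest[:8]  # truncated so the user can realistically type it

def bank_verifies(shared_secret: bytes, txn_id: str, amount: str,
                  recipient: str, code: str) -> bool:
    """Bank side: recompute the code from the details it received and compare."""
    expected = token_code(shared_secret, txn_id, amount, recipient)
    return hmac.compare_digest(expected, code)
```

Because the amount and recipient are part of the MAC input, malware that alters either in transit invalidates the code; it cannot be replayed for a different transfer.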
First of all, there is no such thing as identity theft. It is simply financial institutions being the victims of fraud and passing their losses onto their customers. From the perspective of the financial institutions, the term "identity theft" is one of the best things they have invented to date. They have done a great job of convincing their customers (and the general population) that their identity could somehow be stolen (it can't) as a way to pass on their losses. It is no different than a bank robber walking into a bank and stealing money from it; in that case, however, the banks haven't figured out a way to pass the loss directly onto their customers the way they have with their idea of "identity theft".
Regarding Clive's comments on real time man in the middle attacks, we do have technology that can easily stop this - it's called SSL authentication, and it works great. Unfortunately, the vendors implementing the technology have done such a disservice (mostly due to greed), that the whole concept of trust (aka CAs), which SSL authentication is based on, has become almost a joke, to the point where they are now forced to try to reestablish SSL cert trust (i.e. extended validation SSL certs), which was originally in place and was lost over time.
I agree with Clive's comment regarding client authenticated transactions, we realized 10 years ago working with digital signatures, that some form of "trusted sealed certified device" is a requirement to create digital signatures that have any chance of succeeding. There are just too many variables associated with creating a trusted digital signature on an untrusted computing device (i.e. desktop, laptop, smartphone, etc.).
There was a comment on Twitter just yesterday about a furniture business being targeted by a shipping agent con.
@ BF Skinner,
"Clive you've said this so many times I think you've got it mapped to a hot key."
Yippie somebody is listening 8)
Do you work in a senior position in a bank?
With regards to,
"Could it be that it keeps coming up?"
Yup, so often I've lost count. It's one reason why I do not have any online banking facilities and promptly leave a bank if they suggest it would be a good idea.
I even came up with reasons for why it still keeps happening (Hark I think I can hear the Moderator groaning so I'll stop 8)
"And likely it belongs on a Tshirt....hmmmm either xxxxll size or 10 pt font."
Make mine a UK XXXXL or US XXXL with the biggest point font that will fit 8)
I dispute the statement and mentality that due to the "externality... banks care much less." That gross generality may be true of some, but not those that truly care about their customers. Community banks strive to maintain strong, mutually beneficial relationships with ALL customers. Just because a loss would not have to be covered by a bank, does not mean that a commercial customer would understand that fact and retain their accounts within that bank. I can attest that at least some are investing in solutions and customer education to reduce loss/occurrences of theft. As fraud grows more complex with simultaneous structured transactions in/out amongst multiple bank locations, etc., the cost also rises on effective solutions to monitor and alert. Not investing in a solution after assessing risk is not the same as not caring.
Top Trends in ACH Fraud
Suspicious Activity Report Research
"The important part is step 0 establishing a secure authentication channel. This would typically be an external token with a keypad and LCD."
How about a dedicated hardware client? The cost of hardware keeps falling, so this might not be as ridiculous as it was 10 years ago. Example: A hardened netbook (or something smaller with a tailored UI), issued and managed by the bank. The only functionality would be secure Internet banking.
So what happens if someone switches netbooks with you? You can't tell if a computer is secure by looking at it, so a dedicated thief can take your netbook and leave you a similar-looking one (whether nonfunctional or compromised) and use the "hardware authentication" to clean out your account before you notice.
The bank my company deals with issues an SSL client certificate for each user that will be authorized to do wire transfers as part of the multi-factor authentication. That seems like the best solution -- certainly not invulnerable, but quite complex to MITM or steal the credentials. Clive, can you come up with an attack vector on that?
A dedicated client does not exclude other forms/factors of authentication (SIM card, PIN, password etc). Anyway, if you force the thief to switch hardware with you, it significantly raises the bar. Try doing that to a bank customer in the USA when you are sitting in e.g. Eastern Europe. It solves one important security issue (remote attacks), but, like all security, it is not a silver bullet.
Also, forget what I said about a netbook. Consider a small, dedicated hardware client with a basic banking UI, possibly something that would fit in a pocket or wallet. (Yes, this sounds very similar to a mobile phone, the key difference is that its only function is banking.) Less functionality and complexity makes it easier to build a secure system.
"Clive, can you come up with an attack vector on that?"
I'm not Clive, but how about a Trojan Horse on the computer that does the transaction? You don't need to be a MiTM if you control one of the end-points. If you can't trust the client, then you can't trust the transaction.
@And, of course, the security externality means that the banks care much less: "The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money," Litan said. "But the banks don't spend the same resources on the corporate accounts because they don't have to refund the corporate losses."
I think that is due in part to another externality, from a political perspective. There are many votes to be won or lost over protection of the individual, but fewer over protection of the business.
Trojan on the client computer -- true, but having to select the correct client cert would seem to introduce a minor Turing test, similar to a captcha, which would significantly raise the bar of practicality. Then again, perhaps it's not that difficult; I don't know.
"How about a dedicated hardware client? The cost of hardware keeps falling, so this might not be as ridiculous as it was 10 years ago. Example: A hardened netbook (or something smaller with a tailored UI), issued and managed by the bank. The only functionality would be secure Internet banking."
The problem is attack vectors.
You have to assume that any device with an accessible communications path can be attacked. We have seen attacks on all OSes in one way or another via a communications path, including portable memory (anybody remember floppy disk viruses?).
However, you do need one communication path, and that is to/through the user; therefore you have to say that an open connection path with sufficient bandwidth and lack of control is an attack vector.
And implicit in that is another (communication) attack vector, which is physical possession of the device, even temporarily. To reduce this risk, you make the device immutable; that is, no additional runnable code can be put on it (which would suggest a Harvard-architecture processor such as a member of the PIC family of CPUs). Oh, and put sufficient controls on the user comms path (say, a minimum of a secure password, just for argument's sake), including displaying last use and last access (which means a real-time clock is needed).
There is an obvious downside to "no comms and immutable": if a bug is discovered, the device becomes landfill unless there is a "factory interlock" method. This needs to be such that case seals have to be broken to get the required access, making it physically obvious to the user that this has happened.
Therefore if a client did not have the ability to be connected to a communications path "except through the human" with sufficient controls and "it did not have mutable storage" (except after a tamper evident physical interlock was removed), then yes it's going to do the job.
In practice an old PC without a hard disk and a dedicated CD-ROM OS/program would go a very long way towards this as it has no comms, but the CD-ROM could be swapped as could IO cards the BIOS etc etc.
Basically there are several security vectors that need to be considered, and in order of importance I rate them as follows.
Firstly, the biggest security vector is a communications path by which an attacker can get into the device; remove it.
The second is to reduce the ability of a person who "borrows" the device to install malware etc.
The third security vector (and one I have not considered) is user entry errors. That is, the software is so buggy that typing on its keypad can cause the software to malfunction in a way favourable to an attacker.
There are others but in a mass production device they are going to be difficult to justify (there is no such thing as 100% security after all ;)
I work in the banking industry (although not in a bank).
@Alan - the types of trojans we're seeing now (actual, not theoretical or hypothetical) easily beat the Turing test, because the human attacker is driving. The user's PC gets compromised, the attacker sees everything the user does, and the attacker can do anything the user can do and can do it without the user seeing anything. The attacker can capture the user's username, password, multifactor authentication code, and SSL client certificate - in real time - and use them to access the bank's wire transfer interface as if they were the user, from the user's PC, completely invisibly to the user.
Re the "externality" - first of all, JP is quite right; this isn't strictly speaking an externality, and the term is not just shorthand for "something that someone else pays for". But economic terminological quibbles aside - in my experience, banks are very much concerned with losses suffered by commercial customers. If they haven't invested in security for commercial accounts to the same degree they have for consumer accounts (and I'm not convinced that's true), it's not because of differences in liability - it's because there hasn't BEEN a lot of online fraud in commercial accounts. Until recently. Now there is, as the article points out. The crooks have figured out where the big money is, and the banks have noticed where the crooks are going.
Try telling the CEO of a bank that one of their largest customers just lost a few hundred thousand dollars and is threatening to take their business elsewhere because of it, but it's okay because it took them more than two days to report it so the bank doesn't have to eat the loss themselves. It will be the last thing you ever tell that CEO, I assure you.
"In practice an old PC without a hard disk and a dedicated CD-ROM OS/program would go a very long way towards this as it has no comms, but the CD-ROM could be swapped as could IO cards the BIOS etc etc."
That is a very interesting idea. Compared to the situation today with vulnerable home PCs used for Internet banking, an "Internet banking CD" provided by the bank(s) might actually improve the overall security.
And yes, I know that there is no such thing as 100% security. However, we have a fairly good understanding of how to secure a communication path between two trusted devices (by cryptography), this is generally not the weakest link. So if we could increase the level of trust in the client device then that would improve the overall security. IMHO the biggest security risk with (individuals) using Internet banking today is vulnerable PCs/operating systems. The attacks cited in the article support that statement: the criminals go after the weakest link, which is the security of the client OS/application software (possibly with a little social engineering added).
Having a (more) trustworthy client device might make the weakest link stronger than it is today.
Also - vendors of token solutions like RSA and Vasco are well aware of these kinds of issues, and they are pressing hard on the banks to make them aware of these issues too, because they see themselves as being The Solution.
And, in fairness, they probably are.
What the industry is moving towards now - slowly and reluctantly, because it's god-awful expensive for the typical bank and annoying as hell for end-users - is similar to what Clive described above. It's using hardware token authentication at the transaction level. For each transaction, the user enters information that meaningfully-to-the-user identifies the transaction (such as the amount and the recipient) into the token, which generates a code. The user enters the token-generated code into the bank's interface. The bank verifies that the code is valid before accepting the transaction.
You have to do transaction-level authentication because a compromised end-user PC can have their legitimately-logged-in current session hijacked. You have to enter amount and recipient into the token because the attacker could spoof anything else. The amount and recipient have to be specified in user-meaningful ways, because otherwise the attacker could still spoof them.
A much simpler, cheaper, low-tech solution that most banks use to great effect is to require their commercial customers to have a multi-party approval scheme for transactions. One person has to log in and initiate a transaction; one or more other people have to log in and approve the transaction. This raises the bar a lot. Still not foolproof, but more than good enough - at least, against the kinds of attacks we're seeing today. Stay tuned.
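The multi-party approval scheme can be sketched as a small state machine; the class and method names here are illustrative, not any bank's actual interface:

```python
class WireTransfer:
    """A transfer that one user initiates and at least one other must approve."""

    def __init__(self, initiator: str, amount: float, recipient: str,
                 required_approvals: int = 1):
        self.initiator = initiator
        self.amount = amount
        self.recipient = recipient
        self.required_approvals = required_approvals  # approvals beyond the initiator
        self.approvals = set()

    def approve(self, user: str) -> None:
        # Dual control: the initiator can never approve their own transfer.
        if user == self.initiator:
            raise PermissionError("initiator cannot approve their own transfer")
        self.approvals.add(user)

    def is_executable(self) -> bool:
        return len(self.approvals) >= self.required_approvals
```

The point is exactly the one made above: stolen credentials for a single user are no longer sufficient, because the attacker must compromise at least two accounts to move money.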
"Clive, can you come up with an attack vector on that?"
All too easily: it rests on the fundamental assumption that the computer you are going to put it on is not compromised in some way.
As we currently know, it is almost impossible to stop a user-level computer connected to the Internet from becoming infected one way or another, so as a security assumption it is weak.
Further, if the malware is specifically targeted at a particular bank's users via a phishing site etc., then it would know about the SSL cert and its likely location, and would make a hidden copy of it and any user access data required for its use (remember, we are talking about skilled attackers against so-so bank programmers).
Oh, and then of course there is the question of whether the SSL cert itself, and the software that uses it, are vulnerable in some way.
This is why the fundamental assumption I have made is that you need to establish a secure path across an insecure communications system, which as has been pointed out has a number of issues including MITM and "end point" problems.
Thus the client end point I have chosen to do this is the "user". That is, I have assumed everything from the user to the bank is untrusted, and therefore the authentication token must work through the user.
That is (briefly),
1, The user types the ID number of the basic transaction type into the token and the salient details specific to the transaction at that time.
2, The token then encodes these against the internal "client secret" and puts the result up for the user to type into the untrusted communication network.
3, The bank decodes the result and checks it for sanity, it then encodes the salient details against the account "server secret" and sends it back in a web page to the user along with a human (but not machine) readable version and authentication tag.
4, The user types the authentication tag into the token which checks it and provides a closing token, which the user sends back to the bank.
5, The bank checks the closing token and, if it's correct, carries out the transaction, or raises an alarm if not.
Unfortunately, as you can see even from a 1,000 ft overview, it is complicated; the actual details more so, as the protocol needs to be correct to prevent attacks on it. It also requires a high level of user input and output (which is probably the main HCI problem to overcome).
The idea is that if you can get secure two way end to end authentication not just of the user but of the transaction then most of the technical security issues have been dealt with.
However, in reality even the user cannot be trusted, as has been seen with phishing sites etc. But there is only so far technical solutions can go, and user liability has to take over.
As in all things the solution is a compromise and is not 100% (nor can it ever be) but it is a lot closer than most other solutions out there and at comparable cost to other token systems. It's main downside and the one "Marketing" will object to is the high user activity per transaction, there are however ways this can be reduced.
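For concreteness, the five steps can be simulated end to end. This is only an illustrative sketch: a truncated HMAC-SHA256 stands in for the unspecified encodings, and a single per-account secret (held by both the token and the bank) replaces the separate client and server secrets described above.

```python
import hmac
import hashlib

def tag(secret: bytes, *parts: str) -> str:
    """A short MAC tag a user could plausibly carry between token and keyboard."""
    msg = "|".join(parts).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

# One per-account secret, known only to the hardware token and the bank.
secret = b"per-account-secret"
details = ("01", "9500.00", "ACME Corp")  # transaction type ID plus salient details

# Steps 1-2: the user keys the details into the token, which emits a request
# code; the user types it into the (untrusted) banking web page.
request_code = tag(secret, "request", *details)

# Step 3: the bank recomputes the request code as a sanity check, then answers
# with its own authentication tag over the same details.
assert request_code == tag(secret, "request", *details)
bank_tag = tag(secret, "response", *details)

# Step 4: the user types the bank's tag into the token; the token verifies it
# and emits a closing code bound to the whole exchange.
assert bank_tag == tag(secret, "response", *details)
closing_code = tag(secret, "close", request_code, bank_tag)

# Step 5: the bank verifies the closing code before executing the transfer.
assert closing_code == tag(secret, "close", request_code, bank_tag)
```

A man in the middle who changes the amount or recipient cannot produce valid tags without the secret, which never leaves the token or the bank; the cost, as noted above, is three codes typed by hand per transaction.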
"Still not foolproof, but more than good enough - at least, against the kinds of attacks we're seeing today. Stay tuned."
And that is one of the complicating issues.
Currently we have got to the point where one or more hardware tokens are needed and the bad guys have shown themselves adept at working the weaknesses.
The bad guys now have very significant resources both financially and by motivation.
One of my points over the years is not just raising the bar but raising it far enough (sorry Moderator, I can hear the "oh no" ;)
It's a bit like learning to climb mountains: in general, nobody goes out without experience or equipment and gets to the top of Mount Everest. You start off with small challenges and work your way up in stages, acquiring the equipment and skills with time.
If, however, there were no small challenges to gain experience and confidence on, then the sport of mountaineering would probably not have started, and in all probability the equipment would not have been developed to enable a normal person to climb Mount Everest.
By just raising the bar each time, the banks have in effect made a rod for their own back by presenting one small challenge after another. And each time the attackers succeed, they become more skilled, develop better tools, and become much more difficult to stop. Also, their gains have caused them to set up a complex infrastructure to deal with the proceeds, which makes prosecuting them much more difficult.
Either the banks accept this war of attrition as part of the cost of business or they say OK how do we stop this effectively "ourselves" (because to be blunt nobody else is unless they see a profit in it).
Back in the 1970s and 1980s, "stick-up" bank robberies were quite common, almost everyday events; they are not these days. That is primarily not because law enforcement has rounded them all up, but because the banks, by installing the right security measures, raised the bar and lowered the rewards to the point where it was not worth the effort in most cases. That is not to say that bank robbery is a thing of the past; it's just that the smarter crooks have changed the low-hanging fruit they go after (it progressed through armoured cars, ATMs, etc.).
You cannot stop crime only make it more difficult to carry out in any given domain and thus either move it on or reduce the number of participants.
The easiest and in the long run cheapest solution for the banks is to invest in the required security to move the problem on and reduce the participants. This requires more than just raising the bar a little but a lot (unfortunately the nature of the Internet precludes reducing the rewards).
That being said, they cannot do it on their own: they do not possess the expertise to design the protocols for such systems (think about WEP and Chip-and-PIN as examples of what can and will happen if the proper steps are not taken). Nor do they possess the capabilities to manufacture such systems.
Perhaps the best thing they can do is, as in the past, come up with a framework within which standards can be developed, not just for interoperability but for security as well, and then leverage the market with their combined clout.
The danger is if it is left to a lot of little organisations to develop solutions you will get a lot of incompatible solutions that will eventually need to be brought together. And that bringing together will open so many security holes that the game will start afresh.
"Also, forget what I said about a netbook. Consider a small, dedicated hardware client with a basic banking UI, possibly something that would fit in a pocket or wallet. (Yes, this sounds very similar to a mobile phone, the key difference is that its only function is banking.) Less functionality and complexity makes it easier to build a secure system."
These have been available commercially for some years from several vendors, in a wide range of form factors (keyfob, wallet, desktop, even text-to-speech for the blind). For example, see the VASCO range at http://tinyurl.com/ntczoy (no - I don't work for them).
The problem is not getting this kind of technology to work; it's avoiding seeing it bypassed by the social engineering, phishing, etc. that was the subject of the original article, while avoiding annoying the end users. And of course it's not free, though it is rapidly getting cheaper.
I've been wondering what the whole point of the business bank account scam was for quite a while (i.e. post linked below) - thanks for shedding some light on the way fraudsters use dupes' accounts.
It surprises me a bit that Bruce does not repeat his usual advice on fraud protection, namely that it is not about authenticating persons but about authorising (trans)actions (if I am not mistaken; please correct me). But then again, the article already uses the term "authorized transfers".
@Clive, and everyone, especially Bruce:
"That being said, they cannot do it on their own: they do not possess the expertise to design the protocols for such systems (think of WEP and Chip-and-PIN as examples of what can and will happen if the proper steps are not taken)."
I'd like to see Bruce write an article (maybe he already has?) theorizing on why there have been so many instances of security-done-wrong by large organizations (industry consortia, government committees, working groups, standards bodies, etc) when the right way to do it was reasonably well known to at least a decent handful of knowledgeable practitioners.
"...theorizing on why there have been so many instances of security-done-wrong... ...when the right way to do it was reasonably well known to at least a decent handful of knowledgeable practitioners."
Two reasons spring to mind, the first being,
"a decent handful of knowledgeable practitioners"
Is another way of saying,
"you can count them on the fingers of one hand"
That is, they are not that plentiful, and there are by no means enough to go around for even critical infrastructure projects, let alone all the other projects that sit on top of the infrastructure.
Which means that when there is a shortage of pro players, people have to step in from another league, and that has its own problem.
To quote a very old saying,
"Pride comes before a fall"
If you are acknowledged as being more than a bit smart in one field and have a "can do" attitude, you can fall into the trap of believing that you are a "renaissance man" who has expertise in many fields. That is, that you have as much breadth as you have depth.
Unfortunately this is usually not the case; the "renaissance man" has never been thick on the ground, even before science was named as such.
The design of security protocols is probably one of the greatest "hidden challenges" of our time. It requires not just great technical insight and skill, but the ability to judge risk judiciously whilst also being able to see how to break things in interesting and unexpected ways.
And believe me, you need sensitive "guts" and "short neck hairs", and the ability to find out what is giving you "that funny feeling" or sixth sense that things are not right. Bruce sometimes calls it "feeling hinky", but it is a lot more than that.
Oh, and not being funny, but the chances are quite high that if you are not left-handed or mildly autistic, you probably don't have some of the skills needed.
This is because, by and large, lefties and people with Asperger's (ASD) tend to live inside their own heads rather than inside the heads of others, and this gives them an unusual perspective.
Interestingly, I'm not the only person who thinks this: there is a European company setting up an organisation in the UK specifically to recruit people with high-functioning Asperger's for technical security work (apparently we have more of them per head of population than other parts of the world).
I have been in the business of securing people against identity theft, and I can assure you I meet people who have lost everything from bank balances to reputation. The context is India. The problem is severe, and digital certificates do not solve it; the reason is that customers do not know what to check in them. We need a customer tool to help identify genuine websites, not simply a display of digital certificates.
I am open to any kind of academic debate on this.
@ Clive Robinson:
This situation is why the Separation Kernels are so useful. I'm currently building designs on the VIA ARTIGO board that solve some of these problems. Essentially, my VPN's and crypto appliances follow these rules to reduce attack surface:
1) Trusted boot. The BIOS boots the core kernel and drivers from a read-only device, checks them against a hardcoded SHA-1 hash, then passes control. The kernel loads additional modules, checking that they are signed.
2) Privileges. Only the kernel has Ring 0. All other software runs in Ring 3, with resource use enforced by capabilities. A simple processor is used to avoid Intel's known bugs, while trying to avoid onboard DMA hardware. If I use Intel, I use the IOMMU to prevent DMA attacks, and optionally VT for partitioning.
3) Partitioning. Whether on a separation kernel, SELinux, or a slimmed OpenBSD, I partition the system into least-privileged components. A high-assurance Partition Communication System (PCS) or security kernel enforces the communication rules. This is the approach used in the OP secure web browser and in modern military SKPP systems. It's damage control.
4) Isolated comms. Any connection to the outside world is always in a separate partition. All authentication and the like is done by reactive components; keys are never exposed, and are erased from memory as soon as they are used. A trusted path, facilitated by the PCS, prevents keylogging of the PIN/password.
5) Boot modes. The system can boot in different modes: production, update, and restore. Each mode has different access-control policies, and in update mode the external-facing partition is checked for integrity before data is accepted.
This scheme can be used in many different application areas. It aims to: limit damage; isolate external comms; ensure the integrity of the kernel and userland processes; give extra protection to updates; allow no read access to secret keys; provide a trusted path for secret entry; prevent DMA and processor attacks; allow formal verification of the information-flow policy; and greatly reduce the TCB of any one component.
Many secure systems have withstood attacks using some of these principles. My goal is to use all of them with commercial or seL4 kernels and cheap, simple embedded hardware to provide a solid foundation for secure appliances. Like, say, a transaction authentication system. ;)
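The trusted-boot check in step 1 can be illustrated with a short sketch. This is my own simplified model, not the commenter's actual VIA ARTIGO implementation; the image bytes and the digest here are placeholders.

```python
import hashlib
import hmac

# Stand-in for the kernel+driver image. In the real design the trusted
# digest would be hardcoded into a read-only boot stage, not computed
# at runtime as it is here for the demo.
GOOD_IMAGE = b"\x7fELF...core kernel and drivers..."
TRUSTED_SHA1 = hashlib.sha1(GOOD_IMAGE).hexdigest()

def verify_boot(image: bytes, trusted_digest: str = TRUSTED_SHA1) -> bool:
    """BIOS-stage check: pass control only if the measured hash matches."""
    measured = hashlib.sha1(image).hexdigest()
    # Constant-time comparison, so the check itself leaks nothing useful.
    return hmac.compare_digest(measured, trusted_digest)

print(verify_boot(GOOD_IMAGE))                # True: boot proceeds
print(verify_boot(GOOD_IMAGE + b"\x90\x90"))  # False: image was tampered with
```

The same measure-then-compare pattern extends to the signed kernel modules in step 1, with signature verification in place of a hardcoded hash.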
I log in to my bank in the Netherlands with a user name and password. The bank sends a 6-digit code by SMS to my cellphone after receiving the instructions for a transfer. The transfer is only executed after I have entered that code on the bank's web form.
Can someone analyse whether this is an adequate barrier against this type of attack? Even if my password is stolen, my cellphone is still in my hands.
@ Eric Ferguson,
"Can someone analyse whether this is an adequate barrier against this type of attack?"
There is not enough information to say, but I doubt it from your description.
What you need is some method to simultaneously authenticate not just the transaction in both directions, but both entities as well, and it has to be done in a secure way, which would involve an out-of-band channel that is itself secure.
From what you have said, it does not do this, so the answer is very probably no.
Consider an active "man in the middle" attack: even though you have logged into the bank, and the bank has sent you some data by which you can authenticate it, nothing precludes the attacker passively passing themselves off to you as the bank, and as you to the bank, simply by forwarding the communication in the required direction.
However, when it comes to the transaction, they can replace the details you type in with other details, and send you what you expect to see from the bank for what you typed in. Neither you nor the bank would be any the wiser.
The transaction itself needs to be authenticated in both directions, in a manner an attacker cannot realistically fake. Due to the limitations of human beings, this would typically involve a token into which you type the transaction and which then displays a result that you type into the bank's web page. The bank would send back a data string that you would then type into the token, which in turn would produce another data string you would type into the bank's web page and send.
You and the token provide the method of authentication on the transaction; provided you and the token do not get "attacked", and the token is cryptographically secure, there is a reasonable chance it cannot be attacked.
There is nothing in what you said to indicate that there is a process by which you and the bank authenticate the transaction to each other in a secure way.
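To make the difference concrete, here is a minimal sketch of binding a short code to the transaction details rather than to the session. It is my own illustration (the shared key, field names, and truncation scheme are invented for the example, not any bank's actual protocol): a code computed this way is useless to a man in the middle who alters the amount or destination.

```python
import hashlib
import hmac

def auth_code(key: bytes, direction: str, amount: str, dest: str) -> str:
    """6-digit code bound to the transaction details and direction."""
    msg = f"{direction}|{amount}|{dest}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    # Truncate to something a human can read off a token and type in.
    return str(int.from_bytes(digest[:4], "big") % 10**6).zfill(6)

KEY = b"secret shared by bank and token"  # provisioned inside the token

# The customer types the transaction into the token and copies the code
# into the bank's web form.
code = auth_code(KEY, "customer->bank", "300.00", "Capital One")

# The bank recomputes the code over the details it actually received;
# an altered transfer produces a different code, so tampering is caught.
assert auth_code(KEY, "customer->bank", "300.00", "Capital One") == code
assert auth_code(KEY, "customer->bank", "10000.00", "Ivan") != code

# For the reverse direction, the bank sends its own code over the same
# details, and the customer types it into the token for checking.
bank_code = auth_code(KEY, "bank->customer", "300.00", "Capital One")
```

An SMS code that is independent of the amount and destination provides none of this binding, which is why the description above is not enough on its own.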
Not an expert at all.
But thinking about this problem.
How about an auto-dialer system on the bank's end, with a pre-arranged code word or phrase?
So I am sitting at my computer, by my phone, authorizing a bank transfer.
As soon as I start the process, an auto-dialer calls me and asks for the code word or phrase.
If I don't give the right code, the transaction does not go through.
Just a thought in passing.
I don't know about the cost but it seems to me this would be pretty hard to circumvent.
I see someone has already come up with that.
Seems like a good idea to me.
Now I suppose, when the problem is that someone can intercept my transmission to the bank and change it, this would not work, unless perhaps the voice call could provide the details.
For example: "Mr. S, you are wiring 10,000 to Ivan in Russia, whereas I was only wiring 300 to pay my credit card bill to Capital One in Delaware."
So via the phone you would not only be authenticating a transfer but confirming the main details.
That it seems to me would be hard if not impossible to circumvent.
Because there is no way that I can see that a hacker could monitor both my phone conversation and my internet connection.
It amazes me how many security experts keep recommending "authentication" as the solution to this problem. When the end-user's computer is compromised it doesn't matter how strong or accurate the authentication is because this is a "session" attack! The attacker piggybacks on your authenticated session. Even OTP 2-factor has been compromised.
"It amazes me how many security experts keep recommending "authentication" as the solution to this problem."
Because "authentication" is currently the only solution that will work.
The problem is not with authentication as such, but with the way people currently implement it.
The current insecure authentication implementations are why you can say,
"When the end-user's computer is compromised it doesn't matter how strong or accurate the authentication is because this is a "session" attack! The attacker piggybacks on your authenticated session."
As you note the first big mistake designers and implementers keep making is to "authenticate a session" when they should be "authenticating a transaction". Authenticating the transaction needs to be done by both parties and the final commit confirmation needs to be secure.
Secondly, the other big issue is where in the communications channel the authentication is carried out.
If the authentication is done in the application on the PC, then the OS and display drivers sit outside the authenticated transaction. These can be compromised; I have been saying this since the mid-1990s. So, as you can imagine, I (unlike many others) was not surprised when it happened as a successful attack in the wild.
Thus, for the authentication to work, it has to include the human. And as the human mind is not really of much use at doing secure authentication, it has to be done "after the human".
That is, the human becomes part of the communications channel to another device that an attacker cannot (easily) get at. The human types the transaction details into a "token", and the token displays a string that the user then types into the computer. The reply from the bank gets displayed on the PC screen, and the user types this into the token. The token checks the authentication, indicates whether it is good or not, and also displays another string that the user then types into the PC to confirm that the transaction the bank has is correct.
There are two issues arising from this,
The first is the security of the token. It must not in any way, at any time, bypass the human, as this puts them outside of the authentication chain. Nearly all token systems fail in this respect, especially the likes of smart cards. Also, the token needs to be secure in its design and effectively immutable (this rules out the use of modern smart phones).
The second problem is the human, they are not going to like typing in everything three times...
And it is trying to resolve this issue which is going to almost always give the attacker an opening...
I thought of a way in which the typing burden could be reduced, by using the difference between a human's ability to recognise a distorted image and a computer's limited OCR ability.
I had failed to realise there was a potential way around this: attackers have simply used humans, paying "sweat shop rates" to people in various countries around the world to break CAPTCHAs.
I was, as they say, "close, but no cigar"...
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.