Court Ruling on "Reasonable" Electronic Banking Security

One of the pleasant side effects of being too busy to write longer blog posts is that—if I wait long enough—someone else writes what I would have wanted to.

The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough—and Patco is stuck paying for the fraud that was a result of that substandard security. The details are important, and Brian Krebs has written an excellent summary.

EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.

Posted on June 17, 2011 at 12:09 PM • 104 Comments

Comments

BillK June 17, 2011 12:30 PM

And Krebs also writes about a case going in the opposite direction in a Michigan court.

http://krebsonsecurity.com/2011/06/court-favors-small-business-in-ebanking-fraud-case/

Court Favors Small Business in eBanking Fraud Case

Comerica Bank is liable for more than a half a million dollars stolen in a 2009 cyber heist against a small business, a Michigan court ruled. Experts say the decision is likely to spur additional lawsuits from other victims that have been closely watching the case.

Nicholas Bohm June 17, 2011 1:22 PM

Arguably the court asked the wrong question (perhaps because the customer made the wrong claim).

The question ought not to be “Has the bank used commercially reasonable security procedures?” but “Has the bank proved either that the customer authorised the charges to its account or that the customer was at fault in allowing or enabling the fraudulent transactions to be made, so as to be responsible for the losses?”

If the bank sets up a system, however “reasonable”, which exposes customers’ accounts to fraud without fault on the part of the customers, then the bank should carry the loss. That would give banks the proper incentive to set up safe systems.

Nick P June 17, 2011 1:54 PM

The interesting thing is that it’s not very expensive to set up a transfer mechanism that would defeat most malware. The system might use a dedicated, small PC with very hardened software, trusted boot, attestation, and the bank’s public key. The system would connect only to the bank, run only the transfer-related applications, and accept data from external devices in a low-risk way. If two-factor authentication were used, a token generator or smart card could be added. The customers would pay for the cost of the computer.

Banks just don’t care about security. They’re not liable for these problems. The courts need to rule that banks are liable unless they implement security measures that can actually counter at least common attacks. That would be “commercially reasonable” for a service that advertises itself as “safe” or “secure.” I also consider the current practice to be fraud.

Marcos June 17, 2011 2:05 PM

@Nick

That’s too expensive and user-unfriendly. Banks need only send code tables to their clients, on paper, of course. During a transaction, the bank presents a code, and the client looks at the table and inputs the corresponding code. It won’t stop a well-orchestrated MITM, but those are rare.

The cost is a few hundred bucks of programming (doing it right the first time is only a few hundred dollars more expensive than doing it wrong), plus some mail sent to the clients.
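
For what it’s worth, here’s a minimal Python sketch of how a bank might manage such a paper code table (numbered one-time index/response pairs). The table size, code length, and function names are my own assumptions, not anything a real bank publishes:

```python
import secrets

def issue_code_table(entries=100, code_len=6):
    """Generate a client's paper code table: index -> one-time response code.
    The bank keeps a copy; the client gets it printed on paper."""
    return {i: "".join(secrets.choice("0123456789") for _ in range(code_len))
            for i in range(1, entries + 1)}

def challenge(table, used):
    """Pick an unused index to show the client during a transaction."""
    unused = [i for i in table if i not in used]
    if not unused:
        raise RuntimeError("code table exhausted; issue a new one")
    return secrets.choice(unused)

def verify(table, used, index, response):
    """Check the client's typed response and burn the index (one-time use)."""
    ok = index not in used and secrets.compare_digest(table[index], response)
    used.add(index)  # burn the index even on failure, to limit guessing
    return ok

# Example round trip
table, used = issue_code_table(), set()
idx = challenge(table, used)                 # bank displays: "enter code number idx"
print(verify(table, used, idx, table[idx]))  # client reads code idx off the paper -> True
```

Note this only proves the person holds the paper table; as Nick P points out below, a compromised PC can still alter the transaction that the code ends up authorizing.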

Steve K June 17, 2011 2:14 PM

I’m no fan of weak security, but I do agree with this ruling.

It isn’t up to the courts to write legislation (though they often do). When the US Congress passed a law decades ago making banks responsible for credit card losses over $50, the banks responded with much better security models. Congress could just as easily write legislation making banks responsible for fraudulent transfers that are outside normal practices for a particular business. Then the problem would pretty much go away. It would give incentives for strong two-factor (not one factor twice) authentication, such as tokens or smart cards.

When courts look at what could have been done (in hindsight) and then make rulings with the force of law, you wind up with some pretty bad law and no easy way to rectify it.

The solution here belongs with the legislatures, not the courts.

Steve K (retired attorney)

Thomas Ryan June 17, 2011 3:08 PM

Here is the irony behind this: People’s United Bank uses banking software owned and operated by Fidelity (FNIS) (ibanking-services.com). Legally, shouldn’t some of the risk be on the service provider? Also, the court’s lack of understanding of technology, or the lawyers’ inability to explain technology in layman’s terms, essentially screwed Patco Construction in that court case.

B. D. Johnson June 17, 2011 3:27 PM

It’s kind of disappointing that, given how important online banking is nowadays, banks aren’t embracing security.

To log into my World of Warcraft account I have to provide a username, a password, and a number off a token. The only additional protection on my bank account is an image shown to prove the site knows who you are, which only prevents phishing attempts already prevented by basic safe-browsing practices.

Even with that image, I wonder: if you set up a spoof page and just replaced the image with the “broken image” icon and some generic message along the lines of “authentication server load exceeded,” how many people would still log in?

aikimark June 17, 2011 3:43 PM

I think the question should have been “Is your commercial client account security as strong as your own security?”

That is, does the bank give its commercial account holders a weaker, more flawed security scheme than it uses for itself?

When I first read the Krebs essay, I thought there would be a business opportunity as a result of this court ruling. One could rate the banks and then offer insurance for business account holders, based on which bank and any compensating security used by the business.

Mauro S June 17, 2011 4:28 PM

Fortunately, in my jurisdiction (Brazil) the justice system is not so servile to big companies; here the banks must be much more careful so as not to lose money, since it’s up to them to prove that you were you, not the other way around. And since they are the ones who 1) cut huge costs with online banking and 2) can take countermeasures, this puts the incentive in the right place. The way things are in the US, the banks profit when their customers are fleeced.

BTW, one-time password tokens – supplied by the banks – are common here.

Gweihir June 17, 2011 4:31 PM

Maybe I am blind, but this looks like the same one-factor authentication used twice, i.e. two instances of “something you know.” That is completely unacceptable for online banking.

Nick P June 17, 2011 4:54 PM

@ Marcos

“That’s too expensive and user unfriendly.”

Most of the losses reported are in the six digit range. That’s more expensive than spending a few hundred or a few grand once every few years. This is one of the few situations where medium to high assurance solutions make financial sense.

“Banks need only to send code tables for their clients, on paper, of course. ”

That won’t work. You’re inputting the codes into a hijacked computer and trusting that what it’s displaying to you is correct. There’s already malware in the wild that defeats two-factor authentication codes for specific banks. A standardized, widely deployed two-factor technology would probably be targeted by these groups. Any scheme that uses the user’s computer to enter transaction parameters must either secure the computer from attack or treat it as untrusted. The scheme I described does the former.

Note that the solution I posted is not my favorite or preferred scheme: it’s just one of many approaches I’ve come up with just for kicks. I like toying with new ideas. The approach I commonly push, which I’ve described on this blog previously, is to use a dedicated appliance to display the transaction to the user and authenticate it. Essentially, a trusted path, no hardware bypass, and protection of crypto keys.

Basically, the user sets up the transaction in their GUI-based, malware-loving OS. They hit Send or whatever; the data is then formatted into an easily parsed text form and sent to the appliance over a non-DMA connection. The appliance loads it, displays it (a KVM switch can eliminate the need for two monitors), and asks the user for confirmation. The user might insert a token, enter a password, or just say yes, depending on how it’s designed. The system then cryptographically signs the transaction with its private key, sends that to the PC over the non-DMA link, and the PC sends it to the bank.

Using this scheme, the PC is untrusted because it can’t perform transactions by itself, it can’t authorize transactions, it can’t modify them, and it can’t trick the user by spoofing transaction data. Malware on the PC cannot steal the money. The appliance can be really cheap because it basically needs the capabilities of a thin client plus crypto acceleration. A number of x86 and POWER processors already provide this. It’s also easy to use: the user starts in an easy-to-use app, hits a button, waits a second, visually compares the transactions, and authorizes it on the other machine. What do you think about this scheme?
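
If it helps make this concrete, here is a rough Python sketch of the appliance-side flow. The key=value text format, the field names, and the use of Ed25519 are illustrative assumptions on my part, not a spec:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ALLOWED_FIELDS = {"from_account", "to_account", "amount", "currency", "memo"}

def parse_transaction(raw: bytes) -> dict:
    """Strictly parse the text form received over the non-DMA link."""
    if len(raw) > 1024:                              # hard upper bound on input size
        raise ValueError("transaction too large")
    txn = {}
    for line in raw.decode("ascii").splitlines():    # non-ASCII input raises and is rejected
        key, _, value = line.partition("=")
        if key not in ALLOWED_FIELDS or not value.isprintable():
            raise ValueError(f"bad field: {key!r}")
        txn[key] = value
    return txn

def confirm_and_sign(raw: bytes, key: Ed25519PrivateKey) -> bytes:
    """Render the parsed transaction on the appliance's own display,
    then sign the exact bytes the user approved."""
    txn = parse_transaction(raw)
    for k, v in sorted(txn.items()):
        print(f"{k}: {v}")                           # stands in for the trusted display
    if input("Authorize this transfer? (yes/no) ") != "yes":
        raise PermissionError("user rejected transaction")
    return key.sign(raw)                             # signature goes back to the PC, then the bank

# Key generated at provisioning time; the public half is registered with the bank.
device_key = Ed25519PrivateKey.generate()
sig = confirm_and_sign(b"from_account=123\nto_account=456\namount=100.00\ncurrency=USD",
                       device_key)
```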

(Note: IBM’s recent ZTIC takes this approach and makes it extremely simple. It’s more limited in the types of authentication and functionality it can have, but it’s good for individual wire transfers. See the link below to see how easily a scheme like mine can be implemented.)

IBM Zone Trusted Information Channel
http://www.zurich.ibm.com/ztic/

Richard Steven Hack June 17, 2011 6:20 PM

Nick: That approach sounds good. It would only cost a few hundred dollars for the appliance, well within the reach of even small business.

I assume the key is that the transaction is formatted in a strictly text form which cannot be compromised. But how do you verify that the text file does not contain an invisible attachment added by malware on the PC which can compromise the appliance?

Perhaps there should be a second step or device. The PC puts the transaction on a write-once medium such as a CD. The CD is removed from the PC and inserted into the second device. That device does one thing and one only: it scans the transaction and re-formats it into pure text.

It writes it to another write-once medium. That medium is then inserted into the transaction crypto device which does the encryption and writes it to another write-once medium. That medium is then inserted in the untrusted PC and the data sent from there.

Actually I’d prefer the crypto appliance do the sending over an encrypted channel to the bank. No matter what you do with the resulting encrypted transaction, sending it from the untrusted PC allows for the possibility of some sort of spoofing of the transaction details. I’d prefer to trust the encryption appliance to do the actual data transmission.

Also, you assume the non-DMA channel cannot be compromised; how do you verify this? This is why I suggest using a write-once physical medium as the communication model between the untrusted system and the trusted system. Sneaker-net!

Remember – the appliance is a computing device. It will have security flaws. So ANY communication with the device from an untrusted device other than direct input from the user is likely to have a channel allowing compromise.

As for the two monitors, a simple embedded LCD screen in the appliance would be sufficient.

Richard Steven Hack June 17, 2011 6:23 PM

Oh, and as far as usability is concerned, the additional step of physically transferring the write-once media is not really an issue. Companies have all sorts of inefficient physical processes going on in their computing environments.

Remember, we used to have to back up on diskettes. Back when I was working on an IBM System/34, we had to load ten 8-inch diskettes into a jukebox-like device to do our weekly backups. Moving a CD from one PC to the appliance isn’t that bad. Employees can handle that sort of stupidity; they do it all the time.

Nick P June 17, 2011 8:53 PM

@ Richard Steven Hack

“Nick: That approach sounds good. It would only cost a few hundred dollars for the appliance, well within the reach of even small business.”

Thanks for the feedback. That’s the idea. It’s affordable, relatively simple, and the complexity is low enough that high assurance design techniques may be used.

“But how do you verify that the text file does not contain an invisible attachment added by malware on the PC which can compromise the appliance?”

Input validation is applied to the text. A transaction set that fails input validation instantly kills the whole process, is possibly logged to a no-execute medium, and is zeroized from memory. The user might be notified of the failure. Remember that, since this isn’t just a crypto processor, the banks can put extra functionality in there to help with stuff like this. For instance, the user might “register” valid account numbers for payroll with the appliance. Later, during a transfer attempt, the software might check that the destination account is on that list. There are other tricks, too.

The main defense is the visual inspection step. Any data the appliance receives is rendered as text for the user to verify, so extra tacked-on data will be quite obvious: it will look like a bunch of strange characters. Slight changes, like a single-digit difference, might be harder to detect. We could add a feature whereby the hash of the transaction details is displayed on the user’s PC and on the appliance during verification. If it doesn’t match, the user would be instructed to contact support or an administrator to determine whether it’s a severe glitch or a compromise.
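
A quick sketch of that hash-comparison idea; SHA-256 and the four-group formatting are arbitrary choices for illustration:

```python
import hashlib

def short_fingerprint(canonical_txn: bytes, groups=4) -> str:
    """Derive a short, human-comparable code from the canonical transaction bytes.
    Both the PC and the appliance display this; a mismatch means tampering in transit."""
    digest = hashlib.sha256(canonical_txn).hexdigest().upper()
    return "-".join(digest[i * 4:(i + 1) * 4] for i in range(groups))  # e.g. "3F9A-0C41-..."

print(short_fingerprint(b"from_account=123\nto_account=456\namount=100.00"))
```

Truncating the digest keeps it readable at the cost of some collision resistance; how many groups to show is a usability trade-off.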

“That device does one thing and one only: it scans the transaction and re-formats it into pure text.”

You can do that in software. My design would be implemented with medium- or high-assurance design principles. That means the input validation is a separate software process. It takes input from the communications stack, copies it into a buffer (a fixed amount of copying to prevent overflows), validates it, converts it, stores it in a new buffer, repeats until the transaction data is converted, and notifies the next component. There’s a similar component and strategy for sending the signed transaction back to the main machine.

The reason I use a design like this is that it allows us to leverage a separation kernel, like INTEGRITY-178B, to do much of the heavy lifting. So long as the IPC communications policy is correct and the components are functionally correct, the overall system is secure [enough]. For instance, I can put the buffers, or the components containing them, in different address spaces, with any attempt to interact with those buffers invoking the kernel for a security check to prevent unauthorized information flows. This turns the situation from “Are all of my software functions’ potential interactions secure?” into “Is the kernel secure and my access control lattice correct?” Much easier problem, especially with a certified kernel. Isolating the communication part from the rest of the system also lets me use simple tricks to prevent externally visible covert timing channels. I hate timing channels with a passion. They are insidious and difficult to find in complex comms stacks.

“No matter what you do with the resulting encrypted transaction, sending it from the untrusted PC allows for the possibility of some sort of spoofing of the transaction details.”

The trusted display on the appliance prevents spoofing: it signs what the user sees. Any modifications are caught by verifying the transaction details against the signature. Malware would have to access the private key on the appliance to forge a signature, and the bank gets the appliance’s public key during user registration or when issuing the device. In the abstract, this is basically the same method products like SafeNet’s Luna CA4 use to protect root keys while signing other CAs. I’ve just expanded it to use cheaper hardware and have more functionality. TU Dresden’s e-commerce Nizza architecture demo also uses a similar approach.
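
On the bank’s side, the check could be as small as this sketch, reusing the Ed25519 illustration from above and a public key registered at enrollment:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def accept_transaction(raw_txn: bytes, signature: bytes,
                       registered_key: Ed25519PublicKey) -> bool:
    """Process the transfer only if it was signed by this customer's appliance."""
    try:
        registered_key.verify(signature, raw_txn)  # raises if forged or modified in transit
    except InvalidSignature:
        return False   # malware on the PC cannot produce a valid signature without the key
    return True
```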

“I’d prefer to trust the encryption appliance to do the actual data transmission.”

I warn you NEVER to do that if the comms stack and the appliance’s software run on the same processor or SoC. Unlike a serial/ATA driver with a simple comms stack, adding Ethernet + TCP/IP + VPN to the appliance drives complexity way up. Vulnerabilities in this complex software might lead to compromises through their interactions with the hardware, drivers, or OS. Simple is better. So, I leave that whole mess outside of my appliance in exchange for a very simple, maybe mathematically verifiable, comms stack. If the networking on the main PC is taken out, the user probably shouldn’t be attempting transactions anyway: something is really wrong at that point, and they should at least boot with a Linux LiveCD. The system might come with one as a backup option, and it would include the relevant public keys, interface software, etc. A support contract might provide regular, updated LiveCDs for users by mail, or collectable at the bank.

Although, we could tweak the design to allow a backup (or simply different) system to be plugged in mid-transaction. The user would use the non-default option, “authenticate but don’t send yet.” The user would plug in the other networked system, wait for the connection, and then hit “Send.” This tweak still leaves the other system untrusted, present only for availability.

Actually, while I’m on this brainstorming tangent, your suggestion gave me an idea that would allow us to do what you say and still meet my security requirements. We could put networking on the device without putting it on the device. (Paradox, huh?) I’ve done this in previous designs, and Rockwell Collins does it in their Turnstile cross-domain guard. Basically, the device is two devices or boards: one high-quality board doing the security-critical stuff and a really cheap, replaceable networking board. (The user interface portions MUST be on the high-assurance board or a separate, cheap board for security reasons.) The networking board would have Linux/BSD, networking, VPN, etc. and simply relay information between the high-assurance appliance and the bank. That meets both our goals.

I usually use this trick of offloading something onto another board when I want a COTS functionality, like RAID storage or complex networking, in a high assurance system. Isolate the COTS functionality onto other boards, connect them to the trusted system with non-DMA hardware, and interface with a careful protocol. It can’t be used all the time for performance reasons but it’s often effective. If the system allows for medium instead of high robustness, DO-178B Ethernet and TCP/UDP/IP stacks might be used to meet higher performance requirements with a low probability of defects.

“Remember – the appliance is a computing device. It will have security flaws. So ANY communication with the device from an untrusted device other than direct input from the user is likely to have a channel allowing compromise.”

We definitely share the same concern, which is why we’re using the most robust hardware and software platforms we can get. My initial design employs a Freescale or Curtiss-Wright PPC board with no known security-critical errata, an increased-assurance BIOS/firmware, and a DO-178B or EAL6+ certified kernel, drivers, and filesystem. We can get all that without extra effort if we use one of the existing EAL6+ hardware and software configurations. High-assurance techniques would be applied to the interfaces between trusted and untrusted software, especially the comms stack.

Several systems designed the way I would design the comms stack have achieved high-assurance certification, survived years of NSA pen testing, etc. They were also fielded for years without any known compromises or security-critical flaws found. So it definitely can be done. It’s just hard work. Most of that was achieved with mid-80s to mid-90s technology. I’m sure we can do better now. Actually, the L4.verified project only took a few million dollars to do most of what used to cost $15-25 million. Development tools, from IDEs to bug-hunting tools, are also light years ahead of what they had back then. It’s also been done recently: INTEGRITY-178B and MULTOS were both certified after 2000, to EAL6+ and E6/EAL7 respectively, with Caernarvon preparing for EAL7 evaluation.

“Moving a CD from one PC to the appliance isn’t that bad. ”

I totally agree. It’s very usable. The issue I have is solely with its complexity and firmware-level attack issues. Serial or non-DMA ATA plus a simple comms protocol stack is much simpler than an ATA/IDE driver plus a CD-ROM driver. This makes it easier to get right. Also, the hardware is cheaper and smaller, and it lasts longer. The simpler design could also use a custom hardware or firmware implementation built with the same verified process if clients called for it. Verifying a CD-ROM interface and implementation is another matter entirely. The firmware worry refers to attacks like Heasman’s on device firmware. A simpler mechanism can be implemented in a way that resists these attacks, especially since there’s no device between the PC and the appliance: just a wire whose input is untrusted and scrutinized.

Disclaimer: I didn’t mean to drop a Clive-sized post, but I figured spelling it out is useful as we’re basically designing a product right now. I figured if I posted what I’ve already gathered about transaction signing, you guys might spot areas of improvement in functionality, assurance or cost reduction.

tommy June 17, 2011 9:14 PM

I must be the only one here who’s missing something.

“Patco said cyber thieves used the ZeuS trojan to steal its online banking credentials….”

So, the plaintiff, Patco, had weak security, allowing their system to be infected by a Trojan that stole their credentials, then they sue the bank for honoring those exact credentials?

Why isn’t anyone examining Patco’s system to find how the trojan got there, and the million other insecurities in their setup?

If my house key is stolen (from underneath the potted plant by the front door) and a burglar uses it to break into my home, should I be blaming the locksmith because the lock opened when the proper key was inserted?

Not by any means saying that present bank security isn’t a joke (more on that in a minute); actually, they’re the worst, as I’ve said before. But Patco, like all of us, should be providing our own security as much as possible, since we can never count on another party to secure us.

If the bank accepted the stupid challenge questions only, without a proper password, that’s a different story (and that’s how Sarah Palin’s Yahoo e-mail account was hacked). But like Bruce, my typical answer to a challenge question is:

Q. “What high school did you go to?”

A. “+Hg;u?_-]FXzk”

No one has ever successfully searched public records, or successfully guessed, any of those.

Wall Of Shame:
https://secure.ingdirect.com/myaccount/INGDirect/login.vm?locale=en_US&device=web&userType=Client

A little independent research: They announced a few months ago that effective June 15, 2011, they would no longer allow access with Firefox 2.x, whose support ended at the end of 2008. Understandable.

Out of curiosity, I fired up the last version of the 2 series, 2.0.0.20, tried to log in, and sure enough, the page wouldn’t even display.

Then I reset the useragent of the same browser to Firefox 3.6.17, and went there. Surprise: full login, full access.

They can be fooled by just spoofing your own useragent, an easy about:config setting that requires no more skills than knowing what a useragent is and where the setting is located? (Mozilla Support will gladly help there.)

Banks, the dumbest on security matters? I rest my case.

Nick P June 17, 2011 9:31 PM

“Why isn’t anyone examining Patco’s system to find how the trojan got there, and the million other insecurities in their setup?”

There’s no doubt they are partly responsible. However, I think most of us are siding with them because the banks are committing these evils:

  1. Forcing the use of weak authentication methods that don’t counter known malware (and supporting it with nonsense legal claims about what constitutes two-factor authentication).
  2. Telling companies that their money will be safe because the banks’ systems are secure and the authentication method is safe.
  3. Banks not informing customers of threats and countermeasures. By contrast, Krebs on Security has done more in that regard than any of the banks offering the online banking schemes.
  4. Banks aren’t making options available, like better transaction authentication, that make these remote attacks harder or impossible. RSH and I have already worked out a design whose final unit cost would be $50-$2,500 depending on the features and level of assurance. Spend, at most, $2500 to prevent a loss of $100,000+ regardless of flaws in your PC or software? Problem solved at bargain price.

So, banks are being both careless and deceptive. Additionally, they are mandating weak authentication schemes. This is a regulated industry. Many of us think courts (via liability) or government (via regulation) should force banks to adopt a higher standard of commercial security, so that small businesses aren’t wiped out left and right by malicious individuals. There are actually many solutions on the market already, from the likes of Barclays. There’s just no effort on the part of most banks to even make them an option, much less mandate them.

tommy June 17, 2011 10:19 PM

@ Nick P.:

I agree with (2) and (4). (3) will never happen, because the banks themselves are ignorant.

(1) isn’t the age-old problem of “information inequality”, it’s the modern problem of “equal ignorance”. Customers and banks are both ignorant. I don’t want to replay my phone conversations with financial institutions, even with their “online tech support” reps, but it’s pathetic.

Also, nothing counters all malware known and unknown, although I will look at Richard’s and your system at leisure.

US-CERT and NSA have both recommended the Firefox+NoScript combination, but somehow the media haven’t bothered to publicize that. “Everyone” knows that you click the little blue “e” to browse, right?

NSA reference:
http://hackademix.net/2011/05/04/nsa-and-middle-east-rebels-agree/

I can understand wanting banks to warn customers about likely bank-related attacks, such as phishing, which most do. But you can’t hold banks accountable for inherently weak OSs; why not hold the OS vendors (all, not just MS) liable for these?

Something as simple as a business owner or executive getting to know their local brick-and-mortar bank people, providing landline (not cell) phone numbers for verification, and setting a limit (“Anything over X dollars, please call one of these authorized numbers and names to verify”) would go a long way. My credit card company has done that when something triggered their own “risk alarm,” and props to them. (It was a legit charge.)

Until the golden day that never comes, perhaps transferring hundreds of thousands, or millions, of dollars over an Internet connection to a place and people you’ve never seen, thousands of miles away or on another continent, is just a bad idea. I remember in the 90s it was predicted that “all B&M stores will disappear.” Ha. More than ever, the golden rule of financial institutions is “know your customer.”

I agree both parties are partially at fault. I just couldn’t understand why neither the article, the blog post, nor the comments mentioned the customer’s part of the blame.

Richard Steven Hack June 18, 2011 2:54 AM

Nick: OK, I can see where you’re coming from in terms of simplified hardware and software being more secure. That makes sense. So I’d agree the CD thing would be unnecessary if you could ensure the security of your comms stack.

I’d be inclined to go for your suggestion of having two separate pieces of hardware in the system, one for handling the transaction and crypto, the other for handling data comm to the bank.

The reason I suggested having the appliance do the data comm to the bank is not that I’m concerned about the encrypted transaction being messed with on the untrusted system after it’s been signed, but that the untrusted system might somehow have a way to create a completely separate, damaging transaction (one for more money to a different destination, say) and replace the encrypted one with its own. Having the appliance do the sending would make that more difficult, since there’d be no way to manipulate the session information (except MITM, maybe).

So the initial session could be set up on the untrusted PC, and then the appliance could receive the transaction, do its thing, then send it over a much more robust – and even totally separate – comm channel to the bank.

Sort of like separating a home PC’s Internet access from the PC’s VPN access to a corporate network. That combination is usually considered a security risk because a compromise of the PC via the untrusted Internet channel could place the VPN channel at risk.

But in this case the separation would be done precisely because of the lack of trust of the PC Internet access. The appliance takes care of the problem of compromising a single machine with both Internet and VPN by splitting off the VPN part into another device separate from the potentially compromised PC.

Then the only way to compromise the transaction externally to the appliance would be to compromise the VPN connection between the appliance and the bank (or on the bank end of the transaction.)

Daniel June 18, 2011 11:56 PM

I do agree with the general thesis that banks practice weak security, mostly because they can. On the other hand, I think this specific case is more difficult than some imagine. First, it is worth noting that the company didn’t practice good security itself. But even more important, once hacked, the company screwed up the digital forensics. They made no effort to preserve digital evidence and in fact did the exact opposite. I don’t wish to call them liars, but their word is just their word. To me the following is very damning to the company: the magistrate said Patco erred by “having irreparably altered the evidence on its hard drives by running scans on its computers and continuing to use them prior to making proper forensic copies.”

Maybe what needs to happen is better outreach to small business. If you think you have been the victim of computer crime, your computer, and especially the hard drive, is now the crime scene. If you came home and found your spouse in a pool of blood, you wouldn’t set about mopping up the blood and shooting holes in the furniture just in case the killer was hiding behind one. So why would the company do the digital equivalent?

Nick P June 19, 2011 1:37 AM

“but that the untrusted system might somehow have a way to completely create a separate, damaging transaction (one for more money to a different destination, say) and replace the encrypted one with their own”

The beauty of the design is that that’s not possible without compromising the appliance. This is actually the whole reason for using a digital signature scheme. Any transaction details the bank receives must have been hashed and signed with the appliance’s private key. Without the private key (stored in the appliance), malware on the PC cannot (1) create any authorized transaction or (2) modify any signed transaction. The user sees, via the appliance, exactly what they are signing. Transactions are dropped unless they are signed by the appliance. The situation you are describing can’t happen by design. What remains are availability issues, or perhaps psychological tricks to make the user distrust the appliance (those are always available, though).

But, assuming we want that availability or reliability, we can move forward with your idea…

“But in this case the separation would be done precisely because of the lack of trust of the PC Internet access. The appliance takes care of the problem of compromising a single machine with both Internet and VPN by splitting off the VPN part into another device separate from the potentially compromised PC.”

True. This doesn’t add to transaction protection, for the reasons described in the previous paragraphs. It only increases security if the appliance leans more toward low assurance than high. However, it does make the appliance a harder target and increases the effort for the attacker, improving availability. It also adds cost to the device. Whether it’s worth it or not depends on the person purchasing it. That’s how it looks to me so far.

Mark Currie June 19, 2011 7:12 AM

@Nick P,

There are transaction security devices that have been around for a long time (over a decade in fact) that effectively thwart the man-in-the-browser threat. So why don’t we all have one? I believe that the problem is due to the traditional IT service-centric model. All these solutions require service providers to make huge financial investments in user gadgets and back-end security servers. Perhaps more importantly, they also have to invest in extensive client support services and other non-core support infrastructure. Essentially they have to become IT vendors to their clients. This is a hard sell and one which must be repeated for each and every service provider. It’s no wonder that these devices have not had significant market penetration on a global scale.

This service-centric model also has negatives on the user side. As users we have no choice in what type of gadget we are issued and we may have to accept having different types of gadgets from each service provider. What we want ideally is a user-centric solution that works with all service providers. This might seem impossible to do without requiring back-end support but it actually isn’t. I wrote a paper on this a while back and if anyone is interested here is the link.
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5409720&isnumber=5409687
(If you don’t trust the link you can google “in-the-wire authentication”)

The gadget that I developed is similar to IBM’s ZTIC that you mention above. However, it not only allows you to confirm information on its integrated display, it also allows you to insert information directly into the encrypted stream. Passwords, account numbers, ID numbers, etc. can be pre-stored on the device, or directly entered via an integrated user input mechanism.

Put simply, the gadget is effectively an HTTPS proxy, enabling it to control information on the (trusted) server side of the link. It only implements a very small core of the TLS stack (necessary for security), and it also understands a very small subset of HTTP/HTML, allowing it to filter and display/insert information in the clear text.
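
As a very loose illustration of the insertion idea (the placeholder syntax and the field store below are invented for this sketch; the real device works at the TLS/HTTP layer as described in the paper):

```python
# Secrets held only on the gadget; the PC and browser never see the real values.
DEVICE_STORE = {"PIN": "4821", "ACCOUNT": "12-3456-789"}

def insert_secrets(clear_text: str) -> str:
    """Replace placeholder tags (e.g. {{PIN}}) with values from the gadget's store,
    so the plaintext handed to the TLS layer contains the real credentials."""
    out = clear_text
    for name, value in DEVICE_STORE.items():
        out = out.replace("{{%s}}" % name, value)
    return out

# The browser only ever submits placeholders:
print(insert_secrets("user=acme&pin={{PIN}}&acct={{ACCOUNT}}"))
```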

Nick P June 19, 2011 1:05 PM

@ Mark Currie

“All these solutions require service providers to make huge financial investments in user gadgets and back-end security servers.”

Traditional solutions do. This is a problem my design was intended to solve. It only requires software modifications on the backend. The front end investment is paid for by the customer so their liability is reduced. It’s purely a client-side security scheme.

“I wrote a paper on this a while back and if anyone is interested here is the link. ”

Thanks for the paper. I’ll check it out.

“Passwords, account numbers, ID number etc. can be pre-stored on the device, or directly entered via an integrated user input mechanism.”

Sounds neat so far.

“It only implements a very small core of the TLS stack (necesary for security) and it also understands a very small subset of HTTP/HTML allowing it to filter and display/insert information in the clear text.”

Then the complexity creeps in. There have been few implementations of HTTP/HTML parsers without errors. The assurance level of this device would necessarily be lower than that of a simpler device. I do like the functionality. Even so, it seems like it would require more software development investment for each client than a standardized signing app. What are your thoughts on that?

jfbauer June 19, 2011 5:01 PM

@Nick

Sounds like you are referring to the “Treasury Workstations” of the 80’s where the bank would provide a dedicated PC to the corporate customer for executing bank specific transactions and getting bank specific reports. All “sans Internet”.

Nick P June 19, 2011 5:27 PM

@ jfbauer

I wasn’t aware of them. Sounds similar in principle. Just goes to show even more how many modern IT security problems were solved in the 80’s. We just keep reinventing the wheel.

Jim June 19, 2011 6:24 PM

@BillK and @Papafox: There’s one BIG difference between this case and the Michigan case: in the Michigan case, the bank became aware of the fraud, and failed to prevent several further transactions from happening. That sounds to me like negligence on their part.

@Gweihir: I thought the same thing at first. Krebs left out something in his blog. This sentence suggests some kind of physical device ID was used as the second factor:

“ZeuS also allows attackers to tunnel their communications through a victim’s own PC and browser, an attack method that can negate the value of a device ID as a second factor.”

The blog doesn’t mention what device, but I downloaded the PDF of the decision, and it seems the bank (or, more accurately, the clearing house providing the financial services) was using a cookie and the IP address of the PC where the transaction originated as the “something you have” factor.

@AlanS: Unfortunate, yes, but it should not have had any effect on the trial. The decision document quotes the FFIEC guidelines as saying “financial institutions should periodically . . . [a]djust, as appropriate, their information security program in light of any relevant changes in technology.” ZeuS seems to me to have been a pretty relevant change.

Clive Robinson June 19, 2011 6:29 PM

@ Nick P,

Sorry, been busy this weekend (three guesses), so I have just had a look-see.

My first thought on your device is that it has an issue or two which you need to address.

The first is the issue of an “end run” around the end of the security chain.

With existing PCs the human is outside of the secure channel, as are the device drivers on the PC. Thus those pesky shim attacks that change what gets displayed on the screen still work.

Ideally the user has to be put inside the security chain. That is, the “transaction authentication device” (TAD) has a small screen and keyboard that the user uses to operate it as part of a protocol:

1) The user logs onto the bank from their PC and authenticates the session initialisation in a sensible way.
2) The user types the transaction request into the PC, and it goes off to the bank.
3) The bank processes the request and produces some kind of transaction authentication code, which it sends to the PC.
4) The user reads the code on the PC screen and types this code into the TAD.
5) The TAD then uses the code to display the transaction details on its screen, as well as an “accept code”.
6) The user verifies that the info the TAD displays is the same as they typed into the PC in step 2. If it is, then the bank has the correct details, and the user types the accept code from the TAD screen into the PC to complete the transaction.
7) The bank checks the accept code, and only if it’s correct does it process the transaction request.

In this way the transaction security actually goes through the human in a way they can verify, which is important.
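
To make the round trip concrete, here is a rough sketch assuming the bank and the TAD share a per-device symmetric key; the encodings (base32, a truncated HMAC for the accept code) are mine and only for illustration:

```python
import base64, hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def bank_make_tac(shared_key: bytes, details: bytes) -> str:
    """Step 3: the bank encrypts the transaction details into a typable code."""
    nonce = os.urandom(12)
    blob = nonce + AESGCM(shared_key).encrypt(nonce, details, None)
    return base64.b32encode(blob).decode()       # the user types this into the TAD

def tad_decode(shared_key: bytes, tac: str):
    """Steps 4-5: the TAD recovers the details for display and derives the accept code."""
    blob = base64.b32decode(tac)
    details = AESGCM(shared_key).decrypt(blob[:12], blob[12:], None)
    accept = hmac.new(shared_key, b"accept|" + details, hashlib.sha256).hexdigest()[:8]
    return details, accept                       # user checks details, then types the accept code back

key = os.urandom(32)                             # provisioned into the TAD when it is issued
tac = bank_make_tac(key, b"pay 5000.00 to acct 987654")
details, accept_code = tad_decode(key, tac)
print(details, accept_code)
```

In practice the typed code gets long quickly, so the transaction details would have to be kept very terse.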

The cost of suitable TAD hardware could easily be down in the US$10 ballpark using a modern PIC or Atmel processor in the device.

The TAD hardware would have to be fully immutable.

Nick P June 19, 2011 6:49 PM

@ clive robinson

I’m away from a PC right now and can’t do a detailed reply yet. I did want to reply to your strange claim that the design we were discussing doesn’t keep people in the loop, etc. I think you’re looking at the first design I posted: a castle approach I threw together for shits and giggles.

My real design/recommendation was in my next post. It uses a TAD-like scheme, but the transaction appliance has different functionality. It was also specifically designed to account for driver issues and some basic interface covert channels. If you want details, see the reply to Marcos, then my discussion with RSH.

Dirk Praet June 19, 2011 7:12 PM

There are several elements in this story that puzzle me.

1) Why didn’t Patco talk to a subject matter expert before using this online banking system? Most such experts would have laughed its security model away. And did they really use a probably poorly secured or patched PC running a popular all-purpose OS for these transactions? That’s just asking for trouble.

2) The judge mentions in his decision that there is basically zero case law on [the question of what constitutes reasonable security] for the banks. Does this mean that no other company in the US has ever sued a bank before over a similar case, or that other such cases were settled out of court?

3) Are banks in the US so poorly regulated that they are not even required to take out some form of insurance to protect both themselves and their customers against online fraud? That would be unthinkable over here.

4) Did Ocean Bank mention anywhere in its terms of service that in case of online fraud, the customer might end up paying for the damages himself?

On a related sidenote: one of the comments on the Krebs article mentions a US Air Force LiveCD called LPS (http://spi.dod.mil/lipose.htm). Anyone familiar with that one?

Richard Steven Hack June 19, 2011 9:19 PM

Clive: The way I read your description of the TAD, what the bank does is receive the transaction request, then ENCODE the transaction request into the “transaction authentication code” (herein TAC), which is then sent to the TAD via the PC along with an encrypted “accept code”. The TAD decrypts the TAC, displays the transaction details, then the user hits “Accept”. The TAD sends the encrypted accept code to the PC, which sends it to the bank, finalizing the transaction.

My suggestion to Nick was that the TAD handle the whole thing. His objection was that the comms handling of a TAD-direct-to-bank session was too insecure. So the PC should handle that but the only communication between the PC and the TAD would be over a simple wire transfer protocol and in pure text format, to minimize vulnerabilities associated with the IP stack and parsing the transaction details.

I like your idea, if that is what you’re suggesting, to encode the entire transaction request on the bank side, send it to the PC and on to the TAD for verification by the user that what the user sent and the bank understands are the same.

This would seem to introduce the possibility of vulnerabilities in the TAC encryption while it’s on the PC or possible manipulations of the TAD encrypt/decrypt software via the PC, which is what Nick wants to avoid by only transferring the transaction in pure text. But I’m inclined to dismiss that; we have to draw the line somewhere. If the crypto software is not trusted, the whole exercise is a waste of time.

I agree the cost of the device is mostly related to how big the embedded LCD screen would be which is a function of how big the transaction details are, which presumably are no more than a few lines of text, and also the cost of the processor needed to do the crypto in a reasonable time. Device probably wouldn’t be any bigger than a keyboard or a large DSL modem or switch.

Jay June 19, 2011 9:24 PM

@Clive: as I understand it, you’ve described Nick P’s system – with an extra step of copying binary data between TAD and PC done by a human.

I like his system (assuming no protocol vulnerabilities… and with simplicity and RS-232 we can prove that). I’m not so good at transferring binary data in my head; it will irritate your users, and it’s the TAD displaying the transaction and requesting confirmation that provides the security properties.

Richard Steven Hack June 19, 2011 9:56 PM

Ah, I missed the part where Clive requires the USER to manually transfer the code to the TAD. Yeah, that’s a killer to get right for a human. Agree with Jay that as long as the user confirms the transaction details displayed on the TAD – which also confirms that the bank has received it correctly and it was not tampered with midstream – that is sufficient.

Clive’s approach COULD work; it’s basically a manual version of sending the transaction details to the TAD. It’s just prone to requiring repeated attempts (unless the TAD allows significant editing, which means at least a keypad and probably a keyboard, as well as editing software (something better than vi, BTW!)).

Richard Steven Hack June 19, 2011 10:04 PM

How about authenticating the TAD?

Here’s a scenario:

1) User keys in transaction on PC, sends to bank via his PC using whatever SSL/VPN methodology the bank requires.

2) That transaction is intercepted by some compromise of SSL/VPN via MITM. Hacker now has all the user ID, password, transaction codes, etc. Hacker does nothing to interfere with the current transaction, merely obtains all the info he needs to initiate a transaction later.

3) The transaction is finalized by the user using their TAD.

4) Hacker now takes the intercepted transaction and runs it against the bank using HIS TAD that he acquired from the bank under a false corporate or personal identity.

So the TAD needs to be authenticated by the bank as belonging to the correct user, no? And as Clive says, how can we make that immutable? If a hacker can get his own TAD, and it is not immutable, then he can modify it to imitate another company’s TAD. Then, if he can obtain user transaction information via a compromise of the user’s PC, the bank’s server, or a MITM attack, he can run his own transactions against the bank and have them verified by his TAD.

Andy June 19, 2011 11:20 PM

“2) That transaction is intercepted by some compromise of SSL/VPN via MITM. Hacker now has all the user ID, password, transaction codes, etc. Hacker does nothing to interfere with the current transaction, merely obtains all the info he needs to initiate a transaction later.”
The TAD could use a random part of the exchange as an IV for the encryption, one that doesn’t get passed over the wire as-is.
The attacker would need to guess or extract a key from the TAD (it shouldn’t leave the device), and both ends would check the two public keys. If the bank adds data derived from your public key and its own, and you add data derived from the bank’s public key and yours, then if there is a MITM the public keys won’t match; and if the exchange is just passed along unmodified, the MITM would need to break the SSL encryption.

RobertT June 19, 2011 11:30 PM

I have not completely read all the replies, but I’ve noticed that there is a lot of concern about costs. From a chip maker’s perspective the required hardware is trivial and costs less than $0.30 USD in volume. The problem is that banks have BIG volumes, especially if they try to send such a device to every client. There is the added risk that if they send it to only half their clients, then the other half can reasonably say that they were given inadequate security (if they incur any loss).

So the banks already have the ideal system, “reasonable security,” where their customer suffers the loss. Nothing needs to be fixed?

IMHO the most likely solution will come from cellphone-embedded near field communications (NFC). The NFC security processor will provide a secure token as one part of a two-factor authentication system.

I seriously doubt that US banks will be the first to deploy this technology, because they suffer no risk, so there is nothing for them to fix. However, the whole world’s laws are not so perverted, so countries like Japan and China will probably deploy this technology ahead of the US.

Adding NFC security unfortunately opens up classes of MITM attack that are improbable today. These MITM attacks are very hard problems to fix, especially with an RF link between the token and the system.

You gotta love this business, because fixing that link problem will pay next year’s salary.

tommy June 20, 2011 1:33 AM

@ Dirk Praet:

“On a related sidenote: one of the comments on the Krebs article mentions a US Air Force LiveCD called LPS (http://spi.dod.mil/lipose.htm). Anyone familiar with that one?”

No, but a quick look is enough to interest me. Downloading it as we speak, and will look at it at leisure during the week.

Interesting that the US Air Force is making this publicly available. (Hey, we US citizens paid for it, you know!) And that they chose Firefox among all possible browsers. That’s a pretty good (relative) endorsement. I hope they either included NoScript or provided for an add-on, but since it’s an unwriteable CD, all configs would be permanent. No whitelist, so everything would be a temp-allow (session-allow). But then, so is the whole system.

And the malice prevented by NoScript can’t write to your HD anyway. It’s just that a lot of bad stuff can happen in the browser.

When I get a chance to test-drive this baby, will post back. Might be a few days. Thanks for the link. (145 MB — it’s still d/l-ing.)

Nick P June 20, 2011 2:05 AM

@ jay

That’s right. His design is conceptually the same, but two steps are now manual. Will probably be the last time he slips like that. 😉

@ Richard Steven Hack

“This would seem to introduce the possibility of vulnerabilities in the TAC encryption while it’s on the PC or possible manipulations of the TAD encrypt/decrypt software via the PC, which is what Nick wants to avoid by only transferring the transaction in pure text. But I’m inclined to dismiss that; we have to draw the line somewhere. If the crypto software is not trusted, the whole exercise is a waste of time.”

You can transfer the initial setup and signed authorization to the bank however you want for your needs. The important part is that the appliance receives the transaction details, displays them, contains the private key(s), and signs (or doesn’t) the transaction details. This is what provides the real security. Any problems in the rest only lead to availability (e.g. Denial of service) issues. That’s why I focus on these critical parts.

“So the TAD needs to be authenticated by the bank as belonging to the correct user, no? And as Clive says, how can we make that immutable? If a hacker can get his own TAD, and it is not immutable, then he modify it to imitate another company’s TAD.”

My scheme includes that. I elaborated in my post at 1:37am that…

“Any transaction details the bank receives must have been hashed and signed with the >>>appliance’s private key<<<.” (emphasis added)

And the bank issues them. I can’t remember if I mentioned that, but I should have. The bank would link a certain device’s public key with specific users, accounts, or however they look at it. As for stolen devices or imitation, I mentioned an authentication mechanism but didn’t elaborate, to leave it open. Here’s more detail: the private key would be encrypted in storage and decrypted upon use. The authentication process would include the decryption. The Dresden e-commerce demo inspired this addition. The authentication would involve at least a password, possibly plus a smartcard or USB token.

Regardless, the fact that each device has a unique key pair, with the public key associated with specific bank customers, should prevent many forms of spoofing (aside from theft of the transaction appliance).
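
A small sketch of that key-at-rest protection, assuming a password-derived wrapping key; the scrypt parameters and the AES-GCM choice are placeholders, not a recommendation:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def wrap_private_key(private_bytes: bytes, password: bytes) -> bytes:
    """Encrypt the appliance's signing key under a key derived from the user's password."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password)
    return salt + nonce + AESGCM(kek).encrypt(nonce, private_bytes, None)

def unwrap_private_key(blob: bytes, password: bytes) -> bytes:
    """Runs at authentication time; a wrong password fails the AEAD tag check."""
    salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password)
    return AESGCM(kek).decrypt(nonce, ct, None)

wrapped = wrap_private_key(b"\x01" * 32, b"correct horse battery staple")
assert unwrap_private_key(wrapped, b"correct horse battery staple") == b"\x01" * 32
```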

Nick P June 20, 2011 2:13 AM

@ Dirk Praet

I looked at its description. It seems much like a forensics-type LiveCD with productivity apps. I don’t think it achieves much more than other LiveCDs in terms of either security or privacy. In the privacy case, the type of distro, plugins, etc. can be a trace in itself. The most common are Incognito, Knoppix, and Ubuntu. On the security side, it appears from the description to trust the boot process and the BIOS. Advanced persistent threats, especially in the BIOS or PCI/DMA devices, are still a risk.

tommy June 20, 2011 2:35 AM

@ Andy: The USAF system is a free download. Burn the .iso image to a CD with your favorite tool, and boot from it.

@ Nick P., Richard Steven Hack, Clive Robinson:

Love watching you guys brainstorm. Agree that for a business that xfrs even 100k at a time, a one-time cost in the amount described is reasonable, if mgmt can be made aware that it’s necessary. Yes, the bank should impress that on them, and yes, they’re more likely to if the bank has some liability. Keep going with it, and test a prototype!

However, very few home users are going to go to the expense or inconvenience, unless it’s mandated by law, and they won’t like it.

In the brainstorming spirit, and this is just off the top of my head so I can catch some sack time before the week starts (i.e., cut me some slack if it’s idiotic):

What would it cost, in volume, for each bank to have a live CD, which might boot the same mini-OS but with unique burned-in identifiers for each bank, and that would support no more than a browser, a GUI, and an encrypted connection? The user shuts down the machine if it’s running and reboots from the CD, which has a GUID much as a Windows DVD does; that GUID was attached to the user’s account when they opened it and were issued the CD. They still must use conventional login credentials, which they are already used to, as another authentication factor.

The CD could be hard-coded with the bank’s public key (a new one every three years, big deal), for the purpose of verifying correct navigation only (standard session-key negotiation and encryption), and the browser hard-coded to always use that public key in transmissions, thus preventing fools from opening other windows or tabs and going other places while also doing online banking. Perhaps hardcode a tiny URL like bofa.com, which will redirect to the login page, so that the bank has the freedom to change the URL of the login page as necessary, which sometimes happens. IOW, this CD is useless for anything except browsing to that bank.
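
Something like this sketch is all the pinning logic the CD’s browser would need; the hostname and fingerprint are placeholders that would be burned in when the CD is mastered:

```python
import hashlib, socket, ssl

PINNED_HOST = "bank.example.com"                       # placeholder, burned onto the CD
PINNED_CERT_SHA256 = "replace-with-the-bank-cert-fingerprint"

def bank_connection_is_authentic() -> bool:
    """Open a TLS connection and compare the server certificate against the burned-in pin."""
    ctx = ssl.create_default_context()
    with socket.create_connection((PINNED_HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=PINNED_HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            return hashlib.sha256(der_cert).hexdigest() == PINNED_CERT_SHA256
```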

It’s not subject to malware in the user’s main OS. A keylogger couldn’t work unless it could store the entries and later phone home when the user is back on the COTS OS, and even then the attacker doesn’t have the unique CD.

Login attempts that are too different in IP address and too close in time, or radically different in country (Asia vs. UK/US, e.g.), could be rejected or require phone authorization. I’m thinking here of a lost or stolen CD, or a thief somehow making copies and then covertly returning the original, with the user unaware. But this still allows the user to travel and take the CD with them.

Last issue is MITM…. thinking … phishing won’t work if the bank URL is hard-coded. Compromised ISP — we’re all hosed anyway.

OK, tear this to shreds, or patch it up if the idea is usable. If all John or Jane Average has to do is “Pop in our CD, and turn on your computer for assured safe banking online”, I think they’d do it — especially if there were no other option than what RSH says: Suck it up and drive there.

Thoughts?

One more: A VPN connection in the 5.x.x.x address space, like a Hamachi network, in which the bank gives you credentials (in person, or by certified mail) to a network they control (unlike the Internet) regarding who may and may not access it. My Hamachi VPN seems very secure, with AES-256. Why not get off the Net altogether (well, use the same cables, but a different space), and let each high-security business run their own VPN, with unique, limited access for each customer?

Hamachi:
https://secure.logmein.com/products/default.aspx

Andy June 20, 2011 2:51 AM

@Tommy,
“The USAF system is a free download. Burn the .iso image to a CD with your favorite tool, and boot from it.” Will bookmark it for later when I’ve got a spare computer and gear. I downloaded Encase and ran it; it turned my laptop into a paper weight..:(

uk visa June 20, 2011 5:24 AM

@Steve K you say,
‘The solution here belongs with the legislatures, not the courts.’

Whilst I agree with you, in theory – the trouble in practice is that the legislatures are, unfortunately, bought and paid for by banks/lobbyists so they’ll never act.

Clive Robinson June 20, 2011 7:28 AM

@ Jay,

Yup, the systems are the same except mine actually has the human in the security loop, not just observing it as Nick P’s and quite a few others do, and the difference is important.

You can find my reasoning on this on this blog site somewhere (it’s probably five or more years ago I posted it when Bruce called into question even two factor authentication). It came out of work I was doing back in the mid 1990’s and made public back in 2000. I made quite a bit of noise on the Camb Labs blog when one of their bods was involved with some authentication using a grid of coloured dots on a web page and a smartphone app using the phone camera as a way of sending a couple of thousand bits of data (as I pointed out you could covertly send as much if not more data just by intensity modulating the dot brightness).

Back in the 1990’s I originally looked at using some kind of electronic link such as a serial port etc but realised two things,

1, It puts the human outside the security loop as it effectively makes them an observer, not a participant.

2, Any connection where the human cannot see the bits flowing up and down could carry malware; worse, even if they could see the bits, that would not stop a time-based side channel.

Oh and of course these days there is another issue, “no serial ports on PC’s”: it’s all USB / FireWire etc…. and both protocols are really scary from the malware perspective.

As for the mutability of the device this is an open question. From a production point of view it should be easy to change or upgrade on the production line, and possibly even as an “after sales” option (This is something RSA are probably thinking long and hard on).

In essence it needs to be thought of as something like a mask-programmed (I know, Flash ROM these days 😉) microcontroller with extra in-built EEPROM. For simplicity of explanation the “secrets” (which I’ve deliberately left vague) get programmed in like a serial number etc via some PCB pads inside the TAD casing. The casing provides a reasonable degree of security, especially with a tamper-evident seal over the programming point. I originally envisaged the TAD to be very similar to a credit-card-sized calculator.

[As RobertT will hopefully point out there are now structures that can be put on the chip that provide secure serial numbers and keys etc; he works at that level so it’s best to let him explain it (I’ve over-generalised on analogies in the past with things like “burn in” to make such things comprehensible in human-world terms, but the quantum world is so different it ends up looking silly at best).]

The real issue/problem with the idea as has been noted is the quantity of data the human has to copy (twice).

My original idea was to find some way of using data compression in a way that was easy for humans but very difficult for automated computer programs.

What I came up with at the time now stands as an object lesson in how there is always a way around shortcuts in security for a motivated attacker….

I decided to use some kind of shortened checksum of a hash etc of six or seven digits, which the bank would encode as captchas, so relatively easy for a human to read off the PC screen, but very difficult for computers (and yes I did feel pleased with the idea at the time).

However… the “ink was hardly dry” on the idea before people came up with the idea of using humans to get around captchas, whereby you can rent a “chinese sweat shop worker” to manually re-encode captchas for a few cents or less per captcha….

Typing in binary/hex data is at the end of the day one of those fundamental limitations of ordinary human beings. However it can be reduced by using fake but pronounceable words (see work on non-dictionary memorable password generators from the 1970’s). Quite a few studies have shown that whilst most humans are challenged by an eight-character password, most can remember a fifty or more character pass phrase in human readable form (such as “small elves have long ears whilst big dwarfs have long bushy beards”) with little difficulty. So there may well be ways of encoding binary data in a much more human-usable form.
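
For instance, something along these lines, purely as a sketch (the syllable alphabet is invented, and a real scheme would want a checksum tacked on):

```python
# Sketch: encode each byte as a consonant-vowel-consonant-vowel "word" so a
# human can copy binary data between PC and TAD with fewer errors. The two
# consonants carry the nibbles; the vowels carry nothing, they just make the
# result pronounceable.
CONS = "bdfghjklmnprstvz"   # 16 consonants, one per nibble value
VOWELS = "aeiou"

def encode(data: bytes) -> str:
    words = []
    for i, b in enumerate(data):
        hi, lo = b >> 4, b & 0x0F
        words.append(CONS[hi] + VOWELS[i % 5] + CONS[lo] + VOWELS[(i + 2) % 5])
    return " ".join(words)

def decode(text: str) -> bytes:
    return bytes((CONS.index(w[0]) << 4) | CONS.index(w[2]) for w in text.split())

# e.g. encode(b"\x3a\x91") gives "gapi nedo", which round-trips through decode().
```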

At the end of the day you have the choice between taking the security risk of a direct connection and making the human an observer -v- taking the security risk of using shorter data lengths and having the human in the security loop.

Whichever way, one thing is certain: there is no way anyone should use a “smartphone app” as the TAD (which unfortunately is probably the way it will go).

Mark Currie June 20, 2011 10:37 AM

@Nick P,

“…It only requires software modifications on the backend. The front end investment is paid for by the customer so their liability is reduced. It’s purely a client-side security scheme.”

I like the simplicity of your system on the user side but I wouldn’t agree that it is purely a client-side security scheme. My model might have a thicker client, but it really is a client-side-only scheme.

While there are various models of PKI, full-blown client PKI is a tall order for a bank. If you get a chance to read my paper I address this issue there (BTW the paper’s not only on IEEE) but I will cover some of these and others here:

There are models of PKI which do not require third-party trust agreements with CA’s but these would require much more changes at a systems level:

  • Client authorisation models
  • Major changes to databases and other back-end infrastructure

If client certs are to be used then, given the scale of the client base, the bank would need to set up a proper CA bunker with disaster recovery systems (big bucks). This cannot easily be outsourced due to liability complications. Even if it could be outsourced you would still be talking big bucks in legal agreements, liability insurance etc.

In either case the bank would still have to invest in:

  • Infrastructure and training for capturing/issuing client certs/keys at all branches.
  • Help desk support (more training).

“Then the complexity creeps in. There have been few implementations of HTTP/HTML parsers w/out errors. The assurance level of this device would necessarily be smaller than a simpler device. I do like the functionality. Even so, it seems like it would require more software development investment for each client than a standardized signing app. What are your thoughts on that?”

Yes, complexity is very important and I address this in my paper. In order to achieve a thinner server (or none) you typically need a thicker client. However we are not really talking about a lot of complexity. It doesn’t need a TCP stack but it does need parts of a TLS stack; I have used code from OpenSSL for this, which is pretty well tested. You don’t have to have a full-blown HTML parser either. You are only searching the text for a PIN/password response and, in the case of banks, a beneficiary confirmation field. The PIN/password field is made easy by the fact that virtually all HTTPS servers use the standard HTML password form field, e.g. Google, PayPal, Amazon, LinkedIn, Facebook (I haven’t checked all banks but I haven’t yet come across one that doesn’t). A PayPal payment is simply based on the server-side TLS + password model, so the system can work without customisation. The beneficiary confirmation field requires a custom filter, but a very simple one. Getting banks to stick to some method of providing the confirmation field is not really a big issue (they can display it how they want but also provide it in an HTML comment field). If the system is used by enough users then you could look at putting out an RFC describing a new HTML extension for a “confirmation field”.
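
To show how little parsing is actually involved, here is a rough sketch (the comment-based confirmation format is my own assumption of how a bank might expose it, not something banks do today):

```python
# Sketch: no full HTML parser, just a scan for the standard password form
# field plus a hypothetical beneficiary-confirmation comment.
import re

PASSWORD_FIELD = re.compile(r'<input[^>]*type=["\']password["\']', re.I)
CONFIRMATION = re.compile(r'<!--\s*confirm-beneficiary:\s*(.*?)\s*-->', re.I | re.S)

def scan_page(html: str):
    wants_password = PASSWORD_FIELD.search(html) is not None
    match = CONFIRMATION.search(html)
    beneficiary = match.group(1) if match else None
    return wants_password, beneficiary
```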

I am on your side when it comes to being able to build your own solution and mine can also be implemented on a separate hardened PC, but I humbly feel that mine can really be a roll-your-own solution. The advantages are:

  1. Does not need the online service provider to do anything. If you know what you are doing, you could build the system at home now and be protected from all modern crimeware including phishing, arp/dns poisoning, MITM & MITB (e.g. Zeus, SilentBanker, etc.).

  2. Does not need to hide a private key i.e. the system can be implemented on open read-only media. Although I said earlier that the system allows for pre-stored passwords, it doesn’t have to – the user could still type them in on the trusted hardware when needed.

  3. No new encryption scheme required. The tried and tested server-side TLS + user password method of mutual authentication is preserved except that now you have a hardware barrier.

  4. Not only for banking – Since the use of the HTML form password field is standard on most HTTPS services the system can be used with all major HTTPS service providers.

Richard Steven Hack June 20, 2011 4:41 PM

Tommy: While your Live CD version of a TAD isn’t unreasonable, the problem is management of the distribution. Basically, you’re requiring the bank to be a Linux distro maintainer. They’re not going to do that.

The bank would have to provide support for those customers whose PC for whatever reason won’t boot the CD into the GUI (or even a command line – which of course no one is going to use.) Even these days, when the average Linux distro has little problem with detecting ninety eight percent of the hardware out there, someone ALWAYS has something it won’t work on. (I have business clients running Windows 95 on machines that are ten to fifteen years old.)

The same issue – support – would occur if the bank runs its own VPN.

And then of course there will be those customers who lose their CDs, break their CDs, have their CDs stolen, etc.

Both these ideas are perfectly good and very cheap on the end user side, but they would probably be unworkable on the bank side.

Still, it would be better than what they do now.

Nick P: Yes, if the device has a private key which is stored in hardware, a hacker would have to go some to duplicate it. Not impossible, but very hard. Probably not worth trying to defend against unless one is concerned about state actors instead of criminal hackers.

Mark: Downloaded your paper and will read it. Your concept sounds pretty good. I like the idea that it could be constructed by a tech-savvy person and used for their own communications without any changes to the Internet side.

RobertT June 20, 2011 7:53 PM

@Clive R
“ere is a rumour doing the rounds that LG Comms got a serious hit due to RSA problem”

sorry but I know nothing about this

tommy June 20, 2011 7:59 PM

@ Richard Steven Hack:

Thanks for the feedback. I was thinking that all the major banks (there aren’t many left in the US after the crash) would use the same mini-OS, which could be handled by a contractor = volume discounts, and the same with tech support. The vendor would use the same distro, with only the bank’s unique ID’s and a set of customer GUIDs burned into each batch shipped to them.

I’m thinking of my Acronis boot disk. (Doubt it supports Win 95, but people who choose to use that accept that there will be consequences.) It boots a mini-OS, complete with MBR, bootloader, and support for keyboard, mouse, low-color monitor display (like Safe Mode), native CD/DVD, and external USB devices, including flash, external CD/DVD read-writer, ext HDD, etc. Which is good, because I can store the backup I want to restore from any of those media.

The screen resolution is adequate for their purposes, but I’d like it to be a bit higher for banking. Not a big deal.

All of this in 49 MB, and it has no problem with any of my hw on two different models of laptop. A CD holds about 700 MB, so adding a mini-browser and connection capability should be doable.

The whole program is loaded into RAM and runs from there, so once it’s loaded, you can remove the boot CD, which is good if your backup is on another CD. Not applicable to the bank idea; just saying that the light footprint should keep errors small and require little maintenance. I’ve been using the same Acronis boot CD for three years now. Unaffected by changes in Windows, a new HDD, new CD-ROM drive, etc.

I’m not seeing a maintenance/update issue here. If it can only connect to the bank and vice versa, how does the attacker get in? Even if he does, he can’t change the read-only CD; hence, cannot install malware. The native HD would not be accessible from this CD.

Contracted tech support for those with issues, but if done right and tested properly, the extreme limitation of function means there’s less to go wrong. This is in line with the Poly2 paper that Nick P. and I have been discussing: they use a separate machine for each app, just as some here have suggested keeping a separate computer for banking-only. This way, just a separate CD for each bank.

Those who lose or break their CDs would be responsible for the cost of creating another with the same account info, which would be pretty high, or be given a new CD with a new bank acct #. Hey, my credit card #s get changed every time there’s a data breach. A bit of a PITA, but that’s the world we live in.

Haven’t had any maintenance issues with my Hamachi VPN in the four or five years I’ve had it. It’s pretty self-contained. Try it out yourself, just for chat with a friend, and see. But still, the VPN vendor, whether LogMeIn (who owns Hamachi) or another one, would contract support. I bet a lot of banks outsource their web site design, maintenance, and updates anyway, so it’s not an alien concept to them.

@ Nick P.: Did reply re: Poly2 at the 25% thread.

Nick P June 20, 2011 8:18 PM

@ tommy

Thanks for the heads up. Im on my fone now and dont get home till 1am CST tonight. Ill respond in depth to both posts then. We discussed the LiveCD approach on Krebs before, strengths + drawbacks. Till latet, u can google my name, krebs, jcitizen and livecd if u want to find it.

tommy June 20, 2011 11:09 PM

@ Nick P.:

Thanks for taking your time on the phone. I scanned the Krebs thread. The difference I see — on a quick scan — is that they’re still talking about general-purpose Linux distros, whereas I’m talking about a custom distro that supports only a mini-browser that can connect only to the one bank site, and SSL/TLS connection capability. Like in Poly2, minimal function minimizes potential flaws. Also, I don’t see how the attacker gets into such a strict-function system.

If it costs $1 million to develop, and the banks have 1 million customers, that’s $1 per customer, plus, say, $1 for each copy. $2, maybe every three years, for vastly improved security? Seems like a no-brainer to me.

Don’t stay up late on my account, please. The comments will still be there tomorrow – and the next day…

RobertT June 20, 2011 11:15 PM

@Clive R
“As RobertT will hopefully point out there are now structures that can be put on the chip that provide secure serial numbers and keys etc…:

Actually I’m a bit of a skeptic about the added security of so-called PUFs (Physically Unclonable Functions). As I’ve said before, the people claiming these functions are unclonable are not releasing enough information about the nature of the difference that is used to extract the PUF, so it is difficult to devise an attack method.

In my opinion PUF’s are “security by obscurity” which, as I’ve said before, is OK with me because it delays the time until an attack can take form (which is all any security can really hope to do).

IMHO all the “single ended” PUF’s are very easy to extract from the chip, there are some proposals for fully differential PUF’s that make external measurement extremely difficult.

Most external chip measurement systems are inherently single-ended relative to the on-chip data, so fully differential and even 3- and 4-way comparative systems are possible. These are especially difficult if the differential data is intentionally hidden in a field of single-ended random data.

Anyway, I’m giving away trade secrets, so I’m sure some PUF makers would be happier if I just shut up before giving away secrets for extracting differential data.

Richard Steven Hack June 21, 2011 12:02 AM

Tommy: Thanks for your further comments. I’ve used Hamachi in the past to do remote access to my clients until they went commercial and started charging for commercial use. Now I’m looking for another VPN to use and there’s no shortage of possibilities. Seems like everyone and his brother has a VPN based on the same concept as Hamachi.

I would go with Comodo VPN but they’re charging for commercial use as well. There are a couple VPNs that use Google or other major services to do the initial mediation server tasks, and I’m looking into them. So long as Google or the service used doesn’t lock them out at some point, they seem feasible. As a last resort, some of these VPNs can be hosted on one’s own Web server, but I’d have to check the ToS of my hosting service to make sure I could use it.

Also, just a quick note on whether malware can affect a live CD. If a live CD is providing Internet access via a TCP/IP stack, and if there is a vulnerability in that stack or the OS while in memory, it would be possible – albeit very difficult – for an attacker to exploit that vulnerability while the live CD is in operation, by manipulating the software in RAM.

An example of this sort of thing would be the Kon-Boot Live CD. You boot it, it modifies RAM and then boots Windows or Linux, and the modifications done in RAM disable the login mechanism of Windows or Linux so that you bypass it completely. It doesn’t always work, but I have used it once or twice to recover a client’s forgotten login password. It’s really slick when it works. Doing such a thing remotely just via some vulnerability in the bank browser or the TCP/IP stack or the OS in RAM would be really hard. But it is a risk to be considered.

I agree that if a bank, or better several banks, contracted for the development of the bank distro as well as distribution and support that the model probably would work well enough. And building a stripped down Linux is hardly a problem – Linux is probably the best OS for that since it can be embedded or rebuilt for virtually any purpose. For this sort of single-use purpose, it would be easy to make it so minimal and so simple and so hardened that it would be really hard to compromise it, especially running off a live CD.

So the main issue would simply be convincing a bank or a set of banks to do it. It may be that Mark’s idea would be better simply because it wouldn’t require significant work on the part of the banks at all.

Come to think of it, I wonder if it would be feasible for some distro creator to simply produce a live CD for use with particular banks even without the bank being directly involved. The creator could go to one or several banks, determine how their transaction system works, then embed that in a stripped down, hardened Linux distro. Then they could either go to the bank and sell it to them, or just give it away to the bank’s customers. Embedding the bank’s keys and such would have to be done with the bank’s permission, but beyond that the bank wouldn’t necessarily be responsible for the system.

Even the embedding of customer specific info could be done via some sort of custom CD burning facility: you order a distro made to order for your bank, provide end user authentication information, they burn the CD, ship it to you, then remove your private details from their system. Of course, you’d have to trust them to do it. Similar to the RSA situation in that respect.

Might be an idea for a startup in there somewhere with all these ideas. If the banks can’t provide adequate security, someone else needs to step up and do it at a price point small business and individuals can afford.

tommy June 21, 2011 1:26 AM

@ Nick P., whenever: re: USAF Live CD:

“I don’t think it achieves much more than other LiveCD’s in terms of either security or privacy….”

Do you think it’s not worth my burning, booting, test-driving, and poking? I don’t mind, but the time could be used for other things if you think it unlikely to be of any genuine benefit.

@ Richard Steven Hack:

I’m sure the banks expect to pay licensing fees for a VPN system. Hamachi offers a version that you can host on your own server, which banks would definitely want to do, both for security and control, and because Hamachi cannot guarantee 100% success in mediating clients. I think they claim 95%+, but any inconvenience makes bank and customer unhappy. So they host their own.

“Doing such a thing remotely just via some vulnerability in the bank browser or the TCP/IP stack or the OS in RAM would be really hard. But it is a risk to be considered…”

You’ve pointed out many times that complete security is impossible. Wouldn’t that risk be an epic reduction of the risks they pose now?

“So the main issue would simply be convincing a bank or a set of banks to do it. ”

Make banks liable if they don’t. Or just require it by law, or as a condition to keep your FDIC insurance. Granted, bank lobbies control Congress, but make it a win for them: It gets them off the hook, and if it’s enforced uniformly on all banks, it satisfies “compliance” quickly and easily. Paint the stick carrot-colored, and they’ll jump at it.

“Come to think of it, I wonder if it would be feasible for some distro creator to simply produce a live CD for use with particular banks even without the bank being directly involved.”

Only if they sell the idea to the banks and/or Gov first.

“Even the embedding of customer specific info could be done via some sort of custom CD burning facility: you order a distro made to order for your bank, provide end user authentication information, they burn the CD, ship it to you, then remove your private details from their system. Of course, you’d have to trust them to do it. Similar to the RSA situation in that respect.”

We’re right back to you authenticating yourself to a third party, and them to you, and you trusting them to remove the info from the db before someone hacks it. Under my plan, the distro creator issues pre-made CDs with a GUID for each, but has no idea which user will ultimately get it. They’d have to hack the bank’s db, much as in the RSA thing, and there goes their trust and a juicy, multi-year contract.

Also, most users won’t go to that trouble. If the bank hands them a CD when they walk in to open or upgrade the account, they’ll take it.

“If the banks can’t provide adequate security, someone else needs to step up and do it at a price point small business and individuals can afford.”

Unquestionably. Which is why if someone could design a prototype system and sell the banks and the Gov on it, they could get rich while keeping us all safer.

Re: Mark Currie’s method, and

@ Mark Currie:

“If you know what you are doing, you could build the system at home now…”

The first part of that statement eliminates 99.9% of those with bank accounts. I’m trying to make something that Juan and Juanita Normal-Medio can use with almost no instruction. “Pop in our CD and power on” – browser prompts for creds, and they’re in.

Or “Click the shortcut to start our secure network (VPN, but they don’t know that term), enter your password, and go.”

Mark, if you can do your solution to that level of non-tech user, which is vastly the majority, by all means, roll it out.

RobertT June 21, 2011 1:55 AM

@tommy & RSH

No disrespect intended, but you guys must be living in a time warp if you think laptops and desktops are even remotely relevant.

These days banking access discussion is all feature-phones / smart-phones and touch-pads (I mean 100% of new product discussions).

It might be possible to add the “live USB” functionality to a smart-phone, but I’ve never seen a phone actually boot from a USB (maybe possible I’ve just never seen it). This means that the corrupted Android / Iphone OS’s or app level malware are always going to be problems.

Anyway, I’m not trying to be negative, because it’s an interesting discussion, but I would like to shift the discussion from win95 and towards the second decade of the 21st century.

Nick P June 21, 2011 2:06 AM

@ tommy on banking livecd’s

Alright, now for a real reply. Tommy, what might have been hidden in the Krebs article is the attacks that are still present. We agreed on Krebs that a LiveCD for banking, even Ubuntu, is currently a good idea for individuals because most malware targets (a) Windows machines and (b) OS’s with persistence (i.e. not LiveCD’s). If banks started deploying Linux LiveCD’s en masse, you’d see a shift among the more sophisticated groups to targeting them. So, what’s the risk from there?

The first risks involve spoofing the bank. Many people are tricked into going to sites that look like their bank, which then perform MITM attacks. Knowing that banks love JavaScript, other browser-based attacks might work as well. Then, as we must assume they can access the public network, they might look for signs of banking activity and only attack when the livecd’s are loaded. Just reinfect and compromise each time it’s loaded. BIOS, SMM and Intel VT type attacks are still available. Covert channels leaking key material are still a possibility. Most of these are prevented by the secure appliance designs, but present in the LiveCD designs.

That said, your proposal is an improvement on LiveCD schemes. It even immunizes a user to several of the above attacks and makes the others harder. Banks or 3rd party companies could, like RSH agreed, pool together to make the lifecycle of the LiveCD cheap. Now, for a few problems.

The first is subversion. The LiveCD would be made using commercial software development methods, 3rd party components, a low assurance repository, etc. Subversion of the software to include a backdoor could happen at any point. It’s more likely than in a medium assurance development process. Recent examples of subversion include the Borland Interbase backdoor and Microsoft’s quality control people not noticing an entire game hidden in their Office software (one of RSH’s favorites).

The second is the user. Any good scheme must be seamless enough that users won’t pass it over for convenience. Pressing a button, looking at another device, and pressing/typing something on that device isn’t that bad. Heck, it’s what we do when we use debit cards at most POS terminals. Shutting down our system, loading a LiveCD (that’s SLLOOOWW), doing the work, shutting that down, reloading the main system, and reloading the work is more than most users do today even when they know the risks & the benefits of a LiveCD. (I’ve occasionally skipped the LiveCD when I was too impatient for a wait: “I have a locked-down Linux system. What’s the odds I’m infected anyway!?”) They prefer something quicker.

The third is interaction between LiveCD and filesystem. The software on the untrusted PC in my design can have arbitrary complexity. It can be Quickbooks or ADP for all I care. The user looks at a convenient format, clicks “Pay the Bastards”, and the software transparently converts it to a simplified format for the transaction appliance. If we use a LiveCD, does it include software to create and process the complex files? Potentially buggy software that could be exploited by a malware-modified payroll file? Or does it interface with the filesystem and attempt to parse these files? Legacy software and big vendors’ ACH software interoperability must be considered. These two issues increase the odds of an exploit via rigged file, LiveCD or not. Granted, they will need privilege escalation capability to use this to the best of their ability, but it’s a risk with potentially unforeseen angles. People keep finding holes in the Linux TCB so I’m sure we’d at least see one damaging zero day.

The fourth concern is specifically for Tommy’s scheme. His is one of the most secure LiveCD’s because it’s so locked down and tied to the bank. This might be its downfall, too. Studies on C.A.’s and certs show most sites have domain errors with their certs and many let their certs expire. Currently, even the average bank can’t manage its PKI right. Networks, IP’s, etc. often change regularly as well. A LiveCD locked down enough might be so inflexible as to lock OUT a user from their account. I’m not saying this would happen in your design, but that it should be considered. This is such an annoyingly complicated problem that I make comms stacks untrusted partly just so I can ignore such issues.

So, these are my risk considerations (right off top of head anyway) with a LiveCD (or LiveWORM-USB or LiveWORM-Whatever) approach. A secure appliance avoids these. Notice I’ve shown that, if a small percentage of banks used them, then there are serious security advantages to a LiveCD approach. Tommy’s extra lock-downs are better than most. However, widespread popularity would necessitate a change to more secure appliance designs like those proposed by Mark, Clive and I. Many have agreed the up-front cost would be worth the near total risk mitigation it earns against online, six-digit threats. My preference is that, if we’re investing big in something, to go ahead and do risk mitigation in this case instead of “risk management” (read: vulnerability breeding).

Btw: Acronis is a great product for backup and recovery, among my favorites. Its LiveCD is great too on many types of hardware. There’s a reason though: Acronis’ LiveCD is Windows, possibly Windows Embedded. Windows forensic or recovery LiveCD’s tend to work pretty well on PC hardware designed to run Windows. But, is Windows Embedded a good base for a widespread, secure banking platform? (See the Office subversion above for why my skin crawls at the idea.) A Linux LiveCD wouldn’t work nearly as well, as experience shows, although its security profile is better. Tradeoffs, tradeoffs…

Nick P June 21, 2011 2:17 AM

@ Mark

I’m sorry I haven’t given a detailed response and really read your paper. I’ve been a tad busy. I should have a response in the next few days.

@ RobertT

“No disrespect intended, but you guys must be living in a time warp if laptop’s and desktops are even remotely relevant. ”

Considering how much banking is done on them, I’d say the time warp only goes back a few minutes. 😉 Remember also, Robert, that we’re mainly talking in the context of the ACH and wire transfer frauds that are hitting businesses hard right now. They typically involve compromising a corporate desktop or laptop, altering the transaction details, and picking up the money.

These schemes are targeting those things, not mobile payment. You and I have already discussed some issues with that. It’s a whole different beast. I’m glad this beast is easier to tame. 😉

RobertT June 21, 2011 3:07 AM

@nick P

If what you are talking about is strictly a corporate (and small business) system then you might need to consider the following

1) Most corporate desktops have the USB’s and DVD disabled, to reduce network infection vectors.

2) It has been 6 years since I had a laptop with a CD/DVD, and my existing laptop has USB disabled, so your solution would not work for me, especially when I’m traveling (because I don’t typically take my own laptop), but I often need to do internet banking and money transfers when abroad, sometimes personal banking, sometimes corporate banking.

Clive Robinson June 21, 2011 7:03 AM

@ RobertT,

“Anyway, I’m not trying to be negative, because it’s an interesting discussion, but I would like to shift the discussion from win95 and towards the second decade of the 21st century.”

And that in a nutshell is the biggest problem. It’s not what the current technology is but the fact we know it’s going to be different tomorrow.

The classic example being the disappearance of the RS232 port. Yes it’s quite good for secure designs because it is very simple (remember it had to work with mechanical machines ;). But it was already “legacy” in the early 1980’s, and even Big and Medium Iron shifters don’t use it any longer; it’s been replaced with USB, FireWire and “Network attached” solutions, none of which can be easily or reliably made secure.

The solution for banking needs to be technology agnostic, very low cost, and the design needs to be as simple as possible so that the number of bugs and loopholes can be minimised. And by technology I don’t just mean hardware or interfaces or even software, I also mean algorithms and protocols that are dependent on any of them.

But importantly to be used it also has to be quick, simple, reliable and convenient for the user.

One of the reasons I talk about a TAD and human data entry is that it’s as close as we currently have to being agnostic to technology. Its downside is the “usability” issues.

We can talk technology and protocols etc till we are blue in the face, but in order to be any use the first and foremost requirement is that it has to be “convenient to use”, otherwise it is not going to get off the ground let alone fly.

Oh and on an up note, the 13:00 UK news has just announced that the UK’s Met Police in collaboration with the FBI has done a take-down on senior LulzSec members…

Nick P June 21, 2011 2:56 PM

@ RobertT

“Most corporate desktops have the USB’s and DVD disabled, to reduce network infection vectors.”

Most small businesses that are making the news don’t do this stuff. They do whatever their bank tells them to do on a regular PC. My setup is primarily targeted at them. For corporate systems, you’re right: that would be an obstacle. RSH had suggested building networking into the device so it could connect directly to the bank. I said, if we did, it would be a separate SOC connecting to the appliance SOC over non-DMA. Customers wouldn’t know the device contained two computers and don’t need to. If ports were available but incompatible, I’d just use an adapter.

But a corporate PC probably has at least network access, so I could use an onboard networking feature to send the data that way. The secure part of the transaction appliance would still receive the data over a non-DMA link with a simple protocol. It would just go to an untrusted, networked component first. Setup in the corporate environment would basically mean entering some network information into the system and changing some firewall rules. (Maybe trickier in very complex environments, but not much harder than putting in a new corporate PC)

“so your solution would not work for me especially when I’m traveling (because I don’t typically take my own laptop) but I often need to do internet banking and money transfers when abroad, sometimes personal banking sometimes corporate banking.”

Maybe, maybe not. If we added wired/wireless networking component described above, could it be made to work for you? I could imagine you connecting it to the internet and it connects to the bank. Then, you connect on a different device. You get a transaction going, the bank sends it to the device, etc. What do you think?

Mark Currie June 21, 2011 3:09 PM

@ Clive,

Your scheme certainly works and there are actually gadgets that do just that (or very close). They often add a shortcut whereby the bank communicates the TAC directly to the device via on-screen flashes that are picked up by an optical sensor on the device. This saves the human having to type a possibly lengthy cryptogram into the device. The resulting acknowledgement code that the human sends back is kept short. To be fair though, I have only really seen one device of this class that does it correctly.

My only issue with this method is the general one that I am on about i.e. the service-centric approach. There are good solutions already out there that conform pretty well to your criteria but they just can’t get the global market penetration. I believe that if you can find an acceptable people-driven client-centric solution you will have a winner.

@ Nick P

My paper is only useful in covering some of the many considerations around this issue and it gives details on my solution (although I have moved on a bit from there). I think that you have the gist of my solution already, so don’t stress if you don’t get to read the paper.

@ tommy,

I don’t really expect everyone to build their own system but I can see that the way I put it across could sound like that. I have proved the concept in a custom USB gadget (looks like a memory stick) with a small OLED display and joystick. This is my preferred implementation but it could also be implemented as a LiveCD solution on a separate PC.

I have given some thought in the past to the single PC + LiveCD solution that you propose. It certainly can be done without requiring any bank intervention and I agree with you that it is a whole lot better than the status quo, however like Nick I am also not happy using my normal PC as a security device. That’s actually what I have been trying to get away from, e.g.:

  • High complexity system connected to a network
  • Even the LiveCD is vulnerable to BIOS attacks and not many people realise that they have to make sure their PC BIOS is locked down
  • Vulnerable to hardware key loggers
  • Rather loud on the EMC side too

What’s nice about your method is that once the system is up and running, nothing changes for the user. However, rebooting into a LiveCD and then rebooting back to your original system each time you use it is pretty inconvenient.

I think that my solution is convenient under day-to-day operation, especially when implemented on a small USB gadget. It can store all my passwords, and since I don’t have to remember them, they can be strong passwords (it could in fact generate passwords for me). It actually requires less typing than you do right now since I don’t have to type in a website password (it automatically inserts it for me). I just need to remember one password which I enter directly on the gadget to enable it (could also be biometric). Note: I can also opt to reserve highly sensitive passwords like my bank PIN for manual entry when required.
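
Generating those passwords is the trivial part; just as an illustration with Python’s secrets module, since the gadget stores them they never need to be memorable:

```python
# Sketch: passwords the user never has to remember can be long and random.
import secrets, string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_stored_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```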

If you really want convenience you could make the device act as a USB keyboard where it types in the necessary keystrokes to automatically connect you, log you in, and land you exactly where you want to be on your website.

Your non-tech user point is valid since you do have to interact with the gadget. The convenience-vs-security thing is always an issue and the challenge is to find the right balance. I do have some more non-tech models of using my device but they bring other issues into play that require more consideration and I am still working on these.

Nick P June 21, 2011 3:15 PM

@ tommy

“Do you think it’s not worth my burning, booting, test-driving, and poking? I don’t mind, but the time could be used for other things if you think it unlikely to be of any genuine benefit.”

I don’t really see any benefit. The government is usually way behind on stuff like this. Especially with Linux because Linux’s development and release pace far exceeds anything the government produces. Their secure configuration guides are nice, but even those came after the hardening guides that hackers and security gurus posted.

Far as LiveCD’s go, your time is probably better spent improving an existing LiveCD setup for banking or trying to create an OpenBSD livecd for banking. Those OpenBSD livecd’s keep falling out of maintenance and support. Would be good to see someone keep one going.

“The first part of that statement eliminates 99.9% of those with bank accounts. I’m trying to make something that Juan and Juanita Normal-Medio can use with almost no instruction. “Pop in our CD and power on” – browser prompts for creds, and they’re in. ”

That was one of my primary complaints with the system. Who would keep track of parsing rules, templates, etc. for the bank web sites? It might be best if the banks themselves designed their login pages to make that work, but if a 3rd party builds this then someone has to maintain that stuff and constantly run scripts against banks to detect if they have changed their login in a way that breaks the functionality. If they do, then the device either won’t work anymore or might do something wrong in a way that causes problems. Again, though, these are just thoughts I had during a quick skim. I want to suspend judgement until I read & analyze the paper thoroughly. I will say that, whether I prefer it or not, it is a novel design.

Richard Steven Hack June 21, 2011 4:55 PM

Speaking of parsing rules: I worked many years ago for Bank of America’s Online Treasury Department as customer support for the MicroStar cash management system.

Part of that had bank employees configuring software to parse bank online cash management reports for our customers. This required telling the software what lines, columns and fields and text to look for to find and extract dollar amounts.

As a support person I had to update those when bank reports changed – and this happens quite a bit. This was back in the mid-80’s, so software today would probably handle it much better. But presumably the basic problem does remain.

tommy June 21, 2011 5:09 PM

Quickies:

@ Robert T.:

First, you say laptops and desktops are antiquated, then say, “This means that the corrupted Android / Iphone OS’s or app level malware are always going to be problems.”

Which is an excellent reason to do $100k xfrs from a real computer, not a telephone. Personally, I walk into the bank where they know me by face, and I know most of them by face, before wiring 100k, or even 10k. But that’s a once-in-a-while thing. This corporation (Patco) was doing payroll, IIRC. You’re not going to be doing payroll from a telephone, or you deserve whatever happens to you.

Since you said smart phones can never be high-assurance, please don’t mock those of us who are trying to come up with ideas for high-assurance systems for actual computers.

@ Nick P.:

Thanks – will skip the USAF system. Appreciate the time saving. Will respond to your analysis of the bank system later on.

RobertT June 21, 2011 9:19 PM

@NickP
“Maybe, maybe not. If we added wired/wireless networking component described above, could it be made to work for you? I could imagine you connecting it to the internet and it connects to the bank…”

I can imagine an NFC system with a secure link and two token system actually working for me, but I don’t think this is what you are describing.

I still have the problem that the PC and PC_OS are potentially infected so I’m not sure that any malware caused MITM attacks have been prevented.

RobertT June 21, 2011 10:01 PM

@tommy
“Since you said smart phones can never be high-assurance, please don’t mock those of us who are trying to come up with ideas for high-assurance systems for actual computers. ”

Sorry, I was not mocking, rather just trying to shift the conversation to a relevant time period (2013-2020) for development.

You see I work in the new product/ device chip development area, so what is new for Joe Public is something that I have usually spent the last 3 to 5 years working on. (BTW the first Iphone came out in 2007).

I personally think the Laptop is dead (sure it will take 3 years for sales to show this) but the baton has been passed on to new classes of Mobile devices.

“Since you said smart phones can never be high-assurance, please don’t mock those of us who are trying to come up with ideas for high-assurance systems for actual computers. ”

Tommy, I think if you read my previous posts you will see that I am very interested in high assurance computing. But I’m designing systems for future products (say 2 to 4 years out). At this time I expect there to be a significant shift to Pad’s and smart-phones as the dominant mobile computing devices. In other words I’ll leave the laptop at home and just travel with my phone, it will have presentations stored and phones will have built-in BT and WiFi to talk directly to display devices (TV, projector, monitor….) even Movies stored on my phone will play on the Hotel TV through wireless links.

The smart-phone is the laptop of the future, so if we don’t want a complete disaster then we need to be thinking about high assurance smart-phone systems, and thinking about them today!

Nick P June 22, 2011 1:54 AM

@ RobertT

“I still have the problem that the PC and PC_OS are potentially infected so I’m not sure that any malware caused MITM attacks have been prevented. ”

If you read my design description, you’d know the PC can be 100% subverted and that the security policy is still in effect. My design, modified for your situation, uses something like this: secure, minimalist, device for transaction verification/signing + untrusted PC to setup a transaction + untrusted device to connect PC or bank to secure device.

Other than the non-DMA link & careful protocol design, the reason the PC and communications devices can be untrusted is that anything the secure device receives is displayed to the user in text form. If you see a modification, then a MITM has happened. If you like what you see, hit the authorize button on the secure device and it signs the transaction with the onboard private key & sends the result to the bank. Only the secure device has the private key, so forging transactions is out of the question. The user must approve of what’s being signed, so tampering is evident. The worst thing that can happen is a loss of availability. (We can design around that too, but it’s easier for them to just use a different computer/network or fix their existing one.)
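
A bare-bones sketch of that flow, using the third-party “cryptography” package for Ed25519 signing (the message format and prompts are only illustrative):

```python
# Sketch: only the device holds the private key, and nothing gets signed
# until the human has seen and approved the plaintext transaction.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()       # provisioned once, never leaves the device
bank_copy_of_pubkey = device_key.public_key()   # registered with the bank at enrollment

def device_approve_and_sign(transaction_text: str):
    print("CONFIRM ON DEVICE:", transaction_text)   # a MITM edit would be visible here
    if input("Authorize? [y/N] ").strip().lower() != "y":
        return None
    return device_key.sign(transaction_text.encode())

# Bank side: verify() raises InvalidSignature if the text was tampered with in transit.
message = "PAY acct 1234 -> ACME Payroll $52,310.00"
signature = device_approve_and_sign(message)
if signature is not None:
    bank_copy_of_pubkey.verify(signature, message.encode())
```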

“You see I work in the new product/ device chip development area, so what is new for Joe Public is something that I have usually spent the last 3 to 5 years working on. (BTW the first Iphone came out in 2007). ”

I figured that. But, I wonder why you’ve never mentioned who you work for. You work on classified projects or just confidential?

“At this time I expect there to be a significant shift to Pad’s and smart-phones as the dominant mobile computing devices. ”

Agreed, as do most commentators. The push for people to just be content consumers is working.

“The smart-phone is the laptop of the future, so if we don’t want a complete disaster than we need to be thinking about high assurance smart-phone systems, and thinking about them today!”

Have you seen OK Lab’s Nirvana phone demonstration? I mention their OKL4 kernel on this blog a lot, but they’ve gotten more professional than academic these days. Rolling out lots of “solutions.” The Nirvana phone is basically a remote desktop client [on a smartphone] that you plug a monitor, keyboard and mouse into. Then, bam, you have a PC at your fingertips. Sweet, huh! Also can be behind the corporate firewall: using OKL4’s isolation capabilities to protect VPN keys from smartphone OS’s was one of the earlier design goals of the kernel. I’m sure they got that part covered at least as well as the competition.

I definitely agree that smartphones and mobile devices are big for the future. But, we need to plan for all of what’s in the future and what can be done in the present. Remember how many said that, with cheap laptops available, desktops would die out? How many people do you know with desktops? Probably plenty, but less than in the mid to late 90’s. The future will have desktops, laptops, smartphones and more. I think we need parallel efforts to develop high assurance solutions for each. The cost of entry into secure SOC development is so high I don’t stand a chance there. But, developing a little secure banking appliance that connects to desktops, laptops or bank networks is a lot more realistic for some of us. And the target market could and would use such a solution if the benefits were worth it.

Btw, I was wondering what the development cost of the hardware would be like. I was going to take an existing, high quality board and put new BIOS/firmware on it. What’s it normally cost to develop custom firmware for a board whose hardware specs are known? The custom firmware would be developed in a way to increase trust in its operations, maybe modular and layered. The board would probably be POWER processor and have some basic IO options. What kind of cost range would I be looking at?

RobertT June 22, 2011 4:33 AM

@Nick P
“If you read my design description, you’d know the PC can be 100% subverted and that the security policy is still in effect…”

I’ll re-read the thread and make sure I understand your security model.

“Have you seen OK Lab’s Nirvana phone demonstration? ”

I’m familiar with their products, but I don’t believe they address real-life security any more than simple virtualization fixes PC OS insecurity. It just raises the bar, a little.

“The cost of entry into secure SOC development is so high I can’t stand a chance there. But, developing a little secure banking appliance…”

If smart-phones can be secured then they will be the ubiquitous CHEAP platform that banks and others are looking for, so I don’t believe the market for laptop bank security products will exist IF we successfully secure phones. IMHO multi-touch devices (pads etc) have many more choices for password entry than do simple keyboard-based devices, and many of these input methods complicate MITM attack vectors, and this will delay the development of viable attacks.

“I figured that. But, I wonder why you’ve never mentioned who you work for. ”

I guess I don’t believe that who I work for is relevant, especially if I’m not intentionally pushing their products. I also don’t want my musings, especially about exotic attack vectors, being confused for the “acceptable risk” that is associated with all new product releases.

I’m not a US citizen and I don’t work for a US company, nor do I design products for US customers, so I’d rather spare myself the wrath of the Asia-phobic blog readers.

“The board would probably be POWER processor and have some basic IO options. What kind of cost range would I be looking at?”

Sorry I have no idea what this would cost.

Nick P June 22, 2011 2:03 PM

@ RobertT

All makes sense. As for Nirvana, I didn’t mean for it to be an example of security, nor OKL4 high assurance. You mentioned your belief that smartphones would become our computers. Nirvana is an implementation of that concept.

tommy June 23, 2011 1:24 AM

@ Robert T.:

Fortunately, by the time I was able to answer your non-mocking of the non-disappearance of laptops and desktops by 2013, Nick P. did it for me, and probably far better:

“I definitely agree that smartphones and mobile devices are big for the future. But, we need to plan for all of what’s in the future and what can be done in the present. Remember how many said that, with cheap laptops available, desktops would die out? How many people do you know with desktops? Probably plenty, but less than in the mid to late 90’s. The future will have desktops, laptops, smartphones and more. I think we need parallel efforts to develop high assurance solutions for each.” – Nick P.

Thank you, Nick.

You might also ask Nick, Clive R., Richard S. Hack, etc. why they put a good deal of time into commenting and critiquing my ideas for a single-bank Live CD or a bank VPN, instead of saying, “Forget it, Tommy — by 2013, nothing but smart phones will exist.”

I still need to study some of the replies, but got sidetracked here.

If you can produce the high-assurance phone that overcomes the problems you yourself cite, then I’m sure we’d all love to see it — SERIOUSLY. But Nick P. is still spending a good deal of time designing high-assurance desktops, as he told me in our ongoing thread,

http://www.schneier.com/blog/archives/2011/06/25_of_us_crimin.html?nc=70#comment-552709

I asked about laptops, and he said that it would take considerable additional design work to meet the size, weight, and cooling constraints, but he had an idea. Either Nick is wasting his time, or there will still be laptops and desktops in 2013.

I would have accepted the concept more if you’d said, “2020-2100”, but then, we still need to do something with what’s out there now, and for the next number of years. This problem can’t wait.

I, for one, will always have at least a laptop, no matter how good smart phones get in the next ten years. Consider the occasional comments phoned in here – there’s one from Nick, at this thread or elsewhere; Clive does so frequently: we readily forgive or ignore the chatspeak, u r rite 2 do so, and whatever typos appear. But do you want your next professional paper to look like that? … So I will always need a full-keyboard device, because I prepare contracts, legal papers, and other documents, with hundreds of thousands of dollars at stake on a single error, and am not about to try to do that on a telephone.

Don’t even mention voice-recognition. I watch fictional TV medical shows that are closed-captioned by VRT, I think. In fiction, the errors are humorous. In reality, they’d be fatal (not in the IT sense of the word, but in the M. D. sense). VRT of that quality is a long way off.

Looking forward to your banking-secure smartphone.

tommy June 23, 2011 2:53 AM

@ Nick P.:

Thank you for the detailed critique.

“The first risks involve spoofing the bank. Many people are tricked into going to sites that look like their bank,”

Recall that the design spec hard-codes a destination URL into the CD that cannot be changed. Later objection: “Networks, IP’s, etc. often change regularly as well. A LiveCD locked down enough might be so inflexible as to lock OUT a user from their account ” The design called for a tiny URL, such as bofa.com. Bank of America is then free to redirect that to whatever IP or login URL changes from time to time. The CD is still going to send a standard DNS lookup request for bofa.com. If a phisher invades the user’s normal Windows or Mac OS, he cannot change the Live CD.

“Knowing that banks love JavaScript, other browser-based attacks might work as well. Then, as we must assume they can access the public network, they might look for signs of banking activity and only attack when the livecd’s are loaded:”

The mini-browser will accept scripting only from the bank, or as required by the bank (e. g., akamai.net). Corrupting the user’s regular browser does no good. If an attacker can corrupt the BIOS, or arrange a delayed attack on RAM, etc., we’re hosed, but we need more secure BIOSs anyway. Keep working on that!
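
The allowlist itself could be tiny, something like this (the domains are just the examples from this discussion, not a real policy):

```python
# Sketch: the mini-browser drops any script whose origin isn't burned onto the CD.
from urllib.parse import urlparse

ALLOWED_SCRIPT_ORIGINS = {"bankofamerica.com", "akamai.net"}

def script_allowed(script_url: str) -> bool:
    host = urlparse(script_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_SCRIPT_ORIGINS)
```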

“The first is subversion. The LiveCD would be made using commercial software development methods, 3rd party components, a low assurance repository, etc”

Not in my plan. We get Clive and you to design it…. “Recent examples of subversion include … Microsoft’s quality control people not noticing an entire game hidden in their Office software (one of RSH’s favorites).” …. I think you missed my post to RSH that I personally had discovered a video game inside the Open Office 2 Calc program. I posted to their forum, and they laughed it off. I can get you the link if you like, or searching this blog for open office calc video should do it.

“Shutting down our system, loading a LiveCD (that’s SLLOOOWW), ”

Good point. Perhaps my very limited-function, low-footprint Single-Bank Live CD would load much faster than a general-purpose Live CD? It doesn’t have to support office suites, etc. …. Alternate possibility: Acronis offers to create a “secure zone”, a separate HDD partition not visible to Windows, to let you boot the recovery program directly from your HD boot menu. For low-tech users, we can make a simple button that will restart the machine from that partition, which should cut the time to a minute or less, HD booting usually being faster than CD booting. Do you think the Acronis “secure zone” is indeed secure, unaffected by the Windows running in the other partition, and therefore, almost the equivalent of a Live CD? Most of it could be hard-coded as read-only .

I did not intend for the Live CD to be able to access any files on the HDD or Windows partition at all, only to log into the bank securely. The user then uses the bank's UI to make transfers, etc. I'm still thinking more of a single user or family; payroll operations for a large company would require another solution. I already have a model, though: my County property taxes are paid annually by the bank that holds my mortgage and escrows the payments each month. Since my bank holds thousands of mortgages in this county or any other, they don't cut checks; they used to create a CD — or maybe a tape, for all I know — and send it to the County for processing. If they've moved to direct online instructions to draft their account and credit it to the County, I don't know at the moment how, but they're running the same risks. However, it's still single-sender to single-destination. Interesting topic to explore further.

If your bank security ideas become widespread, would the Live CD add any further assurances? If not, it might be a bridge from here to there, since as you say, "if a small percentage of banks used them, then there are serious security advantages to a LiveCD approach. Tommy's extra lock-downs are better than most. However, widespread popularity would necessitate a change to more secure appliance designs like those proposed by Mark, Clive and I." … Start with the Live CD for now: I don't think it would be that hard to create; it runs on existing user hardware with no additional gadgets; and it probably requires very little change on the bank's end, as they could keep their existing setup for customers who don't have or don't want the CD, but mark accounts that are tied to the CD GUID — all while you try to gain traction for the high-assurance overall system nationwide and worldwide.

"Btw: Acronis is a great product for backup and recovery, among my favorites. Its LiveCD is great too on many types of hardware. There's a reason though: Acronis' LiveCD is Windows, possibly Windows Embedded" …

I was always under the impression that my Acronis boot/recovery CD ran a Linux-based kernel. I just looked through the support docs, and they imply it, but not explicitly, by providing additional parameters for booting the Linux kernel. Their web site, for the latest version (mine is a few years old), says you can create either a Linux or Windows PE (not embedded) CD, and the advantage of Win PE is all the plug-ins you can add. I don't see any of those plug-in places in mine. I'd shut down and boot the Acronis just to check, but it's late, and as you said, it's slooow. 🙂 … I'll put it on the list for Wednesday and report back. Perhaps my memory is faulty.

Do you wish to comment on the idea of banks running their own VPNs, with tech support by contractors, and each customer having unique access controlled by the bank, very much as I control my Hamachi network — but with the banks hosting the server, because we don't want a middleman mediating the connections? Search this thread for RSH's comments on that, if you like. Or not.

Thanks for your always-insightful comments. “Something” good is going to come out of all of these ideas and counter-suggestions.

Andy June 23, 2011 3:46 AM

@tommy, for the LiveCD you could have two options: one where you put it in when the machine boots and it tries to get execution first, and one where you put the CD in while the computer is already running, whenever they want to do banking.
"Hack in the Box" had a comment about locking the OS away from programs, with segments or something (most malware would probably go for the highest privilege level). Or block access to a part of RAM, or maybe an alternate data stream (though there may be a tool that can read the data without knowing the name, to defeat it).
Probably not a good idea, but you could also get the bank to send information that makes the CD's data able to run, some kind of code modifier.

tommy June 23, 2011 3:51 AM

@ Nick P.:

"Btw: Acronis is a great product for backup and recovery, among my favorites. Its LiveCD is great too on many types of hardware. There's a reason though: Acronis' LiveCD is Windows, possibly Windows Embedded" …

The loader text went by too fast to read while it booted, so I searched the kernel.dat file with a free tool, Analog X Text Scan. It found the string, “Linux Version 2.6”, and did not find any occurrence of the string “Windows”.

Nick P June 23, 2011 3:20 PM

@ tommy

“The loader text went by too fast to read while it booted, so I searched the kernel.dat file with a free tool, Analog X Text Scan. It found the string, “Linux Version 2.6”, and did not find any occurrence of the string “Windows”. ”

They must have switched. Mine was certainly Windows, from appearance to how it loaded/functioned. I was guessing Windows Embedded but I forgot about WinPE. My CD used one of them. I guess they switched to Linux once its hardware support got really good. I’ll respond to your other posts later on today when I have free time.

Nick P June 23, 2011 10:55 PM

@ tommy

Alright, rather than responding to every point, I’m going to summarize the issues, what your design nullified, and what’s remaining.

  1. Attacks on active LiveCD. Browser-based attacks, DNS-based attacks, etc. A good SSL setup on a mandatory URL with a hardcoded cert should work, assuming it's all implemented correctly. Doing this essentially turns a general-purpose online system into a special-purpose one, reducing the attack surface.
  2. BIOS attacks. This is still a real threat because any real-time attack on the Linux TCB might be used to drop a BIOS rootkit. Good sandboxing and permissions might defeat this. The LOMAC and SMACK M.A.C. schemes come to mind.
  3. Hardware issues. Linux hardware support is better than ever, but if that fails, who knows what happens. A bank might standardize on a cheap netbook, keeping a working Linux install on it (or several banks rely on a 3rd party for this). The LiveCD software will be certified for these cheap platforms, which a customer can buy if they don't like hardware & performance issues.
  4. Slowness. The thing will still take time to boot. Reduction will help, but several seconds of waiting occur during BIOS initialization, and the CD must still be extracted. Best alternative is a netbook.
  5. Subversion. As I’ll probably not be the developer, this is still a consideration. Assuming I wrote the client software, there’s no guarantee that this is what’s on your CD or that the bank is following the standard protocol. An improvement would be having the files (or compressed filesystem image) hashed and signed by the developer. Then, any user could just mount the CD in a running system and check its files against the signature. Still must trust the developers and their repository.

  6. Convenience. It still isn’t. Users might not choose such a device. Banks could mandate it for online banking. This creates a potential loss of customers to a competing bank that claims its solution is secure & is more convenient. Users don’t know the difference. Convenience is a big issue for livecd-based approaches.

  7. Cost. This is the advantage and isn't security relevant, but I should mention it. The LiveCD costs much less to deploy than a secure appliance. Of course, hardware issues might necessitate spending $200 on a netbook or nettop. This is comparable to what I figured Mark's device would cost and what the lower assurance (maybe market entry) version of mine would. At this point, it's not competitive. It's still cheaper than high assurance but… of course it is. 😉 So, it's the best from a cost perspective if it works on the user's hardware.

  8. Cert management. I'm only half convinced on this. For the lockdown to work, it must be strict. Strict means that the certificate must absolutely be valid. Many web sites have a hard time with this. However, banks are a bit better than most & might do better if it were central to preventing massive fraud that they might be partly liable for.

  9. Javascript. Minimized risks. The primary issue comes from 3rd-party interactions with the bank: XSS. Might be necessary to include a bloated Firefox + NoScript on this thing, as noted below under integration.

  10. Integration with existing Windows payroll & finance apps. You kind of dodged this one, but it's important. Remember, our target market is mainly small businesses doing ACH and stuff. Many use proprietary applications dedicated to this or to accounting/finance in general. Loading data from such applications will be essential. The new solution doesn't address this. It can integrate with online applications, perhaps, by adding permissions. (Then, we open a can of worms with attacks like XSS. Can't use just a simplified browser or thin UI anymore.)

Best proposal would be extending those apps to export it in a simplified, easily validated format (like in my appliance). Then, the livecd would mount the HD, pull the file off it, parse/validate it, and proceed with the banking. This addition might be easy for apps that support plugins, but other proprietary apps might prove a challenge. This also could create additional risks, as the HD is untrusted & possibly maliciously altered.

The VPN idea requires more thought and consideration. It might be hard to give a judgement call on it because it will depend a lot on what technologies, topologies, etc. each bank network uses. I think a properly constructed, strict SSLv3/TLS connection should do. We've already discussed a user key pair and device key pair. If we used two tunnels, this might be conceptually easier: one strong tunnel using the device key pair between the PC and the bank. This forms the VPN. Then, the user's key pair (probably unlocked/decrypted with a password) is used to create another tunnel between the bank application server & the client software that simply maintains integrity and authenticates the user. No need for special segments and fancy stuff: SSL and SSL acceleration are already widely deployed in banks.
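
A rough sketch of the outer, device-authenticated tunnel described above, assuming the device key pair is packaged as an ordinary X.509 client certificate; the endpoint, file names, and inner exchange are all placeholders:

```python
import socket
import ssl

# Hypothetical endpoint and file names.
BANK_HOST = "gateway.bank.example"
BANK_PORT = 443

ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_cert_chain(certfile="device.crt", keyfile="device.key")  # outer, device tunnel

with socket.create_connection((BANK_HOST, BANK_PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=BANK_HOST) as tunnel:
        # Inner step (placeholder): the user's password-unlocked key pair would
        # sign a challenge from the application server inside this tunnel.
        tunnel.sendall(b"USER-AUTH: signed challenge would go here\n")
        reply = tunnel.recv(4096)
```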

As for the Acronis secure zone, I've always had my doubts about that. It relies on obscurity, runs on a possibly compromised machine, etc. I just feel shaky about it. I think we should treat the machine as totally untrusted & start the root of trust with the boot CD. If the HD were used, it would be for faster load time. In this case, the CD would load the filesystem image from the HD into memory, hash it, check the signature, and then boot as usual. If it didn't match, it would tell the user something was up and boot from CD instead. We must start with, and entirely rely on, the integrity of the CD-ROM/CD-R data to ensure we get the strengths of a LiveCD.
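
A minimal sketch of that hash-and-verify step, assuming the developer publishes a detached Ed25519 signature over the SHA-256 digest of the image; the key type, file names, and signing scheme are illustrative choices:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def image_is_trusted(image_path, sig_path, developer_pubkey_bytes):
    """Hash the filesystem image pulled off the HD and verify the developer's
    detached signature over that digest; fall back to the CD copy on failure."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    pubkey = Ed25519PublicKey.from_public_bytes(developer_pubkey_bytes)
    try:
        with open(sig_path, "rb") as f:
            pubkey.verify(f.read(), digest.digest())
        return True
    except InvalidSignature:
        return False
```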

And, btw, I had no idea about the OpenOffice thing. So, here I was using this “open-source” program all along and didn’t know it had a game built into it. Subversion is insidious. Fortunately, it was a harmless subversion by non-malicious developers. I don’t feel I made a bad judgment call because I trusted them not to intentionally harm my computer: I figured software glitches in Word conversion would do me in. So far, so… painful…

RobertT June 24, 2011 12:45 AM

@tommy
“If you can produce the high-assurance phone that overcomes the problems you yourself cite, then I’m sure we’d all love to see it –”

I’m probably the only person that reads this site that believes in the value of “security by obscurity”. Especially the value of obscurity in delaying the time till an effective attack develops.

Fundamentally my focus is not to plug every possible hole, but rather to present the attacker with a problem surface that is so convoluted and morphing, that hopefully they just give up. (or, if well funded, they spend 5 years figuring it out)

IMHO traditional hashes and cryptographic methods are already secure enough for "data at rest", so the work is needed to make these systems also secure for real-time tasks on processors with known side-channel leaks.

Securing real-time crypto involves preventing an attacker from knowing exactly what operations you are doing and exactly when the operations are happening. This requires closing doors for timing channels, and preventing all power analysis methods. Additionally, it helps if the computation section can cope with, and even introduce, ALU channel errors. I prefer to think of the crypto processor as a communications link and manage the link SNR to obscure information (think of it as a DSSS secure comms link operating 20 dB below the system noise floor). The link error correction section only needs to be understood by the chip, so proprietary FECs and spreading codes, and even sets of FECs, can be implemented to achieve a required processor error rate.

In my opinion this also means having sets of protocols rather than THE protocol. This is especially true for RF links, where security requires actively managing the link to fully utilize the Shannon-Hartley channel limit (e.g., sending the most critical data with the most complex modulation supportable, such as 100-channel OFDM with QAM-1024). MITM attacks are very difficult to implement on wideband RF links because the multipath model changes with link distance, and this change in the expected multipath model can be detected.

Anyway, I’ve probably left most readers scratching their heads, and wondering if I’m just a complete nutter. So I’ll close out this post.

Of course the real trick is to do this all in a small chip area and at the lowest possible power consumption and running compatible with legacy systems.

Mark Currie June 24, 2011 8:56 AM

@ Nick,

"7. Cost. This is the advantage and isn't security relevant, but I should mention it. The LiveCD costs much less to deploy than a secure appliance. Of course, hardware issues might necessitate spending $200 on a netbook or nettop. This is comparable to what I figured Mark's device would cost.."

The manufactured cost for my current memory stick version is around the $10 mark in volume so it’s not really fair to put it in the netbook cost category.

@RobertT

“Fundamentally my focus is not to plug every possible hole, but rather to present the attacker with a problem surface that is so convoluted and morphing, that hopefully they just give up. (or, if well funded, they spend 5 years figuring it out) “

I think that there used to be value in what you say, but today it's not always necessary to perform reverse-engineering. It can be done much quicker if you employ hackers of the recent ilk that are successfully compromising large "secure" organisations. Even on the reverse-engineering front – forget 5 years – a year or two ago a private individual broke arguably the most secure smart card chip in 6 months purely by reverse engineering the design and getting past its anti-tamper mechanisms. OK, he spent $100K on an old-generation Focussed Ion Beam (FIB) machine, but in the chip world his attack would be considered a limited-resources attack. There are also plenty of professional reverse engineering companies that have formidable resources, e.g. http://www.rawscience.co.uk

I agree with Nick on the multi-platform future but I also agree with you that the mobile platforms are very important. I don't think that it's a good idea to leave this to the mobile device manufacturers. Firstly, they don't have the long history with security that one needs to do this properly. Secondly, their new-model churn rate (I'm sure there's a better term) is too high, so they would not be able to get the high assurance certifications necessary for each new model. The main issue in the mobile world is interface standards. The Trusted Platform approach has no real value here either, as it does not specify a security chip with integrated user interfaces. A Bluetooth version of my gadget could also work, since you would not be relying on Bluetooth security for login and beneficiary confirmation, only for the general web page content.

@ general discussion

Don't just think about the banking scenario, but think about all HTTPS services. The HTML password field is very consistent. By picking up the password variable on the incoming web page, you can reliably filter on it in the outgoing request and substitute the real password for the dummy value entered by the user. I have proved this consistency already on several sites. Therefore, as I mentioned earlier, for pretty much all major HTTPS services you can get a hardware-based mutually authenticated secure login. I haven't tried recently, but as far as I know you should still be able to do a PayPal transaction using only this method without any special customisation.
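
As an illustration only (the field names and values are invented), the substitution step for a form-encoded login request might look something like this:

```python
from urllib.parse import parse_qsl, urlencode

def substitute_password(post_body, field_name, dummy_value, real_password):
    """Swap the dummy typed on the untrusted PC for the real password held
    only on the device, in an outgoing form-encoded request body."""
    pairs = parse_qsl(post_body, keep_blank_values=True)
    rewritten = [(k, real_password if k == field_name and v == dummy_value else v)
                 for k, v in pairs]
    return urlencode(rewritten)

# e.g. substitute_password("user=alice&pwd=DUMMY", "pwd", "DUMMY", "s3cret!")
#      -> "user=alice&pwd=s3cret%21"
```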

While online banking has always been my main goal, I wanted something that people could use now to protect their logins. The fact that it is also a useful (real) password safe, and easy enough to add secure memory stick storage, adds to the attraction, especially for cloud services. If you could build a reasonable user base around this, you would have no problem at all in convincing banks to play along; it costs them so little.

The way to get the banks involved, though, would be to get their business customers to use the device. Even one reasonably large customer would be enough.

Nick P June 24, 2011 1:54 PM

@ RobertT

“Anyway, I’ve probably left most readers scratching their heads, and wondering if I’m just a complete nutter.”

Just the first. I know enough about security engineering to understand, conceptually, what you're doing in some of the specific examples. But I lack the domain knowledge to understand it at any concrete level.

@ Mark

“The manufactured cost for my current memory stick version is around the $10 mark in volume so it’s not really fair to put it in the netbook cost category.”

I remember that your paper mentioned a trusted path on your device. The illustration showed a screen (LCD?) and a keypad. The original problem area we were discussing involved small businesses running batches of payroll transactions. To get real security, they need to see what they are signing on the secure device. A tiny LCD screen would cause a time consuming review. Strong or long passwords are also horrific to type on a primitive keyboard, necessitating a miniature qwerty or something like the old electronic organizers.

These requirements and issues led my transaction appliance to be a somewhat large device. A few inches of keyboard, maybe a four-plus-line screen, maybe foldable to save room. Anything less would be a real headache. A trusted path in your design that's good for things like payroll verification seems like it wouldn't fit on a USB stick and would cost more than $10. Just a hunch, but I can't mentally fit this in a USB form factor. 😉

Your substitution approach looks like it's fine for logins & very simple transactions. Things like payroll that deal with lots of transactions would require a whole lot of preset form data to be input into your little device. That data could change often, too, as well as the data fields. My setup just requires a file to be exported to a simple text file, parsed into the device, & displayed verbatim. Your substitution approach would require copying plenty of it by hand into prestored memory & configurations to match the form fields of the application. Thoughts on this?
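
To make the "simple text file, parsed and displayed verbatim" idea concrete, here's a sketch using an invented one-payment-per-line format; the field layout is purely illustrative:

```python
from decimal import Decimal, InvalidOperation

def parse_batch(text):
    """Parse an invented account|amount|memo export, one payment per line.
    Anything that fails validation aborts the whole batch, so the device never
    displays a record it did not fully understand."""
    payments = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue
        try:
            account, amount, memo = line.split("|")
            payments.append((account.strip(), Decimal(amount), memo.strip()))
        except (ValueError, InvalidOperation) as err:
            raise ValueError(f"line {lineno}: not a valid payment record") from err
    return payments  # shown verbatim on the trusted screen for approval
```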

“Therefore as I mentioned earlier, for pretty much all major HTTPS services you can get a hardware-based mutually authenticated secure login.”

I could see your device being useful for this. If it became widespread, you might be able to get a few big sites to adopt it as a de facto standard. More people would use it, more web sites would get on the bandwagon, and it could become universal. I could see it happening. Look at OpenID's success.

Note: As I was writing this, I noticed Bruce posted about a technology called AppFence that uses the substitution approach you're talking about (with some blocking too). I commented that they were about two years behind you guys. lol

RobertT June 24, 2011 8:15 PM

@Mark C
“Even on the reverse-engineering front – forget 5 years – a year or two ago a private individual broke arguably the most secure smart card chip in 6 months purely by reverse engineering the design ….”

I'm very familiar with the use of FIB machines to extract chip data; that is why most of my techniques require the system to work below the noise floor. It's not an idea that is understood by most digital monkeys, because with on-chip calculations you are typically nowhere near the noise floor (the signal level is, say, 1 V; the noise uncertainty is less than 0.1 V).

Reverse the situation, making the noise 1 V and the signal 0.1 V, and you have a signal hidden in 20 dB of noise. Now figure out how to accurately extract the signal and complete basic ALU operations, and you have some idea what I'm talking about. The noise must be fundamental to the ALU channel so that it can never be replaced by a known signal. This leaves the attacker with a VERY difficult observation and measurement problem.

Mark Currie June 25, 2011 12:58 AM

@ RobertT,

To be fair, we will always need to develop obfuscation techniques for the sake of tamper-proofing. Even in my design, if you store your passwords on the device and you lose the device, there is a time factor for thieves to get your passwords. However, one of my goals was to minimise this risk by not requiring the use of a private key and by recommending that your banking PIN be entered each time.

@ Nick P

Yes my device is designed for protecting entry of things like PINs/passwords, account numbers, ID/social security numbers etc. It was primarily designed for public use not specifically payroll.

However…

Here's an idea: what if you combined my device with your idea? You could get your signed file verified by the gadget before it sends it through the secured channel. That way the cryptography is localised. The bank does not need to do anything other than accept your data through the secured channel. The gadget only needs your public key. For this transaction it only needs to display go/no-go and handle a user accept/reject input.

So…

You have a separate old machine that has been checked for BIOS lockdown, hardware keyloggers, etc. (the IT guy does this when setting up for the accounts dept.). The old machine boots on tommy's LiveCD, which contains a private key. This CD is kept locked up normally. The accounts guy logs into the bank on a separate machine (his machine) using my device to get the secure login and secure channel. He then checks the payroll info on the other machine, signs it, stores it in a file and downloads it to my gadget. The gadget checks the signature and displays go/no-go. If go, the accounts guy accepts it on the gadget's input mechanism and the gadget sends it through the secured channel.

OK there are other details like the bank would still need to support the gadget in delivering the payroll info, but hey – this is a web page thing, not a CA bunker thing. What do you think?

tommy June 25, 2011 2:19 AM

@ Everyone:

Sorry, been tied up for a couple of days, but found this in my ordinary, insecure e-mail:


Dear (Customer name)
Is this email really from (bank)? How can you be sure? To help you fight fraud and verify the legitimacy of (bank) emails, we’re adding a personalized stamp to our emails (see top right corner of this page). The stamp … includes your first name, last name and the last four digits of your (bank) number.

Always verify the accuracy of this information

To make the most out of the (bank) Security Stamp, please verify the accuracy of this information every time you receive an email from (bank).

Our goal is to place this security stamp on all our email by the end of August. If you receive an email from (bank) after August without the security stamp, or if the information in the security stamp is incorrect, don’t click on any links in the email. Instead, visit (secure.bank).com to conduct your business.

Sincerely,
(name)
Vice President
(bank) Chief Privacy Officer


My reply, through their internal secure messaging system:

I received your e-mail about the (bank) Stamp authenticating your e-mails. As someone well-versed in security, this is totally useless and actually dangerous.

First, you must know that ordinary e-mail is not secure, and many people can see it as it travels the network. So it is trivially easy for someone to copy the stamp. I created a Word .doc with your stamp on it in about three minutes (attached), illustrating how trivial it is for an attacker to insert your stamp into his fraudulent message.

Even worse, the stamp gives additional information: The name on the account (which is not the same as the name on the email address), and four digits of the account number. That might not be enough by itself, but it’s additional info for someone attempting an attack.

What to do? STOP SENDING ORDINARY E-MAIL. If you have a time-sensitive, necessary message, send it via this secure system here, and just notify the above email account, “You have a message waiting at (bank). Please log in and view it.” — with no other identifying information, nor any links. Thus, a separate login here is required.

Even better, just call me at my phone number of record.

If your e-mail is not necessary to account function — if it’s to sell other products and services — then please don’t send it. I get quite enough spam already, thank you. If you’re not willing to spend a stamp and some printing, then it’s not important enough to send at all.

Please be assured that I know whereof I speak. Please feel free to have a member of your security team, or a member of your IT team who is well-versed in security, call me. Thank you.


If they reply, I’ll post it. Since this thread is no longer visible from the home page, I also sent it to Bruce in case he thinks it worthy of a new blog post, as yet another example of ineffective and leaky bank security theater.

Will catch up on the parts of the thread that I’ve missed, as soon as possible. Thanks to everyone for their feedback and input. Something good will come out of the mixing of ideas and minds.

Clive Robinson June 25, 2011 9:28 AM

@ RobertT,

“The noise must be fundamental to the ALU channel so that it can never be replaced by a known signal.”

Compared to making it tamper proof, making a good true random noise source (TRNS / TRNG) is easy, provided you accept a few limitations 😉

I used to design random sources with known characteristics, but I could always influence them somehow (a known signal on an RF signal coupled at an input or output connector was always good for this).

I'm not saying you cannot make a tamper-proof true random noise source, but it's not going to be cheap or convenient. In that respect TRNGs are a bit like designing good "clock sources". Both problems always remind me of the old design engineers' saying that "amplifiers do and oscillators don't".

Some years ago a favoured way to generate high-security true random noise was to use a low-noise receiver and a battery-driven "hot source", all in a "tin can". However, almost invariably the output was "tailored to suit", simply because long strings of ones, zeros or repeating strings etc. were "verboten"; oh, and invariably there would be some form of detectable bias that likewise had to be removed.

So at the end of the day you have to ask yourself a serious question,

"Do I want a true random noise source with its myriad of known and unknown faults, or do I need a deterministic noise source that is unpredictable to others?"

The former is a hard problem to solve, the latter is considerably easier if not trivial in comparison.

It's a question I've asked a number of times when it comes to high-volume Key Material production, and the answer almost always goes the way of a deterministic solution with something like AES in CTR mode and hash functions. You effectively have zero entropy, but it's effectively non-deterministic to an adversary, and that's what usually counts. Further, its output characteristics are usually more amenable to direct use.

Most importantly, unlike a "true" random noise source, it's very difficult to influence a deterministic noise source without it being fairly easily detected, if you can be bothered (and you should) to do it 😉
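
For illustration, a minimal sketch of the kind of deterministic generator Clive describes, assuming AES-256 in CTR mode via a recent version of the Python cryptography package; the domain-separation strings are arbitrary:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def keystream(seed: bytes, nbytes: int) -> bytes:
    """Deterministic 'noise': zero real entropy, but unpredictable to anyone
    who does not hold the seed. Key and counter are derived by hashing."""
    key = hashlib.sha256(b"key|" + seed).digest()            # 32-byte AES-256 key
    counter = hashlib.sha256(b"ctr|" + seed).digest()[:16]   # initial counter block
    enc = Cipher(algorithms.AES(key), modes.CTR(counter)).encryptor()
    return enc.update(b"\x00" * nbytes)  # encrypting zeros yields the raw keystream
```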

RobertT June 25, 2011 8:59 PM

@Clive R
For everything that I work on, I must assume that the attacker has physical possession of several of the systems they intend to crack. They can learn what they need from one or more systems before applying this knowledge to the real target system.

Tamper-proofing, in the sense of physical lock-out with thermite charges activated upon tampering, is not an option.

Exotic package materials that could cause all secrets to be destroyed (such as incorporating an acid module) are also impossible, although it is possible to use the surface of the chip (above the PO) to store secrets (which is the last that I will write on that point).

I can never rely on an auto-erase tamper protection because I can’t guarantee that power is always connected.

Tamper-proofing a chip with multiple layers of metal mesh is a complete joke and a waste of resources, because it assumes a top-side attack. Top side is the least likely attack method for a knowledgeable, well-funded attacker. These days the preferred attack vector is through the back of the chip, using appropriate high-NA optics and lasers (google PICA and chip failure analysis).

The real problem is that I don’t know how to do true homomorphic encryption. So I need to use added noise, complexity, confusion and obfuscation, along with systems that are homomorphic for addition. The purpose is to create a chip that requires a very wide range of skills and understanding before you can even attempt to hack it.

On the point of the TRNG, it is indeed a hard problem, but it is a problem made more difficult by the need that most encryption systems have to resolve the underlying randomness as a digitally sampled signal.
There are several on-chip noise sources that maintain randomness even in the presence of substrate signals and high electrical/magnetic fields, but it is difficult to extract this randomness without introducing power supply correlation (usually at the sample stage or the level-shift stage).
Most RF electrical fields couple into the chip through the power supplies, so internal decoupling of the RNG supplies is critical.

On the point of true random noise, I agree you need two RNG sources: one that is truly random and a second (uncorrelated) source that guarantees a maximum string length for ones/zeros (again, this is really a digital implementation issue).

tommy June 26, 2011 2:57 AM

MAKING A DIFFERENCE:

Re: my post above about bank “security stamp”: They notified me that they had replied in the secure messaging system. The reply, after duly logging in, was a bland, generic “Thank you, blah blah blah…” probably from an auto-reply bot.

But … the /notification/ in my ordinary email did not have the Security Stamp, just:

“We’ve posted a new message for you on bank.com. To protect you and your financial information, you can view and respond to your secure message on bank.com.” … exactly as suggested.

Sometimes we feel like we spend much of our time shouting at a wall, but every once in a while, we do make a difference.

Keep at it, people. 🙂

Nick P June 26, 2011 7:53 PM

@ Mark Currie

Although I’m still concerned about complexity, I do like your device for one important reason: it should be easy to get banks to support that. The main requirement to make your device simple enough to assure to high levels is a standardized form structure for banks. The device could be programmed with custom parsing rules (and a separate, untrusted parsing module) for non-compliant banks. However, banks can comply simply by changing a few lines of HTML in a way that will have little to no impact on their backend functionality.

My approach, although much higher assurance, requires a backend system to authenticate the transactions. This might come with significant upfront costs for both hardware and software re-engineering. Many customers, banks and I might think it’s worth it if the design existed and was marketed. However, I think the majority of banks are going to initially ignore a new scheme that requires significant effort on their part. They already have little liability thanks to the courts. The further reduction in liability must come at a proportionally low cost. My device’s main market would be banks who are branding themselves as more secure via superior protection methods.

So, we could start with your device to set up an initial tunnel. If the HTML format is standardized and simple, this can provide reasonable assurance (exceeding “commercially reasonable” lol). The part that confuses me a little bit is where the user downloads it onto your device. Since it’s just a USB stick with an LCD, this implies the user removes it from one PC and inserts it into the payroll PC to do the file transfer. The problem is that, as your paper indicates, the USB stick is maintaining the TLS connection. Removing it might disrupt the session somehow.

So, we have to have a way of doing the download. A few options come to mind: your device is physically altered to connect to both simultaneously; the scheme is altered so device removal doesn't disrupt the transaction; or the scheme is modified to allow a transfer from one PC to another, then to the device. The last seems easy to do with existing hardware. I'm focusing on it because I need to know more about your implementation to do No. 2.

The old PC might be connected to the network during the ACH transaction and set to send only outgoing UDP traffic (not even receive TCP acknowledgements). This means the user enters the untrusted PC's IP address. The signed data is sent to the untrusted PC, which relays it, and your device checks the signature. The signature prevents alterations. For reliability during transmission, we can just send each packet a few times, display a hash on both screens and let the user tell the trusted system things are fine or retransmit. This lets us get it onto your device without modifying your hardware. The device might also have restrictions like "only allow a download during a secure transaction", file size limits, and "only allow a download signed with X key."
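
A toy sketch of the trusted PC's side of that relay, assuming a fixed chunk size, a made-up header format, and a digest short enough to compare by eye:

```python
import hashlib
import socket

def send_batch(payload: bytes, relay_ip: str, relay_port: int, repeats: int = 3):
    """Outgoing-only UDP transfer from the trusted PC: each chunk is sent
    several times instead of waiting for acknowledgements, and a short digest
    is printed for the user to compare against the device's screen."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunks = [payload[i:i + 1024] for i in range(0, len(payload), 1024)]
    for n, chunk in enumerate(chunks):
        header = f"{n}/{len(chunks)}|".encode()
        for _ in range(repeats):
            sock.sendto(header + chunk, (relay_ip, relay_port))
    print("confirm on both screens:", hashlib.sha256(payload).hexdigest()[:16])
```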

(Note: The only time the trusted PC is connected to the Internet is to do updates. Its firewall settings are changed to allow only incoming traffic that's solicited by the PC. A LiveCD might not be possible if 3rd-party Windows applications are used. In that case, a hardened Windows installation is used.)

What are your thoughts?

(Btw: Who owns the rights to your device? Is it your intellectual property, your team's, the university's, or freely available?)

Nick P June 26, 2011 7:57 PM

@ tommy

Nice work with the bank. What we need to do now is figure out how to teach companies to only deploy security measures that have a chance of accomplishing something.

Mark Currie June 27, 2011 11:28 AM

“The problem is that, as your paper indicates, the USB stick is maintaining the TLS connection. Removing it might disrupt the session somehow.”

In an air-gap scenario, this can work in two ways. The file could be saved to the device from the secure PC using supporting software. Then the device is plugged into the un-trusted PC. The web session is set up, and the file is transferred when requested by the bank's web pages. In the second scenario, the file could be copied to a standard flash disk or CD on the secure PC and then copied to the device on the un-trusted PC via supporting software. In this case the device could also display some signed information about the file, like the date or a serial number, etc. This prevents a small replay threat.

Your UDP approach is a good idea and it could also work but perhaps it would be better to force the manual air-gap method.

WRT your question about the IP, it is mine. I have developed this concept over several years and I have patents at various stages. The reason I am now discussing it freely and feeling less proprietary about it is that it is a particularly difficult concept to attract investment around. It's not easy to convince an investor about the difference between this and the very numerous other authentication gadgets out there. Crypto vendors are built around corporate clients and support contracts, so the sales and marketing mindset is not really compatible with a client-oriented solution.

I don’t have the capacity to defend the patents anyway so it would probably be better for me to share ideas with others and try to find other applications that could get it off the ground. I am probably better off with some sort of open license and even open source model that might encourage others to include it in their designs. I think that it would be in the interests of anyone wanting to take it further to involve me in some way though since I have built up a large body of knowledge in this area and I am also developing new related ideas that I think will be valuable to its cause.

I think that the payroll application can work and it’s a good way to get a bank’s support for the general public use as well.

@ tommy,

You may have already discussed this previously but I just wanted add a point that I noticed when considering the LiveCD approach. The TLS protocol uses a random number source and I think that most OS implementations store a seed on disk that is used to accumulate entropy across future sessions. It is read sometime during PC start up I think. Since it always starts from the same point on a LiveCD I wondered whether this would weaken the security significantly? I haven’t delved into the source code of Linux’s implementation so I can’t really say.

Nick P June 27, 2011 12:59 PM

@ Mark Currie

I have a tendency to overlook the obvious sometimes. The first method you mentioned is probably the best: downloading the data to the device before putting it into the untrusted PC. This helps maintain an “air gap” and that is the best method.

It’s good that the I.P. is yours. That simplifies things. You should really consider releasing it in an open license in some way. Perhaps a start would be dual licensing it: free for personal, but not commercial, use. I’m thinking about building and deploying it. Marketing it is the hardest part, though. As for assurance, we could start the design with medium robustness on really cheap hardware. Then, future sales would fund a gradual increase of assurance.

I’ve also been thinking about using one of these EAL7 level smart card OS’s and just retargeting it for a more powerful processor. Specifically, MULTOS or Caernarvon. They usually have filesystems, signed app loading, etc. What do you think?

RobertT June 27, 2011 8:45 PM

@mark C

You have some interesting ideas here but I fear that you are targeting the wrong market sector.

I’d suggest you think about Systems on a Chip (SoC) ideas and what procurement process this drives for specialty (difficult to understand) IP.

Your device looks to me like it will sell into similar segments as RSA's systems. So although your solution might be very different, it will immediately attract their attention and a response from their legal team. They will make sure the FUD flows freely and the market sees your solution as a potential liability. Business-systems IP is a minefield, best not entered, especially if your strengths are technical rather than legal.

Second thought: focus on non-US and non-European markets. This lets you ignore the IP FUD. If you are not US-based and your products do not sell into the US, then there is very little that a US IP owner can do, so they will leave you alone. There is a huge market for secure payment systems outside of the developed countries.

Nick P June 28, 2011 3:17 AM

@ RobertT

The sad, sad truth. This and more legal mumbo jumbo is the reason I’ve intentionally shelved the majority of my designs. I simply can’t take the risk as an American citizen in an American market to deploy secure technologies that might accidentally step on some patents.

Did you know even Red-Black separation, as a concept, has been patented in the past? That’s utterly ridiculous. Aside from being obvious, it’s been done for decades now by all kinds of different groups. What crypto or guard can really claim it doesn’t implement Red-Black to some degree? Any could be sued. Patent reform is essential for innovation. Otherwise, we [American citizens] just sit on good ideas worried about getting sued, while the rest of the world leaves us behind.

RobertT June 28, 2011 5:38 AM

@NickP

I left the US partly because of this patent issue, and I really have not looked back. The world is much bigger than just the US. I can name at least 5 start-up companies (emerging from US entities) that are setting up their business activities in other countries just to get around the unintentional IP infringement issue. (Well, in one case it is intentional infringement, but a completely bogus patent, like your red-black separation.)

As you say, something needs to be done to maintain the innovation of US start-ups, and it is certainly not more patents. I believe IBM filed for 5000 patents last year. I haven't even attempted to read them, but I just know that the quality will be rather low. But legally each and every one of those patents has the same legal weight and affords the owner the same legal rights. It's crazy!

Nick P June 28, 2011 12:59 PM

@ RobertT

"I left the US partly because of this patent issue, and I really have not looked back."

I’ve been considering doing the same. Bangkok or Seoul, anyone? 😉 That 1Gbps broadband at $26 a month is awfully tempting.

“but legally each and every one of those patents has the same legal weight and affords the owner the same legal rights. It’s crazy!”

The crazier thing is that the legal standard of evidence required to dissolve a patent is higher than almost any other situation. It’s “clear and convincing” instead of beyond a reasonable doubt. You can be convicted of rape and murder with weaker evidence than it takes to get rid of a bogus patent. I think the supreme court also recently upheld that standard of evidence.

So, they can say any superfluous thing to prove they own an idea, but the opposition is required to present nearly perfect, doubt-free arguments to prove they don’t own it. Not fair in the least. They need to change the standard of proof so people can challenge bogus patents in a fair way.

Mark Currie June 28, 2011 4:58 PM

When I said defend my patent I really meant defend against infringement (sorry about that). Of course I may also not be able to afford to defend infringement claims against me. However when you defend against a claimant, all your informal descriptions as well as all other prior art can be used as defence. Whereas when you are the claimant, you can only sue on infringement of your specific formal patent claims. The courts also tend to show lenience towards the smaller party.

RobertT is right that you are more likely to be sued in the US, but I have been through what I believe to be all the US prior art and I remain confident. However, I wouldn't worry too much about this. If you are a small company and you happen to get sued by a large company, then it's probably because you have struck on some real business value. As the small guy you will score anyway, since the claimant would probably prefer to negotiate a deal with you in order to get into your customer base. He might even offer to buy you out.

Nick P June 29, 2011 2:59 AM

@ Mark Currie

That was essentially the point we were making. If we had a real competitive chance, the big companies would take notice at the potential loss of profits. If any had similar patents, then they could sue us. The successes of Intellectual Ventures make me have a hard time putting faith in the potential success of informal arguments or prior art claims. It’s not surprising to find patents like this:

https://w2.eff.org/patent/wanted/patent.php?p=acceris

This patent gives the owner the right to claim any VOIP service is an infringement. Fortunately, EFF got a reexamination approved. Will we be so lucky with whatever we get hit with? Not sure. If I do this design, I’ll probably take Robert’s advice and do it in another country. Maybe give ownership of the IP to a trustworthy non-profit group and get our companies an exclusive license of it for implementation. There’s a good chance, though, that it might not survive in the US long.

Even so, I might still be willing to invest in the product if the breakeven period is rather quick. Then, we might be forced into paying licenses or doing a buyout instead of being put into bankruptcy. They'd always rather increase their cash than their accounts receivable. 😉 I said "forced" because my secure designs can't be "bought": too many have been shelved by COTS firms. You could say I have an ideological, rather than professional, interest in designing and deploying secure systems.

Mark Currie June 29, 2011 3:01 AM

@ Nick P

The high assurance OS route is not a bad choice.

I would be quite happy to extend a free license for personal use. If you were able to attract a viable market there is no way that I would start getting stingy on a commercial licence.

Given that the service provider investment is so small, it might only take one viable application to launch the start of the big one. If that ever happened, then anyone involved in this would benefit, of that I am certain.

@ RobertT

The SOC route goes without saying but you will always need a tightly coupled user interface and this is what makes it difficult to come up with something like a licensable core. Of course the ARM core business model is the envy of all chip designers, but I think that there is little chance of something like that here. Besides I really think that the supply of security gadgets must be governed by policies developed by experienced security vendors. As soon as you start to bring in too many business optimisations you run into the kind of problems that RSA have. I think that you are right about US companies spreading FUD but I doubt whether RSA would be one of them. They are probably in the worst position to be spreading FUD right now.

BTW I like what you do. Do you ever get involved in developing PUF's? I think that innovations in this area are valuable. This has probably been discussed before here, but a big problem facing the large-scale production of crypto hardware is the trusted supply chain problem. PUF's have been associated with helping to solve this problem, but I don't think that there has been enough emphasis on this.

Nick P June 29, 2011 3:21 AM

@ Mark Currie

I agree that one big company going with it may be what it takes. Might even let the first company have it for free if they don’t disclose that. 😉

“Do you ever get involved in developing PUF’s ? I think that innovations in this area are valuable. This has probably been discussed before here…”

Actually, you should google "schneier" and "PUF." There have been quite a few interesting discussions of them. The biggest one involved a magnetic stripe technology that was based on PUF's. Clive was quite vocal against the implementation, believing certain techniques could defeat it. This topic gets quite a mixed rap here. Not being a hardware guy, I stay off it because I know when I'm dealing with issues beyond my knowledge. I don't remember RobertT's stance on it, if he ever pushed one. If I have a stance, I'm very suspicious of PUF's. The reason? See biometrics' promises vs. delivered results. Biometrics and PUF's are very similar in the abstract.

As for the trusted supply chain problem, yes that’s a huge problem. I don’t know if PUF’s will cover it. It’s why I pick an attacker’s enemies to make the device. For instance, if foreign spies are the issue, a DOD certified fab in the states might be good for subversion resistance. If the US govt is the issue, a neutral (gotta be careful there) or anti-US firm might do. I also try to use at least three suppliers with randomized ordering to reduce likelihood of compromise. Random sampling of a percentage of units and testing for anomalous circuitry might help as well.

RobertT June 29, 2011 5:38 AM

@Mark Currie
“Do you ever get involved in developing PUF’s ? ”

Yes, but the PUF's that I'm developing are different from the typical butterfly cell or default state of an SRAM array that most people call PUF's.

I have developed and characterized 3 different PUF structures. At the moment I’m not willing to reveal any details of how they work. The underlying physics for my devices is also not widely known, so I prefer secrecy to IP protection with competitor education.

The other important thing about secrecy is that until someone else's PUF's are shown to be weak, I don't have anything to sell. And I certainly don't want to suggest a small variation on a structure that is already proven weak.

One PUF that I have rejected will give you an idea of the effects that I'm using. This cell was for a standard CMOS process but looks similar to a (4-bit-per-cell) SONOS Flash device, except that the exact position of hot-electron damage (and oxide trapping) can be inferred by comparing the forward and reverse subthreshold performance of the cell. (If you're into device physics, then think about how the charge trapping modifies the peak electric field in the drain region.) This makes the device very difficult to fake.

Regarding security of the fab chain, I've spoken about this before: I believe the only security rests in making sure that any compromise will destroy the basic function of the cell. To do this I use fully differential cell and logic circuitry. I also don't add any additional gates, so good luck resolving fully differential signals when all you have are single-ended logic cells.

tommy June 30, 2011 1:00 AM

@ Mark Currie:

“You may have already discussed this previously but I just wanted add a point that I noticed when considering the LiveCD approach. The TLS protocol uses a random number source and I think that most OS implementations store a seed on disk that is used to accumulate entropy across future sessions. It is read sometime during PC start up I think. Since it always starts from the same point on a LiveCD I wondered whether this would weaken the security significantly? I haven’t delved into the source code of Linux’s implementation so I can’t really say.”

I'm not a cryptogeek per se, but it would be possible to gather fresh entropy every time the Live CD is booted. To cite the simplest possible example, TrueCrypt will tell you to move the mouse randomly for 30 seconds or so, continuously generating x,y pairs. I've never looked at the algorithm, but some subset of those relatively random pairs is processed in some manner to generate a fresh random number every time you do the mouse thing. I doubt one could accidentally or deliberately duplicate the same pattern, even if trying, and I would assume that adjacent pairs aren't used, e.g., take every nth pair and feed it into the PRNG, and even let n vary during the process. Should be pretty random, no?
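
A small sketch of that idea, assuming a fixed stride and mixing in OS randomness as well, so a predictable mouse path alone cannot weaken the seed; the stride and hash choice are arbitrary:

```python
import hashlib
import os

def seed_from_mouse(xy_pairs, stride=7):
    """Fold every nth mouse coordinate pair into a hash pool, then mix in OS
    randomness so a predictable mouse path alone cannot weaken the seed."""
    pool = hashlib.sha512()
    for i, (x, y) in enumerate(xy_pairs):
        if i % stride == 0:
            pool.update(x.to_bytes(4, "little") + y.to_bytes(4, "little"))
    pool.update(os.urandom(32))
    return pool.digest()  # 64-byte seed for the session PRNG
```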

Lots of other sources of environmental entropy, but don’t want to sidetrack to that, as I’m sure that topic has been covered well over the years.

@ Robert T., Mark Currie:

I think RSA themselves might be interested in licensing Mark's technology, if convinced that it's more secure than what they have now. It offers them a comeback from disaster: "New! Improved!" (the two favorite words in marketing). And such companies have the muscle (legal staff and funding) to guard the patents they licensed from you and ensure they're not infringing, addressing Nick P.'s concerns about stepping on others' toes. Innovation comes from individuals and small groups; mass marketing can come from a successful startup, but a large existing corporation has a huge advantage.

@ Nick P.:

“Nice work with the bank. What we need to do now is figure out how to teach companies to only deploy security measures that have a chance of accomplishing something.”

Thanks. But take out the “only”:
Not ” teach companies to only deploy security measures that have a chance of accomplishing something”
But just: ” teach companies to deploy security measures that have a chance of accomplishing something.”
Which, as we all know, is an as-yet-unsolved problem. Strict liability, government regulation, etc… it's a whole 'nother topic.

“we could start the design with medium robustness on really cheap hardware. Then, future sales would fund a gradual increase of assurance.”

Permit me to mention that I do have a background in sales, and that the company for whom I once worked paid me very well to teach the rookies. The problem is that if you tell the bank, “This isn’t a complete solution, but it’s better than what you have now”, they’ll think, “He’s prevented some problems, but the others can still bite us. Why spend more money, training time, customer education, etc., when the Real Thing is just going to change it all again? Come back when you have your air-tight product.”

So I would suggest marketing only the complete high-assurance solution, whatever it is. Imagine your medium-robustness product somehow being attacked. All credibility is out the window for a long time. (RSA?)

“It’s “clear and convincing” instead of beyond a reasonable doubt. You can be convicted of rape and murder with weaker evidence than it takes to get rid of a bogus patent.”

IANAL, but I have a fair amount of "street cred" in some legal areas. I think you were using hyperbole for effect, but to be clear, the standard of proof for rape, murder, shoplifting, or any other criminal offense is "beyond and to the exclusion of every reasonable doubt". As I write this, the conviction of Casey Anthony for the alleged murder of her daughter is still very uncertain. O. J. Simpson was acquitted of the criminal charge of murder, but found civilly liable for "wrongful death", due to the lower standard of proof in civil vs. criminal cases. It may be true that patent cases are the hardest civil cases, but "clear and convincing" is still well below the criminal standard.

@ Nick P. especially, but also Robert, Mark, Richard, Clive:

I appreciate the thorough analysis of the Live CD idea. The consensus seems to be that it’s either not useful, or could be partially useful in some ways. I don’t really have anything further to add, since the discussion seems to be going in other directions, but at least it seems to have stimulated some thinking, which is good enough. Thanks to all for that.

It seems the VPN idea brought up some ideas, then was dropped. Not sure of the status of that, but it could be a near-zero-cost idea for the home user, running on familiar platforms (big plus for home users). The problems of untrusted PCs will always be there, until HA becomes the norm, but VPN eliminates some problems and reduces others. So it’s a large gain, using the extra lockdowns mentioned in this thread.

Side note: When I configured Grandpa's machine to accept a remote-admin connection only from my static 5. VPN address, I also set a specific port number in the five-figure range. So the attacker has to spoof my 5. IP, steal the passwords (two separate ones), and get or guess a port number between, say, 10,000 and 60,000+. The bank could do the same for each customer: randomly pick a port for VPN connections from each. Since Gramps has (very) limited privileges on his own machine – I'm the Admin, pw-protected – the attacker may have to compromise me to succeed. (These settings are only changeable with Admin privilege, so compromising Gramps is of no use without also getting a privilege escalation, which admittedly is possible.)
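
For illustration, the source-address and port restriction amounts to something like the following, where the VPN address, port, and admin handler are all made up:

```python
import socket

ALLOWED_SOURCE = "5.1.2.3"   # made-up Hamachi-style VPN address of the admin
LISTEN_PORT = 47213          # made-up five-figure port, unique per customer

def handle_admin_session(conn):
    # Placeholder for the real remote-admin protocol (still password-protected).
    conn.sendall(b"admin session would start here\n")
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen(1)
while True:
    conn, (peer_ip, _) = srv.accept()
    if peer_ip != ALLOWED_SOURCE:   # drop anything not from the known VPN address
        conn.close()
        continue
    handle_admin_session(conn)
```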

Which means, compromising the one who knows more about security and has the far-more-hardened machine. (Gramps still needs convenience and no need for the user to make choices.) Yes, mine is Windows, but not your average Windows — Nick P. will vouch for that — plus additional defense-in-depth. So, substitute “Bank” for “me” and “Customer” for “Gramps”. Home users are usually Admin, but it’s still an improvement, no?

For business customers, buying a few extra machines for banking-only is a trivial expense, compared to potential losses. Let those machines do nothing but the VPN to the bank, from their assigned static 5. or 10. addresses — you get the picture. Then if the payroll files are sanitized and validated by the various ideas proposed here, and air-gapped over to the VPN machine … Worth pursuing that avenue any further? If not, cool.

Great discussion!

(O. J. criminal-vs.-civil satire:)
http://www.amiright.com/parody/60s/johnnycrawford0.shtml

Nick P July 3, 2011 12:38 AM

@ tommy

“It seems the VPN idea brought up some ideas, then was dropped. ”

I figure I should respond to this. I think all of us are trying to keep the security baked-in at the protocol level and close to the application. This is higher than where most VPN’s operate (Ethernet or IP layers). You could say our scheme’s more akin to SSL. Having it application directed has some advantages: easier to verify proper functioning; can be tuned to needs of application; entire network infrastructure can be untrusted.

I didn't use LogMeIn Hamachi because I didn't trust them. I think their IP addressing scheme is novel, even brilliant. It might pose a risk in enterprises, because those routers might assume that address space isn't used and be ignoring, using or modifying such packets in some way. Pure speculation, but I don't like unknowns. In any case, the idea is nice, but my main reason for not talking about it is that I'd have to thoroughly analyze it. I just don't have time recently to do that. Maybe in the future. 🙂

tommy July 4, 2011 4:29 AM

@ Nick P.:

Thanks for the response.

You don’t need to respond further, but I wanted to be sure that it was clear that I wasn’t advocating Hamachi itself for the banks. Only that the same type of technology, with the bank itself doing the mediation and hosting the server, using the 5. and 10. spaces, with AES-256 as Hamachi does, and the bank controlling all access to the network, could be a workable way to improve online bank security /now/, with existing technology, at relatively low cost.

I added the personal details of remote-admin for Grandpa just to show the kinds of additional security that can be added, such as randomized port numbers unique to each bank customer.

Hamachi isn’t the last word. You might look at their model some day and come up with something better. Or not. Just trying to prevent bank-phishing and the kind of unauthorized access that was the subject of the OP. Even though the thieves stole the login creds, they wouldn’t get access unless they could somehow spoof this non-public IP address of the customer, etc.

If this is something you’ve never explored, and have no time to explore now, cool. Richard Steven Hack said he’s already used Hamachi. So he could probably find ways to lock down their system further with his own (bank’s own) hosting. Maybe some day, you’ll have a chance to look the idea over. I appreciate the different-level approach, but (rhetorically) should we eliminate any level of approach that could be a vast improvement over the present, and doable now? Cheers.

spaky July 7, 2011 5:14 PM

@tommy – I believe IronKey has a product which tries to emulate this to some degree.

tommy July 7, 2011 8:00 PM

@ spaky:

“I believe IronKey has a product which tries to emulate this to some degree.”

Sorry, there have been so many things discussed in this thread, but — tries to emulate what, specifically? I’m somewhat familiar with IronKey, but wasn’t sure what you were saying it emulated. Thanks.
