Schneier on Security
A blog covering security and security technology.
June 1, 2012
Friday Squid Blogging: Mimicking Squid Camouflage
Cephalopods - squid, cuttlefish and octopuses - change colour by using tiny muscles in their skins to stretch out small sacs of black colouration.
These sacs are located in the animal's skin cells, and when a cell is ready to change colour, the brain sends a signal to the muscles and they contract.
This makes the sacs expand and creates the optical effect which makes the animal look like it is changing colour.
To mimic these natural mechanisms, the team used "smart" electro-active polymeric materials, connected to an electric circuit.
When a voltage was applied, the materials contracted; they returned to their original shape when they were short-circuited.
"These artificial muscles can replicate the [natural] muscular action… and can have strong visual effects," said Dr Rossiter.
"These materials, and this approach, is ideal for making smart colour-changing skins or soft devices in which fluid is pumped from one place to another."
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Posted on June 1, 2012 at 4:40 PM
Ladies and Gentlemen,
I have developed a new approach to password entry which I would like to offer for your consideration. It is called "CerebraLock" and is based on the categorization of items (text or images) into two categories: known and unknown. An access procedure requires you to categorize these items to gain access - only *you* can do this, since the categorization is only stored in your brain.
It has a one-time setup which involves some effort, but should from then on alleviate a lot of the problems inherent in the current user name / password scheme such as keylogging, phishing etc. There is nothing to memorize and no passwords to change regularly.
I have tied this together with encryption keys such that only the correct access procedure 're-assembles' a usable private key; however, this is secondary to the authentication method.
I am by no means a security expert; therefore I would greatly appreciate an evaluation and feedback by the readers of this blog (or even the Man himself :-) Lacking a forum, I have set up a Facebook page - admittedly not the place to go for people concerned about security - as a place to comment.
Please visit the website: http://bitSplit-enterprises.com/HiVaultage.html
It has software for download (Macintosh only, so far), tutorial videos and other documentation.
I wish to offer this as a proposal for a way out of the authentication mess we're all in and believe it may have merit and great potential beyond the initial implementation. But I need your help.
Thank you very much!
Interesting concept. A few thoughts:
I can think of several ways in which this could be successfully sniffed - one problem is that once broken, the seed images all need to be changed.
It might be difficult to generate enough distinguishable images that are hard for a third party to spot familiar themes/similarities in, yet easy for the first party to recognize.
Transmitting the images (in the case of a remote application) has a higher bandwidth cost.
Wouldn't there be a problem if the user forgets an image or remembers a new one?
Have the Diablo III issues really not been mentioned in any of the open threads yet? Okay, then, here's a rundown:
1) We're talking about a hugely anticipated computer game that is "protected" from piracy through a DRM setup that requires a connection to the publisher's server every time it's played, even in single-player mode. The result on debut night was... entirely predictable.
2) Now that the server issues have been mostly sorted out and people are actually playing the game, there seem to be systematic attacks on accounts, stripping player characters of their accumulated stuff. The publisher has both hardware and software tokens available to try to stop hacking and fraud, but even people using those are reporting problems.
3) As the second story notes, there's an upcoming "real-money auction house" meant to allow players to sell items to each other in-game for real-world money (with Blizzard taking a cut). Meaning those accounts in 2) which are apparently being so easily compromised will soon be linked up with real-world financial information. Opening day for that feature has been pushed back twice so far, hopefully to address security concerns.
That website trips almost every red flag I have. There's no description of the algorithm, you're a self-professed non-expert in the subject, the site apparently claims the impossible (preventing access without any secrets), etc. Also, I'm not familiar with precisely what Apple computers offer in terms of security, but I fail to see how your software is immune to reading the process's memory. Likewise, a MitM attack would probably be effective against it.
@Fred P, Fishbot
thanks for the feedback.
I had a very hard time making this whole concept understandable to complete laypersons, so I focused on that. I will add a section that goes much more into the internal workings. Please bear with me. Till then (if you remain interested), please check the website for news.
As to "claiming the impossible" - the secret is the categorization of the items. That (ideally) can only be done by you - if you pick the items carefully (e.g. pictures or names of old girlfriends, teachers, favorite hangouts; anything only you would know - and also never forget!)
If you pick 50 to 100 of these then you can generate a decent amount of 8-item screens that contain a mix of known and unknown items and each sequence of screens is a different "password".
You never identify the item category directly but indirectly, by counting known items. So the solution to a screen is, for example, 2. Even if I display this screen again (and, since I draw from a large pool of them, that may take a number of access procedures), the items will be arranged differently and not easily recognizable as the same screen, but the solution will still be 2.
A server would store the screens with encoded solutions. If the server were compromised (and the data successfully deciphered), it would still reveal only the sequences and their solutions, but not the categorization of the items. The question is whether one could somehow derive the categorizations from that, a la MasterMind (the game). Only then would the secret be known by someone else.
Fishbot, what would you read from memory? If you refer to the private key once it's re-assembled, yes you'd be done for. But isn't this *always* the case? If you refer to the solutions to the entry screens: see above; it may take a while for the exact sequence to repeat.
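To make the counting idea concrete, here's a minimal sketch of my understanding of the scheme as described above — all names and items are hypothetical examples, not anything from the actual HiVaultage software:

```python
import random

# The user's private categorization (stored only "in the brain"):
KNOWN = {"Ms. Baker", "Lakeside Diner", "Rex"}
# The full item pool mixes known and unknown items:
POOL = KNOWN | {"Mr. Green", "Elm Cafe", "Fido", "Oak St",
                "Pier 9", "Ada", "Blue Bar"}

def make_screen(pool, size=8):
    """Draw a random screen of items; the arrangement differs each time."""
    return random.sample(sorted(pool), size)

def solution(screen, known):
    """The per-screen 'password' is just the count of known items."""
    return sum(1 for item in screen if item in known)

screen = make_screen(POOL)
print(screen, "->", solution(screen, KNOWN))
```

The point being: the server can store screens and encoded solutions, but never the categorization itself, and the same screen reshuffled still has the same numeric solution.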
I appreciate you all giving some thought to this; I won't be able to respond to each and every concern and it was not my intention to make *this* the discussion forum. Again, everyone, thanks for bearing with me and not dismissing me out of hand.
"Abstract—This paper is a short summary of a real world AES key extraction performed on a military grade FPGA marketed as 'virtually unbreakable' and 'highly secure'. We demonstrated that it is possible to extract the AES key from the Actel/Microsemi ProASIC3 chip in a time of 0.01 seconds using a new side-channel analysis technique called Pipeline Emission Analysis (PEA).
This new technique does not introduce a new form of side-channel attacks (SCA), it introduces a substantially improved method of waveform analysis over conventional attack technology. It could be used to improve upon the speed at which all SCA can be performed, on any device and especially against devices previously thought to be unfeasible to break because of the time and equipment cost. Possessing the AES key for the ProASIC3 would allow an attacker to decrypt the bitstream or authenticate himself as a legitimate user and extract the bitstream from the device where no read back facility exists.
This means the device is wide open to intellectual property theft, fraud and reverse engineering of the design to allow the introduction of a backdoor or Trojan. We show that with a very low cost hardware setup made with parts obtained from a local electronics distributor you can improve upon existing SCA up to a factor of x1,000,000 in time and at a fraction of the cost of existing SCA equipment."
For a host of reasons I'm dubious of the legality of Blizzard's "real money" AH. They are pushing the legal envelope with that project.
The security issues are probably overstated. IMO Blizzard has one of the best security groups in the gaming business. To the best of my knowledge there has never been a single verified instance where a Blizzard account has been compromised by a hostile party when two-factor identification was used appropriately by the target.
The article you linked to is just whining. It has no point other than that the author is a dumb player. "Yes, they have an optional “authenticator” which sends a secret code to your phone to login like you’re working at the CIA, but who thought you’d need a security measure like that?"
What part of 10 million customers would make one think Blizzard would not be a tempting target? Really, the guy should just go...die...or something. Two-factor identification is only for the CIA? WTH. The hardest part about the internet is that it never stops an idiot from offering an opinion.
@bitSplit am I to understand that a hacker would stand a one-in-eight chance of guessing the password?
Mentioned by Richard Stallman was an article about cities using acoustic sensors (similar to microphones?) to triangulate the locations of gunshots. Among other concerns were whether the sensors could pick up other noises (in one case, audio from a loud argument outside was picked up) and whether that could go against someone's "expectation of privacy." The question could be asked as to whether the sensors could be designed to be sensitive only to gunshot-type sounds.
Sorry, the basic idea is a very old one (pre-MS Windows), and as far as I'm aware it originally involved nine photographs put up in a random pattern (at least it was nine in the version I saw). One of the photographs had significance to the user (i.e. photos of people, the significance being friend/enemy etc). The user pressed a number key on the numeric keypad to select the significant image.
Then another nine images were presented for the same process, and this was continued until the required security margin was achieved (i.e. 5 sets for roughly 1 in 59,000).
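The arithmetic behind that security margin is simple: each nine-image round multiplies the guess space by nine, so five rounds give 9^5 possibilities:

```python
rounds = 5
choices_per_round = 9          # one of nine photographs selected per round
total = choices_per_round ** rounds
print(total)                   # 59049, i.e. roughly "1 in 59,000"
```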
I've described the system several times before on this blog, including the suggestion of using different categories of the user's preference. However, all these systems have a number of major flaws that neither the server operators nor the users will tolerate:
1, The vast amount of server storage required for more than a handful of users.
2, The waste of valuable server bandwidth.
3, The slow time to load pages and thus long login times.
4, The user has to use a graphical display.
5, That it's not a "password" system in the traditional sense.
Whilst the first three have improved over the years, the system still does not scale well and will always be slow compared to typing a password. And the last two issues have such inertia in the computing community they might as well have been handed down as the eleventh and twelfth commandments to Moses...
The reality is "passwords" and "passphrases" are here to stay, as they are in effect the lowest common denominator: they are well understood, easy to implement, a requirement on the auditor's checklist, and fully engraved in (ab)users' brains...
Most sysadmins really don't care how good or bad the technical aspects of resource access are; they know from long experience (ab)users will do exactly what they please, and irrespective of the actual system in use, "password resetting" is as much a part of an (ab)user's life as getting their skinny latte etc from the coffee shop on the corner. Thus the admin wants the simplest possible system....
If you know how to get past all these hurdles then you have a vague chance of getting the system accepted; if you can't, then the last 50 years or so of computing history tells you that your time would be better spent going out and enjoying yourself.
I concur with Clive Robinson. Basically, authentication is based on:
1- something you know (password / pass phrase)
2- something you have; a token, for example
3- something you are; biometrics ( finger print, retina scan, ...)
4- what device you are on; device (not user) authentication
5- your location; geo-fencing for example.
Or a combination of more than one of the above (multi-factor authentication). What you essentially built is a "something you know" type of authentication. To show its value, you will have to demonstrate how it is better than a username / password type authentication - AKA weak authentication. "Better" is intentionally left fuzzy.
@ Clive Robinson,
I recently read a comment you made that security is inversely proportional to efficiency (I am paraphrasing). Somehow I am not convinced. Would not this solution by bitSplit and your comments on that negate your stance? When you have the chance, I would appreciate an elaboration.
Armchair philosopher reporting in.
I wonder, could perhaps superfluous bits in the long IPv6 address be used to create a unique cryptosystem for use within the constitutive hardware inside a nation state? (any state) Not relying on any cipher, but relying on a dynamic, ever changing, yet unstoppable world of network traffic, under an administration/regime with standards, protocols and, more importantly, owned hardware restricted to the area within a nation state. I guess I imagine some kind of system that would not only allow for single end-to-end authentications, but for layered or unlimited instances of authentications as the traffic is funneled through a network, perhaps more than once, or as a result of traffic being forced to branch and be broken apart over a network.
Unless I have misunderstood completely, I believe there is already an issue with something called "covert channels" with regard to IPv6. I simply imagined that having ownership of major hardware parts, or at least owning important parts of a network, it could be thought to mix with aspirations for controlling and manipulating all the traffic going through the network; and in this way somehow create a functional cryptosystem that could be immune to interference, given the limitations imposed on anyone or anything thus never being able to monitor in real time the entire traffic going through the network and with this limitation being prevented from interfering or ever accumulate enough information to make any useful analysis.
Perhaps military forces around the world already use some kind of key management system similar to this, which I had vaguely in mind?
Hmm, heh, I guess any random generation of ip's would make my notions of all this fairly meaningless, as there would not be an all encompassing regime of ip number generation as I imagined anyway.
"Everything we see has some hidden message. A lot of awful messages are coming in under the radar - subliminal consumer messages, all kinds of politically incorrect messages..."
---- Harold Ramis
“RFID in School Shirts must be trial run”
The trial runs began a LONG time ago!
We’re way past that process.
Now we’re in the portion of the game where they will try and BRAINWASH us into accepting these things because not everyone BROADCASTS themselves on and offline, so RFID tracking will NEED to be EVERYWHERE, eventually.
RFID is employed in MANY areas of society. RFID is used to TRACK their livestock (humans) in:
* 1. A lot of BANK’s ATM & DEBIT cards (easily cloned and tracked)
* 2. Subway, rail, bus, other mass transit passes (all of your daily
activities, where you go, are being recorded in many ways)
* 3. A lot of RETAIL stores’ goods
* 4. Corporate slaves (in badges, tags, etc)
and many more ways!
Search the web about RFID and look at the pictures of various RFID devices, they’re not all the same in form or function! When you see how tiny some of them are, you’ll be amazed! Search for GPS tracking and devices, too along with the more obscured:
- FM Fingerprinting &
tracking methods! Let’s not forget the LIQUIDS at their disposal which can be sprayed on you and/or your devices/clothing and TRACKED, similar to STASI methods of tracking their livestock (humans).
Visit David Icke’s and Prison Planet’s discussion forums and VC’s discussion forums and READ the threads about RFID and electronic tagging, PARTICIPATE in discussions. SHARE what you know with others!
These TRACKING technologies, on and off the net are being THROWN at us by the MEDIA, just as cigarettes and alcohol have and continue to be, though the former less than they used to. The effort to get you to join FACEBOOK and TWITTER, for example, is EVERYWHERE.
Maybe, you think, you’ll join FACEBOOK or TWITTER with an innocent reason, in part perhaps because your family, friends, business partners, college ties want or need you. Then it’ll start with one photo of yourself or you in a group, then another, then another, and pretty soon you are telling STRANGERS as far away as NIGERIA with scammers reading and archiving your PERSONAL LIFE and many of these CRIMINALS have the MEANS and MOTIVES to use it how they please.
One family was astonished to discover a photo of theirs was being used in an ADVERTISEMENT (on one of those BILLBOARDS you pass by on the road) in ANOTHER COUNTRY! There are other stories. I’ve witnessed people posting their photo in social networking sites, only to have others who dis/like them COPY the photo and use it for THEIR photo! It’s a complete mess.
The whole GAME stretches much farther than the simple RFID device(s), but how far are you willing to READ about these types of intrusive technologies? If you’ve heard, Wikileaks exposed corporations selling SPYWARE in software and hardware form to GOVERNMENTS!
You have to wonder, “Will my anti-malware program actually DISCOVER government controlled malware? Or has it been WHITELISTED? or obscured to the point where it cannot be detected? Does it carve a nest for itself in your hardware devices’ FIRMWARE, what about your BIOS?
Has your graphics card been poisoned, too?” No anti virus programs scan your FIRMWARE on your devices, especially not your ROUTERS which often contain commercially rubber stamped approval of BACKDOORS for certain organizations which hackers may be exploiting right now! Search on the web for CISCO routers and BACKDOORS. That is one of many examples.
Some struggle for privacy, some argue about it, some take preventive measures, but those who are wise know:
Privacy is DEAD. You’ve just never seen the tombstone.
@Curious: CJDNS might make you happy - it's a mesh routing system using IPv6 internally and *anything* for node-to-node connections (cantenna? IrDA? LAN?), and the IPv6 addresses are hashes of public crypto keys. Set up CJDNS with your friends using VPNs and you can use some pretty secure IPv6 links.
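A rough sketch of the address derivation as I understand it (assuming the double-SHA-512 scheme CJDNS documentation describes; the key material here is random bytes standing in for a real public key):

```python
import hashlib
import os

def cjdns_address(pubkey):
    """Derive an IPv6 address as the first 16 bytes of a double SHA-512
    of the public key; only keys whose hash lands in fc00::/8 are usable."""
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()[:16]
    if digest[0] != 0xFC:
        return None  # discard and "grind" a new key
    return ":".join(digest[i:i + 2].hex() for i in range(0, 16, 2))

# Grind random 32-byte stand-in "keys" until one maps into fc00::/8.
addr = None
while addr is None:
    addr = cjdns_address(os.urandom(32))
print(addr)  # e.g. something of the form fcXX:....:....
```

The nice property is that the address itself authenticates the key: you can't claim an fc00::/8 address without holding a key that hashes to it.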
Thanks for the feedback! :)
Blizzard has admitted to compromises even with the authenticator in use. Their representative blames stupid users for having other software on their computers, apparently missing the point that there would be no vulnerability to exploit if Blizzard's servers didn't have to be involved in an isolated single-player game.
I selected the Forbes article because I felt it gave the most comprehensive overview. But if you don't like that one, try this slightly less comprehensive one from Ars Technica or this one from Eurogamer, which is much shorter but does feature a brief interview with the compromiser.
I recently read a comment you made that security is inversely proportional to efficiency (I am paraphrasing). Somehow I am not convinced. Would not this solution by bitSplit and your comments on that negate your stance? When you have the chance, I would appreciate an elaboration.
It's actually relatively easy to explain, and has to do with three basic areas:
1, Efficiency/optimisation.
2, Transparency/covert channels.
3, Inexperience/lack of depth of knowledge.
Depending on your views, "100% efficiency" can be termed "maximum throughput for a given resource" (yes, there are other efficiency measures, but I'm concentrating on "information" in this argument, as its control is all that security is about).
Now you can argue that, as the resource is finite, to maximise throughput you have to minimise the time per item of information put through. That is, you optimise for minimum information processing and latency.
So an inexperienced designer (often driven by "marketing specmanship") will optimise the code for, in effect, minimum latency and maximum bandwidth. Which usually brings good specs, so makes marketing happy and keeps the senior managers happy, and thus the designer's pay checks keep arriving...
Unfortunately, without "significant care" (which comes with knowledge) the result is that the design becomes wide open to time-based side channels and attacks, as well as being transparent to other time channels and attacks. Both of which can leak vast amounts of information in the forwards direction and thus be very insecure.
Worse, if flow control is also implemented incorrectly it can be used in the reverse direction: by blocking the output you block the input, and this in turn blocks the previous stage...
Thus you generally follow a very conservative design approach with a few basic ground rules used as "musts" not "guidelines". The following list encapsulates the "Public Knowledge" rules,
The first rule is,
"Clock the inputs and clock the outputs."
This reduces, amongst other attacks, those that use "timing jitter" to leak information.
The second rule is,
"Clock from most secure to least secure"
That is, you design the entire system to be driven from the most secure point outwards to the least secure. To be clocked in this way it has to be capable of handling all information in a single clock interval. This is also, coincidentally, a skill needed in the design of RTOS devices, and thus gives you a big clue as to why ordinary multi-tasking OSs are very very rarely "secure by design".
The third rule is,
"On error fail hard and fail long".
Generating "errors" is a way to use the "error correction" mechanism as a covert channel to leak information, and it works in both directions. Thus when an error happens you fully abort the communication all the way back to the source. The source then holds off resending for a long period of time. This effectively reduces the bandwidth of the covert channel down to just a few bits per hour or day. BUT this can still be enough to leak sufficient "key bits" to allow other attacks.
Another rule that should be included is,
"Always maintain data sent at full rate"
There is a whole class of side channels that leak information such as the number of messages sent and their length etc, enabling "traffic analysis". The way to avoid this is to "fully occupy the insecure communication channel" at all times.
As can be seen from these rules, with one exception each stage in the pipeline has to wait on a previous stage, and is thus not working at anything close to full capacity, and is thus not an efficient utilisation of the resources. However, it can be demonstrated as being secure within reasonable definitions (including being monitored continuously to ensure this is maintained).
Likewise, the single "master clock" generating stage is still inefficient, as it has to ensure that data is reliably there to maintain a constant and uninterrupted throughput.
There are a number of other rules to do with simplification and segregation of stages in the pipeline, not buffering or storing data in the pipeline, and no feedback, but these you can have a think through on your own.
 : It is this drive to efficiency in the AES competition code that made nearly every example wide open to "cache" and other "timing attacks", as they were optimised for "efficient CPU" use and speed. Nearly every practical implementation at the time was based on this competition code and leaked key information, and many new implementations are still insecure in "online" modes of operation because of "the need for speed" or efficiency.
 : It is also the resulting transparency that enabled some of Matt Blaze's students to develop a keyboard "logger" exploiting the fact that timing jitter on the effective key press time exhibits corresponding jitter on network packets.
 : This unfortunate "reverse" or "susceptibility" property has been used in "active" EmSec attacks to pace network packets such that "time based channels" are opened on response times, and data thus leaks due to measurable information-dependent time delays.
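A toy illustration of the first and fourth rules above (clocked outputs, channel kept at full rate) — a hypothetical sender that emits exactly one fixed-size frame per clock tick, padding when no real data is ready, so an observer learns nothing from timing or message length:

```python
from collections import deque

FRAME = 32  # fixed frame size in bytes

def clocked_sender(queue, ticks):
    """Emit one FRAME-sized frame per tick, padding when the queue is
    empty, so the stream is constant-rate and constant-size."""
    out = []
    for _ in range(ticks):
        payload = queue.popleft()[:FRAME] if queue else b""
        out.append(payload.ljust(FRAME, b"\x00"))  # pad to FRAME bytes
    return out

q = deque([b"hello", b"secret report"])
frames = clocked_sender(q, ticks=5)
print(len(frames), {len(f) for f in frames})  # 5 {32}
```

The inefficiency is visible directly: three of the five frames carry nothing but padding, which is exactly the security-versus-efficiency trade being discussed.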
I don't know what your agenda is, but the articles you link to don't support your statements here.
Blizzard offers an Authenticator designed to provide extra security to your account. Donlan did not have the authenticator before the hack, but reports suggest accounts have been compromised even with this enabled.
Well, reports can "suggest" all they want, but as the article goes on to note, those reports have been "unconfirmed". So all these articles are just gossip and slander.
As for the first article you link to let me quote myself from my prior post, "when two-factor identification was used appropriately by the target."
Now, in fairness, one can debate what an appropriate use of two-factor identification is. Some people believe that they can do whatever they want however they want and that it's Blizzard's job to protect them. I don't subscribe to that theory. If you buy a hammer and repeatedly smash your thumb with it, that's not the fault of the hammer maker; that's your fault.
I'm going to stand by my statement. I have yet to see one verified instance where a Blizzard account has been compromised when two-factor identification was used appropriately. Smashing your own thumbs doesn't count.
@ Clive Robinson,
Thank you for the elaborate response. Yes, security comes with a cost in terms of resources, including efficiency as you have defined it. Then I understand that your rule of thumb security-v-efficiency does not imply that:
1- The more secure a system is, the more inefficient it becomes.
2- The more efficient it is, the less secure it becomes.
And if the two statements above are true, then that is a consequence of basic area number 3 (Inexperience / lack of depth of knowledge)
If that's the case, I am in agreement with you. If not, then I will have to chew more on that... And I will take item 3 to refer to me ;)
After firing some of the neurons I have left, I have two more remarks:
I think you are treating the information flow at the subsystem level. And I was looking at the system level. Optimizing the subsystem does not necessarily optimize the whole system since resources are shared with other subsystems (information flow-wise included).
1- "Clock the inputs and clock the outputs."
2- "Clock from most secure to least secure"
3- "On error fail hard and fail long"
4- "Always maintain data sent at full rate"
These are rules of thumb that can be adhered to during a design stage.
But "Security-v-Efficiency" is not a rule of thumb per se. Rather, it is an observation, or a result of applying the above rules of thumb. If you consider it to be a rule of thumb, the implication would be: design your systems to be as inefficient as possible to attain maximum security, which I am sure was not your intent.
Moore's law applied to classical cryptanalysis gives an idea, I guess, of what Clive wanted to explain. Even if we foretell the future of code-breaking beyond quantum computing, the time-cost trade-off seems to always suggest efficiency is inversely proportional to security.
I don't think Clive is referring to Moore's law here. Moore's law does not apply to this discussion.
"time-cost trade-off seems to always suggest efficiency inversely proportional to security."
Clive is talking about a different efficiency. He is talking about information throughput and measures taken to harden the system against attacks. These measures impact the efficiency of the system by reducing the throughput of information.
You are also looking at it from the attacker's perspective. Clive is looking at it from the defender's perspective.
Yahoo to Log "Source Port" with IP Address/Time
It's about time... Seriously there is a major issue with Smart Phones and other devices that connect through the mobile phone networks.
Put simply, the block of IPv4 addresses most mobile operators have is a very tiny fraction of the number of devices they have trying to connect to the Internet at any one time, by as much as 300:1. Thus each IP address gets very, very overloaded via the mobile operator's equivalent of an "NPAT" firewall.
Whilst it might be possible using other logs to work out which one of the three hundred users it might have been, it means extra work and Patriot Act letters etc etc...
Whilst a little hard work never did anybody any harm, the FBI and other LEOs bleat for more powers, not because they need them (they've even admitted they don't) but because they don't want to do that extra little bit of work, for a couple of reasons. Firstly, "no two clocks are the same", so when comparing logs from different sources it can be quite difficult to correctly align, and maintain that alignment on, the logs. Worse, they then have to try and explain this to a jury, and as it looks like "fudging the record" it's fairly easy for a smart legal person to throw significant doubt on their methods and results with regards to "beyond reasonable doubt", which is the LEO's legal burden to get a conviction; and if they fail, the accused (rightly or wrongly) walks away...
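The source-port point is easy to see with a toy carrier translation log (all fields and values hypothetical): with only the public IP and a time window, every subscriber sharing that IP matches; adding the NAT source port pins it to exactly one.

```python
# Hypothetical NPAT translation log kept by a mobile operator:
# (public_ip, public_port, time_window) -> subscriber
nat_log = {
    ("198.51.100.7", 40001, "2012-06-01T16:00"): "subscriber-A",
    ("198.51.100.7", 40002, "2012-06-01T16:00"): "subscriber-B",
    ("198.51.100.7", 40003, "2012-06-01T16:00"): "subscriber-C",
}

def who_was_it(ip, port=None, window="2012-06-01T16:00"):
    """Without the source port every subscriber behind the IP matches;
    with it, exactly one does."""
    return [s for (i, p, w), s in nat_log.items()
            if i == ip and w == window and (port is None or p == port)]

print(len(who_was_it("198.51.100.7")))          # all subscribers behind the IP
print(who_was_it("198.51.100.7", port=40002))   # ['subscriber-B']
```

Which is why sites logging the source port alongside the IP and timestamp matters: it's the only datum that disambiguates the 300:1 overload.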
If that's the case, I am in agreement with you. If not, then I will have to chew more on that... And will take item 3 to refer to me ;)
First off, 3 applies to all of us to some extent, simply because we cannot see into the future in anything like sufficient detail, so we will always have the old Donald Rumsfeld "unknown unknowns", with a few "known unknowns" from those with predictive ability (i.e. those who can see instances of a new attack from a general class of attack).
However, that aside, there is a lot to chew on. In general, security is obtained by "strong encapsulation" with "strong segregation" via "very controlled interfaces", in "very reduced complexity" designs with little or no state and, importantly, little or no feedback, or for that matter feed forward.
As a starting point you have to assume the full predictability of a "state machine" with separate data and instruction inputs, such that data does not have the ability to act as an instruction and thereby influence the state machine in unpredictable ways.
Thus in effect you have to live without the benefits of a Turing engine... Not that this has to limit you: you can do one heck of a lot with a Harvard-architecture signal processing engine, including, if you know how and are aware of the pitfalls, a full RISC Turing engine...
Complexity is without a doubt the enemy of security as without control it can quickly produce more states than it is possible to see let alone check and mitigate if required.
Thus you will often see pipeline designs and think to yourself, "but just one of these blocks could do this..." To which the answer is "Yes, BUT NOT securely". The trick is to identify "natural interfaces" where you can apply strong control to ensure security.
As I indicated above, being an adept RTOS designer helps get the right mindset, as does being an embedded control system designer for the likes of "critical safety systems" for aircraft and other systems where human life is almost always at risk.
If you have a dig back through this blog you will see that Nick P and myself have had various discussions about what I call the "Prison-v-Castle" design of secure systems, and the resulting "probabilistic security" of having a pipeline of very constrained, very simplified CPUs with minimal and separate instruction and data memories and strongly controlled interfaces, which are monitored by state machine hypervisors looking not just at the data passed from stage to stage but also at each simplified CPU's "processing signature".

The idea is you develop "tasklets" that are loaded into a CPU's instruction memory via the hypervisor, and each tasklet is sufficiently simple that its signature is well defined. Also, each CPU is unaware of any others: it receives data via one buffer and passes processed results via another. It has no idea of time either, as it will be stopped, suspended and started by the hypervisor as required. The hypervisor also has access to the CPU's memory areas and will periodically suspend a CPU and check that the memories have not been modified in an incorrect manner.

The main aim of the system is not to give malware a place to hide, and also to have the secure tasklets and their corresponding signatures written by those with appropriate security experience (they are a very rare resource), while general programmers (code cutters) use them to write scripts to provide everyday programs, thus leveraging the work of the security-aware engineers to everyday programmers. Whilst this will give secure and strong foundations, I'm also aware that "coders" could still build applications with more holes than your average Swiss cheese, but the tasklets and hypervisor will have lifted things sufficiently for automated code checkers to find most holes and either block them or mitigate them appropriately.
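A rough sketch of the hypervisor's memory check described above, with all the names invented purely for illustration: the hypervisor records a known-good hash of each tasklet at load time, then periodically suspends the CPU and re-hashes its instruction memory before allowing it to resume.

```python
# Illustrative sketch (invented names, not a real design): a hypervisor
# that loads a tasklet into a lite CPU's instruction memory, records its
# signature, and later suspends the CPU to verify the memory is unchanged.
import hashlib

def signature(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

class LiteCPU:
    def __init__(self):
        self.imem = bytearray()  # instruction memory, loaded by hypervisor only
        self.running = False

class Hypervisor:
    def __init__(self, cpu):
        self.cpu = cpu
        self.known_good = None

    def load_tasklet(self, tasklet: bytes):
        self.cpu.imem = bytearray(tasklet)
        self.known_good = signature(tasklet)  # signature recorded at load time
        self.cpu.running = True

    def audit(self) -> bool:
        self.cpu.running = False  # suspend: the CPU has no say in the matter
        clean = signature(bytes(self.cpu.imem)) == self.known_good
        self.cpu.running = clean  # resume only if the memory checks out
        return clean

hv = Hypervisor(LiteCPU())
hv.load_tasklet(b"\x01\x02\x03")  # stand-in bytes for a tasklet
print(hv.audit())                  # -> True: untampered
hv.cpu.imem[0] ^= 0xFF             # malware flips one byte
print(hv.audit())                  # -> False: CPU stays suspended
```

The real design would check a richer "processing signature" than a hash, but the asymmetry is the same: the monitored CPU cannot observe or influence the audit.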
First of all, thanks to Bruce, Clive, and Wael.
Yes, you're right, if we look at the history of security through the castle and prison analogy Clive used in the discussion. The castle and the prison had the same security (detective or preventive); there was only some difference in access/control. But if there is any kind of machine computing involved, Moore's Law would be universally applicable, whether we discuss information intrusion (extrusion) or data transmission (leak) security issues.
@ Clive Robinson,
You touched on another area of interest to me. I am surprised that you have come very close to what I had in mind - I will bring that up sometime for discussion in the future. It has to do with knowns / unknowns, hypervisors, controls, and simplicity. I just have to wait for the correct thread to inject it in.
Computing power and Moore's law are not relevant here because:
1- That law is common to both attacker and defender! Factor out the common denominator, so to speak
2- Efficiency in this case is independent of computing power and Moore's law. Doubling computing power does not double efficiency for the system designer trying to protect information.
You are probably confusing Moore's law with an attacker brute-forcing a crypto-system, which is not what we are discussing here.
@ Wael 06/02/12 4:11PM On Security/Performance
Clive's rules of infosec are not the rules of infosec. His are a mix of infosec rules & a superset of them at the same time. Mixed in with his, here are some of the infosec rules that can kill efficiency:
1. Full reference monitor. Killer No 1. Must
a. Always be invoked (complete mediation, the efficiency killer)
b. Tamperproof (these requirements might hurt efficiency)
c. Small and simple enough to verify rigorously
(this often means many novel ways to improve performance will be banned)
2. Covert timing channel suppression (see Clive's rules)
Main way to do this is to use eventcounts or do an ARINC-style fixed partitioning scheme. In any case, the computer becomes more akin to a batch processing machine than an interactive one. Additionally, you have to turn off any shared resources like hyperthreading or caches (read: high bandwidth timing channels). Guess what that alone does to performance? Try running a computer with all that turned off.
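Rule 1 can be made concrete with a small sketch (the class names are mine, purely illustrative): every read and write is funnelled through one small monitor, which is exactly why complete mediation costs performance; there is no fast path around the check.

```python
# Minimal sketch of a reference monitor with complete mediation: every
# access to the store goes through one small, always-invoked check.
class ReferenceMonitor:
    def __init__(self, policy):
        self._policy = frozenset(policy)  # (subject, object, right) triples

    def check(self, subject, obj, right):
        return (subject, obj, right) in self._policy

class MediatedStore:
    def __init__(self, monitor):
        self._monitor = monitor
        self._data = {}

    def read(self, subject, key):
        # always invoked: no read bypasses the monitor
        if not self._monitor.check(subject, key, "read"):
            raise PermissionError(f"{subject} may not read {key}")
        return self._data.get(key)

    def write(self, subject, key, value):
        if not self._monitor.check(subject, key, "write"):
            raise PermissionError(f"{subject} may not write {key}")
        self._data[key] = value

rm = ReferenceMonitor({("alice", "doc", "read"), ("alice", "doc", "write")})
store = MediatedStore(rm)
store.write("alice", "doc", "hello")
print(store.read("alice", "doc"))   # -> hello
try:
    store.read("bob", "doc")
except PermissionError as e:
    print(e)                        # -> bob may not read doc
```

The monitor is small enough to verify rigorously, which is requirement 1c, and the cost of invoking it on every single access is requirement 1a in action.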
So, just these two requirements alone are almost 100% inversely proportional to any performance goal. Or usability, aesthetics, etc. It's so hard to secure a modern system that DARPA initiated Clean-Slate to get teams to devise radical new architectures, often with word-level access control, to hopefully solve the problem.
Even so, every transistor and step being dedicated to security is something that could have contributed to performance. So, the rule still stands: security is what you get with good design & after sacrificing certain other things, which things depend on the project at hand.
@ Clive and Wael
Oh, this again? Fun, fun. Wael, don't let him make you think what he described is what WE came up with. It was his idea I contributed to a bit. His is the Prison, mine was the Castle. What he calls the castle is essentially how high assurance design has been done up to this point. I like it b/c we have a number of worked examples, including some commercial products still around, too. Of course, both academics and I have refined the idea over the years to reduce the work required & make good tradeoffs.
So, it starts with an idea of what you want the system to do. You get a precise description of the requirements & security policy. Then, you build it in a way that it can be verified & bug count is kept minimal. A reference monitor usually exists to mediate any activities that might violate security policy. As stated in a previous post, it's non-bypassable, tamperproof (for software), and designed for easy verification. A number of these have been mathematically verified for correctness, with seL4/OKL4.verified being most recent (verified to the C code).
These systems are designed with modules & layers. Clive was right on about encapsulation, minimizing shared state, tight control on interfaces, etc. Safe programming languages, strong type systems, etc. can help build the trusted software in a way that provably eliminates entire classes of vulnerabilities. For instance, a number of high assurance secure systems in the past were coded in languages like Ada, Pascal & PL/1 instead of C. Many also used call-by-value instead of pointers/references to make info flow easier.
Wael, the crux of security is assuring correct information flow & transformation. The security of the system is built from ground up (hardware to app) and side-to-side (protocols & external interaction). So, the castle (read: old school) approach was to use our best design & assurance technology to build things where failure COULD NOT OCCUR. As far as anyone could tell... ;)
Many of these systems lasted for years against high-end opponents without a compromise & still haven't been compromised. Examples from the old days that still exist in some form are Aesec's GEMSOS (A1/EAL7), XTS-400/STOP.OS (B3/EAL6), the LOCK platform (A1-certifiable), and Boeing SNS (A1/EAL7). Modern products following suit include the SKPP separation kernels, Rockwell Collins' AAMP7G (EAL7-equiv), Bodacion's Hydra Web Server, Secure64's SourceT OS, and Green Hills' INTEGRITY RTOS w/ middleware.
So, we have working castle-like solutions. At the least, they provide a solid foundation to build secure networks and appliances on. Academics are finally catching up to Clive's prison design. I think it might be a good interim solution to leverage COTS hardware. One involves virtualization for monitoring & another uses an attached PCI device to monitor the system (close to what I advocated in a prison discussion). I say use Castle approach where you can, continue R&D on some useful prison products, & let the discussion go from there.
Note on prison stuff: Clive short-changed his own design with the requirement for suspending the CPU to check on memory. (He might have a good reason I haven't thought about.) I think that the designs I mentioned, definitely the PCI/hardware attachment, can check the system regularly while it operates.
I'd also further suggest, Intel-style, that maybe the first 4-32MB be designated for critical code (not data) so that a simple hash could be used to check for integrity. The system would do a trusted boot, activate the monitor, get everything ready for production, & then get the monitor to check its state. If the monitor detects a change, it can shut things down & contact the administrator.
@ Rajesh, Nick P
Moore's law states "computer processors double in complexity every year" or "the number of transistors that can be placed on a chip doubles every 18 months". Let's approximate that in terms of computing power and say "computing power doubles every year or two".
Clive Robinson defined efficiency thus: "100% efficiency" can be termed as "maximum throughput for a given resource".
For a concrete simplified example, let's say you have an insecure sub-system that is capable of and can maintain a 100 Mb/sec throughput. Then as a result of securing the system, throughput drops to 60Mb/sec.
Knowing that efficiency is a ratio (in this case 60%), how does Moore's law impact this efficiency, given that it provides the same (approximate) performance boost to both the secured and unsecured information throughput?
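The arithmetic behind Wael's question is worth spelling out: Moore's law multiplies both the secured and the unsecured throughput by the same factor, so the ratio is invariant.

```python
# Efficiency is a ratio, so a common scaling factor cancels out.
def efficiency(secured_throughput, insecure_throughput):
    return secured_throughput / insecure_throughput

print(efficiency(60, 100))    # -> 0.6 today
print(efficiency(120, 200))   # -> 0.6 after one doubling: unchanged
```

Which is exactly why doubling computing power does nothing for the designer trying to close the gap between secured and unsecured performance.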
@ Nick P,
I will chew on that a bit so I don't post more than once. I will say again that paying a price for securing a system is not the point of contention here.
@ Nick P,
"Oh, this again? Fun, fun."
This is a much more interesting discussion. Give me some time :)
Yes. There is a classical example in this context, ie computing power v cryptanalysis design, once cited by a trainer in his security training:
"Even nine women put together cannot bear a baby in one month"
In the same way, it takes 9 to 10 years to judge the real strength of any current security algorithm design.
Do we know each other? The Rajesh I know always used to say "Even nine women put together cannot bear a baby in one month"
Sir Bruce would never allow social engineering attempts thru his security blog..
Nor would he try to stop them unless it was a big deal. A friend of mine once impersonated three people in a discussion to make a point about it. Lucky for you guys he doesn't come here anymore.
@ Nick P, Clive,
I think I will disengage from the thread that I started. I got all I needed from your excellent feedback. Will pickup later on castles and prisons... In a few days or weeks.
OT but On-topic? :)
Finally, a squid post about one of the coolest (in my view) aspects of cephalopods. Bruce loves squid, I'd have to say I'm more fascinated by octopuses (matter of personal preference)
The coolest feature of octopuses in my view is their uncanny ability to imitate their surroundings. Not only can they mimic shapes and colors (supposedly they are color blind too!) of starfish, sea snakes, flounders, lion fish, and sea moss, but they can change the texture of their skin! For anyone who's ever imitated something, you know that takes very focused, careful observation of appearance and movements/behavior (and thought!). http://www.sciencedaily.com/releases/2010/08/...
They can make a shelter (tool use, sign of intelligence) out of beer bottles, broken glass, and coconuts. Video I saw showed them walking with their tentacles while carrying back pieces of their future home.
They can open a jar with a lid (multiple times, decreasing likelihood of "luck").
They can crawl out of water, and then crawl right back in. (maybe a future land species may spawn from them?)
Of course, the well known ability to detach an arm that's snagged by a predator (which can grow back), and then squirting of sight/scent obfuscating ink and "whooshing" off to live another day. Kind of sounds like what Mr. SH said to do in a dangerous situation (I remember random things), but I love relating security mechanisms in "the real wild" to "our wilderness"--http://www.schneier.com/blog/archives/2011/11/sam_harris_on_s.html
--His recommendations may sound cowardly, but it is a spineless creature I'm talking about. Good discussion ensued after the post.
Lastly, who could forget ole "Paul the Octopus", with his paranormal ability to predict soccer (football for the rest of the world:) matches.
Ok, that last one may have been lucky, but still truly amazing creatures; I guess I can relate to Bruce's obsession :)
saw this article linked elsewhere.
It's not surprising news (to me). The main claim is that one person created a search tool capable of finding various SCADA-and-related machines connected to the public Internet. And his search engine is publicly-viewable.
The question I have is: how do authorities (or concerned security experts with influence) turn this knowledge into increases in security?
I figured it was going to turn into decreases in security. It seemed their strategy was to create awareness, either via their tool or how it got abused. The awareness or damage puts pressure on SCADA manufacturers. Then, they change their practices.
At least, that's the only strategy I saw working.
"Even nine women put together cannot bear a baby in one month"
Do you not mean "Even nine women put together cannot give birth to the same baby in one month"?
Your example is as flawed as your logic.
I believe the original is this quote from Wernher von Braun: "Crash programs fail because they are based on the theory that, with nine women pregnant, you can get a baby a month."
@Clive and Wael,
thank you for raising some good points. I have implemented a text version of the proposed authentication method to experiment with online: Access Screen.
Would either one of you be kind (and interested) enough to allow me to pick your brains directly? If so, please send me an e-mail - a contact button (along with a link to a list of questions) is on the web page.
Thanks again, everyone.
Access Screen gives a 404 Not Found.
Some quick feedback: Key loggers come in many forms. Some work by taking successive screen shots, either timer based or interrupt driven by a mouse click or a key press.... Such a key logger will be able to sniff the user's input in your product. So you may want to hide your mouse cursor as well, which will make it more difficult for users to know what to click on. Or you can come up with a solution for that. You may want to consider side channels ...
This was only one example for a key logger that will sniff your "passwords". There are still other techniques ...
Please fix the button, and I will give it a try when I can. In the meantime, you may ask Bruce for my email address.
It seems a prohibitive effort for the user to prime the system with enough unsuspicious data for each category (line) in order to avoid always presenting the same specimens within a lineup.
Also, in most cases, if no specific care is exercised by the user, the user-supplied specimens will stand out like a sore thumb.
It would likewise not be acceptable to use specimens from other users to fill the system-supplied non-valid choices in each category lineup, so the system designer also needs to come up with a credible set for each category which will allow the data provided by the user to blend in.
From my point of view, that is too much in the way of requirements and understanding of the concept to ask of an ordinary user.
@Nick P and Clive Robinson,
I searched through the blog for "prison-castle". I only found six entries. The earliest was dated June 9, 2010 which talks about previous discussions between the two of you. Is there a better search string I should use?
The logic here was to carry on the discussion, with fun, fun, fun.
The basis of modern applied information security is crypto, and cryptology cannot go without computing. In the present digital world, machines are still simple devices: on-off, yes-no, hi-lo. And security by obscurity is fake-o-logy.
Now, about the biology of security: as the proverb goes, babies are first born in the mind, but few take birth in experiment. In security and privacy, as a matter of fact, even with all the powerful resources available to check the efficiency of a new scheme, the real test process takes its own time.
Philosophically speaking, for any kind of real knowledge about applied crypto, always follow:
1) Bruce Schneier: Cryptography Engineering
2) Christof Paar: Understanding Cryptography
Before you progress much further with your "password replacement scheme" I would suggest you have a good look at,
And evaluate your scheme against the requirements in there.
Also have a look at the Cambridge Computer Labs web site as they do a lot of work on password systems in their various forms.
Remember the old saying about "The world beating a path to the door of the designer of a better mouse trap"?
Well it's not true; the world requires the mouse trap to be not just better but... cheaper, easier to use, cleaner in use, less noisy, kinder to the mouse, etc, etc. Oh, and one other thing: not only has it got to be better, but to get everybody out of the very, very deep groove of "tradition" they are currently in, it has to be "infinitely better"...
Have a read of,
It gives the overview of the idea of putting a hundred or so "lite CPUs" onto a single chip along with the state machine hypervisors and "task signature checking" hardware.
With regards Nick P's comment about the halting of the CPU cores, I think he is thinking more in terms of heavyweight CPUs like IA32/64 cores on low-cost multi-CPU PCBs, with the state machine hypervisor on a separate extender card (on say the PCI bus etc), than of the lite CPUs and state machine hypervisors all on the same chip.
The reason behind the halting of the CPU's is to do with the passing of time. There are as far as computers are concerned three types of time,
1, External time.
2, Elapsed system time.
3, CPU Cycles elapsed count.
For malware to use time-based channels out of the system, it either needs access to external time or for any elapsed time to have some relationship to external time. For malware to communicate between CPUs within the system, it likewise needs a common time frame between the communicating parts. Breaking the time relationship prevents time-based communications, either accidental or deliberate, as a time reference needs to be established between the transmitter and the receiver. Obviously a time-based channel can still exist if it has very low bandwidth, but that would break the signature the hypervisor is looking for.
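Clive's point about breaking the time relationship can be simulated. In this toy model (invented for this comment; no real hardware timing is involved), a sender leaks bits by spacing events one or three ticks apart. A receiver sharing the sender's tick counter decodes the message, but once a hypervisor stalls the receiver between events, the observed gaps no longer carry it.

```python
# Toy covert timing channel: gap of 1 tick = bit 0, gap of 3 ticks = bit 1.

def send(bits):
    t, stamps = 0, []
    for b in bits:
        t += 3 if b else 1
        stamps.append(t)        # event timestamps on the sender's clock
    return stamps

def decode(stamps):
    prev, bits = 0, []
    for t in stamps:
        bits.append(1 if t - prev >= 2 else 0)
        prev = t
    return bits

def desynchronise(stamps, stall=2):
    # the hypervisor stalls the receiver a couple of extra ticks per
    # event, so every observed gap is inflated and the encoding breaks
    return [t + stall * (i + 1) for i, t in enumerate(stamps)]

msg = [1, 0, 1, 1, 0]
print(decode(send(msg)) == msg)                  # -> True: shared clock leaks
print(decode(desynchronise(send(msg))) == msg)   # -> False: channel broken
```

A real hypervisor would stall unpredictably rather than by a fixed amount, but even this fixed stall shows why transmitter and receiver need a common time reference for the channel to work.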
Is breaking the time relationship an issue? Well, it does reduce efficiency of use. However, for the majority of functionality in a computer program, the only time that matters is what you might call "event time", that is, the task is completed before some other activity takes place. As this has considerably more to do with task sequencing than with time, "blocking" a CPU whilst waiting on an event is more an effective scheduling issue.
In a system with a hundred or so lightweight CPUs all on the same chip, you could also look on it as the analog of the "task switching overhead" in a conventional multitasking OS running on a single heavyweight CPU.
@ Nick P,
I can't find the words to show my gratitude to you. Will read these links and come back with some feedback. Also, I was mainly interested in the discussion you had with Clive Robinson about "Castle-v-Prison", I have something in mind about that :) - I think we can take this discussion further from a different angle, and perhaps for a different end goal.
I posted a reply for you with plenty of links. Either it's awaiting moderation or it was lost. At the end, I referenced subversion attacks, which are the hardest to deal with. I've dug up a nice example for you.
Cryptophone is an encrypting cell phone. They try to maintain real security by being outside of US jurisdiction, using a good protocol, publishing their protocol source code for independent review, strong hardening of their WinMobile OS, and anonymous purchases of their phones to detect distribution tampering.
This is good, but doesn't make it trustworthy to ME. The reason is subversion: how can I be sure the advertised code is on the phone or the firmware hasn't been tampered with? A principal for the company, Frank Frieger, showed up & we had a short little debate. Subversion is such a tricky subject that, at first, even he didn't really get the point I was making. Hope the example illustrates what it takes for something to be trustworthy.
You touched on another important aspect of "security" - the lifetime cycle of a "device". We'll try to fit that into the "Castle-v-Prison" discussion - at the right time ;)
Are you referring to something other than your post of June 6, 2012 12:11 PM? As you can see, that one went through okay. If there was something else, it's not in the queue or the spamfilter.
The logic here was to carry on the discussion.
That seems to be the only logic you usually have. For some reason you like to comment here so much that you do it even when you have nothing to say, so that your comments are full of vague platitudes and other pointless filler. I don't know what you're getting out of doing this, but you need to get it somewhere else.
I do recall that when you first showed up here, all your comments were quotations from other people -- in one case, an uncredited copy-and-paste from Wikipedia. When I asked to you comment only in your own words, you acknowledged that, then did the same thing twice more before actually stopping. In light of that, I'm not going to try asking you to improve your signal-to-noise ratio again. You are banned.
It apparently went through slowly. It's happened before. Must be a Movable Type quirk. Thanks anyway.
It appears LinkedIn has had its password database stolen,
An interesting quote from the TechCrunch article,
In the meantime, the company notes that users who have already changed their passwords (you already did, right?) or created a new account won’t have to worry, as they have recently begun hashing and salting their current password databases
If that is true, then there are going to be one heck of a lot of unhappy "Professionals". Worse, I suspect there will be quite a few red faces in the ITSec industry when weak or very weak passwords get linked to their owners...
But more importantly, what on earth were LinkedIn's security people playing at, with respect to not just the intrusion (these things are par for the course these days) but such a weak password protection system...
And people wonder why I'm not on these "newfangled social and business networking sites" (aside from the fact they appear to be more about glad-handing and self promotion)...
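For anyone wondering what "weak password protection" means concretely here: LinkedIn reportedly stored unsalted SHA-1 hashes. A miniature sketch of the difference (illustrative only; a real deployment should also use a deliberately slow hash such as bcrypt, as the CambLab commenters note):

```python
# Unsalted hashes let one cracked hash expose every user with that
# password, and precomputed (rainbow) tables apply directly. A per-user
# salt forces the attacker to attack each account separately.
import hashlib, os

def unsalted(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

def salted(password: str, salt: bytes) -> str:
    return hashlib.sha1(salt + password.encode()).hexdigest()

# Two users who picked the same password:
print(unsalted("linkedin123") == unsalted("linkedin123"))  # -> True: linkable

s1, s2 = os.urandom(16), os.urandom(16)
print(salted("linkedin123", s1) == salted("linkedin123", s2))  # -> False
```

Identical unsalted digests are exactly what lets weak passwords be linked to their owners once a dump leaks.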
@ Nick P,
You have inadvertently reminded me of something I forgot to do some time ago...
And yes, at some point I should write up a bit more on "Prisons-v-Castles" and make it available (somehow; not sure which way would be best for me ;-)
I had a think on various aspects of the u-Kernels a while ago, and from reading the documentation only, I'm favouring the Fiasco kernel with the L4Re layered approach (though I'm not keen on C++ due to toolchain issues; oddly, I'd already been favouring Lua as the scripting language, which they use in their ned layer).
You've mentioned in the past that you had dabbled with various "open" u-Kernels, which have you found the more workable and why?
I must admit my main kernel criteria are a "very small footprint" with all but essential features (object/task factory, stream/IPC, interrupt MUX, and compulsory capability/role authN/authZ) pushed out.
Aside from the security aspect, it has some other advantages that you could liken to the *nix idea of having a choice of shells. For instance, memory management is most definitely not "one size fits all": it's not just (CPU/MMU/DMA) hardware dependent but system application dependent (embedded dedicated-function devices through to high-end servers for users/comms/storage) and even programming language dependent (garbage collection etc), so it would be handy to change the manager to fit the end system. Likewise file systems and the dreaded I/O control issues (*nix "ioctl" was a bad kludge and has lots of hidden issues) that bite; even the apparent simplicity of "streams" has nasties, especially when dealing with modern issues. I'm a person who favours frameworks with robust APIs where basic functional modules can be plugged and unplugged at will and likewise stacked/pipelined to give the desired functionality.
On LinkedIn password leak
Anderson's blog had a better report
They pointed out that hardly anyone is reporting that there were no repeated hashes. They suggest the leaker ran it through 'uniq' to limit damage. This and other attributes of the leak make the risk minimal.
@ Clive Robinson on uKernels
I thought you might like Fiasco. Did you see the new version, Fiasco.OC? Has nice features. The thing I like most about it is they tend to make pretty high quality stuff & it's freely available. It's honestly hard to tell which is better at this point. I used to look for most secure, but you & RobertT have about made me hopeless there. ;)
So, it's really about what suits one's needs. If we're defending at the software level, most of the good ukernels & sep-kernels should do fine. It's going to be an issue of decomposition, info flow control, app/protocol level stuff, etc. Let's look at their strengths and weaknesses, eh?
INTEGRITY is one of the best designed & most mature. Green Hills has tons of middleware, pre-made solutions, partners, etc. VxWorks, LynuxWorks and SYSGO are up there. Graphics, sound, robust networking, Ada/Java/C++ support, POSIX/Linux layers, virtualization, etc. Pick your need, try some middleware & evaluate it. OK Labs is great on the mobile front & optionally have OKL4Verified (EAL7+ equiv). GEMSOS & XTS-500 are last in TCSEC lines of B3/A1 OS's, but GEMSOS is barebones compared to above RTOS's & XTS-500 line dropped to EAL5+. (Prolly expensive & restrictive too.)
Honestly, I think many of these products are way ahead of mainstream OS's in security, reliability, & general quality. I don't think which one we choose matters that much any more. The public expects vulnerabilities & legacy demands create issues. Hence, my interim focus has been on using these types of products to inject high assurance into low assurance environments to increase robustness. This might not help with "code," but it can with systems.
Examples follow. L4-based systems in Nizza & Perseus show how to let a person do most things using apps in a legacy OS (maybe a VM), but security-critical function runs on the ukernel & carefully interacts with the legacy side. We can also use them as hypervisors to monitor legacy systems (similar to the prison) or isolate critical network admin stuff into safe areas. Recovery-based architectures, where stuff is just reset seamlessly every so often, are becoming popular in academia & proposed technologies help implement them robustly. Then, there's using them & customized implementations to make application-specific appliances in the network (allows POLA, TCB reduction, easier monitoring/assurance, etc.). Poly squared at CMU is doing something similar with Linux & MUCH more hardware. ;)
So, I see the game as something that changed. Back in the day, there were plenty of people wanting secure systems. Today, it's less about that & more about ROI. People expect stuff to fail a bit and breaches to happen. They're mainly worried about preventing worst failures & quickly recovering from others. So, I advocate using high assurance appliances/techniques for critical stuff, especially mitigation, and using technologies like ukernels/middleware/virtualization for risk reduction & recovery acceleration in other areas.
I agree with you on that. It would be nice. Problem is, many of these ukernels are incompatible in their designs in overt and subtle ways. Frameworks like L4Re & NICTA's CAMKES help. I'm thinking much of it might not be doable at a very low level. eCos and the FluxOS Toolkit show us that we can make the kernels & OS libraries modular to a degree. Where we can't, we must make do with making applications modular & platform neutral. Those doing that so far, using the likes of the Apache Runtime or QT, haven't been complaining too much. ;)
I hate to say it, but the main ukernel strategy for cross-platform is to include POSIX or OS/RTOS emulation layers. The main apps are just too messy & complicated to port. It's why I advocate cleaner & better alternatives. For instance, NGINX is looking a bit better than Apache HTTPD right now & UDT is a nice TCP/FTP replacement. My strategy for most stuff is unchanged for past few years: port to container on better platform (often ukernel), carefully extract (important-thing-here)-critical part, run them side by side, & repeat until satisfactorily decomposed/portable. (Note they might also be physically separate or clustered.) Until we have something better...
@ Nick P,
Anderson's blog had a better report
And to think I'd been on the blog just a little while before it was posted. I tend not to visit a lot of sites as often as I used to because many are either slowly getting less and less active or are becoming less relevant.
Worse, some just copy over from somebody else without doing any checking themselves, as the CambLab blog points out. Mind you, some of the comments were a bit "sharp"; one even indirectly refers to Bruce (through bcrypt).
Whichever way you look at it, LinkedIn InfoSec dropped the ball, and I've a sneaking feeling I know why. I've seen a lot of it over the past few years but (unusually for me) I didn't put my finger on it (I must be getting old ;).
However I was reading an article a short while ago on another blog that flicked the light switch as it were,
It's about the "game" you play, that is, trying to play "a winning game" or "a losing game". Usually only the elite can play a winning game and win; the rest of us are way better off playing the game to stay in and waiting for our opponent to lose by playing to win...
The article starts getting interesting with this paragraph,
We'll always have threats, yes we need to focus on them but not solely; if you have something someone else wants threats are never going to zero. Its better to focus on the thing you have that someone else wants and where you should have a knowledge advantage - your assets
And the following paragraph puts it quite nicely with,
What matters in investing and what matters in infosec is building margins of safety. Assume failure. This is stark contrast to how the rest of your business operates and its a valuable service that infosec provides when its done constructively
The realisation is that an attacker will always play to their strengths and your weaknesses, if you accept that then you know that they are going to get past your outer defenses at some point the only question is when and how.
So play your defences as though they will fail and don't leave the "crown jewels" on display, lock them up properly in the treasury (as all good castles have ;-) And importantly don't rely on your opponent being an outsider not fit enough to be able to "swim the moat" and "climb up the garderobe". Or worse assume all insiders are trustworthy enough not to be tempted or daft.
Infosec is thus unlike the rest of the business, in that it's in it for the long run, not the sprint to the "quick win" on next quarter's figures. If Infosec plays like the rest of the business, they will lose quickly and painfully...
Oddly it's a sign that InfoSec is actually maturing as a business unit, for some time I've advocated that the "techies" should learn to "speak business" as it's the only language "the man who cuts the cheques" wants to listen to. However I've always said not to play the short term game as the result will always be failure if you do, as the longterm view usually wins overall (something the Chinese Gov appears to understand).
But I failed to realise that by learning to speak "business" the "techies" would be "seduced by the dark side" and try to play the "corporate game the executive way" without realising it only works if "you cut and run". And, unlike the executive types, they usually have "nowhere to run to", as techies usually can't "glad-hand" themselves up the corporate ladder.
@ Nick P,
Of course this LinkedIn attack raises the old questions of
1, Why passwords?
2, Why communicate them?
Whilst the first question remains open for now (and may do for some time yet...), the second question has already been answered, for around a quarter of a century now, and the answer is "you don't".
Stanford's Secure Remote Password (SRP) protocol is available; it's embedded in many protocols and RFCs,
And it solves two major problems:
1, The password, no matter how good or bad, is only ever required on the client computer.
2, Because of 1, it enables organisations to externalise the risk of password loss back to the owner of the client machine.
It also means that if an attacker wants to target passwords, then they need to target the client, not the server. Which (depending on other factors) might reduce the desirability of attacking the server in the first place...
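The key idea behind SRP's "you don't communicate it" property is the verifier: the client derives a value from the salted password and only that value ever reaches the server. A toy sketch of the setup step (demo group parameters and made-up credentials, not the real RFC 5054 groups, and no authentication handshake):

```python
import hashlib
import secrets

# Toy sketch of SRP-style verifier setup. The group parameters below are
# deliberately tiny demo values; real SRP uses the large safe-prime groups
# from RFC 5054 via a vetted library, never hand-rolled code like this.
N = 2**127 - 1   # toy prime modulus (demo only)
g = 3            # toy generator (demo only)

def make_verifier(username: str, password: str):
    """Client side: derive x from salt and credentials; only (salt, v) is sent."""
    salt = secrets.token_bytes(16)
    inner = hashlib.sha256((username + ":" + password).encode()).digest()
    x = int.from_bytes(hashlib.sha256(salt + inner).digest(), "big")
    return salt, pow(g, x, N)   # v = g^x mod N is what the server stores

salt, v = make_verifier("alice", "correct horse")
# The server stores (username, salt, v); the password itself never leaves
# the client, and a stolen v cannot be replayed as a login credential.
```

The later proof-of-knowledge exchange never transmits the password either, which is exactly why a server breach exposes only verifiers, not reusable secrets.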
I wonder if a smart shareholder in LinkedIn is asking the execs the pertinent question about "shareholder value" depreciation due to what is arguably negligence by the LinkedIn designers...
@ Clive Robinson
Hmm. Interesting thought INFOSEC & business. I can't say it was obvious that teaching them to speak like businessmen would lead to Wall Street style IT thinking. Most that I've known personally who interact well with business counterparts don't suffer from this problem. I tried to improve myself in that regard by making my last degree one in Business. So, if it's happening, I don't think it's intrinsic to connecting IT to business, but I think we should probably watch out for it in the future.
It was a really good article. My last post to you showed I was in the middle of a major shift in my thinking on INFOSEC in business. Here I am adjusting to that when you throw something at me that will force another shift in mid-shift. Naturally, I think it will take a while for me to fully wrap my mind around this. (Might have to study some investing or contact a double MBA who used to visit this blog.)
The moats idea is immediately usable. However, even that we can drop a bit and focus on the real gist of the article: identify the most critical resources; concentrate INFOSEC on them. Most good INFOSEC practitioners have been doing this for a while, myself included. For example, I would push for sub-$1000 firewalls for the DMZ, then major investments into baking [acceptable] security into policies & operations. I'm sure thinking in terms of moats can help us model what's important even more quickly.
The play-to-lose mindset & the things that go with it bothered me, though. I have a hard time seeing that this applies entirely to INFOSEC. Here's why. For one, we aren't fighting the Almighty Market in its Omnipotence, Omniscience, and Omnipresence. ;) Our opponents are quite human (well, most 8). This means they exist on a range of skills, motivations & resources. Most attackers go for the low-hanging fruit or download kits made by sophisticated ones who spent much time finding a way in.
This means eliminating low-hanging fruit, which is usually affordable, gets rid of entire classes of both attacks and attackers. For instance, it's pretty straightforward to defend against SQL injections, & some platforms make it hard to do them & other attacks. Before these attacks were prevalent, how many massive databases were stolen in the news? Hardly any. Lax security practices opened up new avenues of attack to previously underskilled opportunists. Deny them something to hit & they hit nothing.
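To show just how cheap that particular defence is, here's a minimal sketch using an in-memory SQLite table with made-up data, contrasting string concatenation with a bound parameter:

```python
import sqlite3

# Hypothetical one-row users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

attacker_input = "alice' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the SQL itself,
# so the WHERE clause becomes always-true and every row matches.
vulnerable = "SELECT * FROM users WHERE name = '%s'" % attacker_input
print(len(conn.execute(vulnerable).fetchall()))               # 1 row: injection worked

# Safe: the ? placeholder binds the input as pure data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (attacker_input,)).fetchall()))  # 0 rows: no such user
```

One character of query style is the entire cost of eliminating the attack class, which is the "affordable" point above.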
The other issue is dealing with the more sophisticated. Suddenly, the author's position makes more sense. They almost always find a way in b/c they're clever. (Note: I'm talking about good black hats, not top spies or nation states. They usually win anyway.) These people will find obscure vulnerabilities in common areas like configuration, insecure protocols, application security, OS features, etc. So, what to do about them?
Well, the author suggests focusing on what matters most. If they take over a mail server, then signed emails, awareness, & controls in critical apps might mean they accomplish nothing. End-to-end IPSec + good monitoring might make typical exploration a painful process leading to detection. Microsoft SDL or Fagan SIP development methodology using quality platforms makes it really hard for them to find a vulnerability. Thin clients or app virtualization can ensure patching & monitoring go way better. And so on and so on.
Diversity is also a good strategy. We discussed it here before. I remember Bruce disagreed a bit on its importance. He was focused on the flawed coarse-grained diversity that's often promoted, with valid criticisms. I'm thinking of a bit finer-grained & automated diversity. We need to make the interfaces sensible & hide everything behind them. Then, we can swap out aspects of internals at will to confuse attackers about network, system, app platform, etc. Much work has been done in this area by academics & even commercial groups.
One of my extreme examples was for a group to use Red Hat Linux on POWER boxes, then have a Linux hacker with OS fingerprinting knowledge changing things to look like x86 Linux. Imagine the malware author's frustration when his best code fails to run in spite of him sneaking a shell of some kind on the system. A lesser one might be to use obscure, yet production solid, languages for implementing important services. Or put guards between trusted apps and untrusted network, database & apps, etc. They can be really fast if designed correctly. The ZeroMQ system, although not INFOSEC focused, is really showing potential for these purposes. (LOOK IT UP!)
So, I don't really think the author's connection between INFOSEC and investing is totally solid. I think we have plenty to learn from it where it applies. Particularly, organizations treating INFOSEC like a minor business function should probably embrace the author's points wholeheartedly before something happens to their business. ;)
For us, though, I think we need to integrate the defensive security investing and expect-to-fail mentalities into our existing knowledge/practices. The former should become conceptually easier to model thanks to moats, while the latter should take into account the type of attacker we're facing & whether the budget allows risk mitigation against the majority of black hats. Many times, it does: they just don't know what to invest in or how it pays off.
If you're analyzing password replacements, look at this excellent table from the Anderson group. A replacement must do well in a number of ways, more than apparent at first thought.
I think they'd try to hit the server anyway. There's enough social engineering material for enough important people to make it worth it for some. As for SRP, I had forgotten about it. The salted-hash method has worked well enough for mitigating the risk of passwords stolen from servers. It's also quite simple & implemented by many online libraries (which often leads to the winning protocol). At worst, it gives us plenty of time between the breach & them cracking our individual password.
So, your "why communicate them?" question is more practical. The answer: no reason to at all. Groups like OWASP are giving out so much free information, tools & code that there is really no excuse for a compromise like this. That LinkedIn was just getting around to using salted hashes is pretty damning of their ITSEC staff. A system exploit that undermined web apps or some new attack is understandable. Configuration, SQL injection, salted passwords... these are the basics & cost little to get right. They certainly had the money... just didn't care.
Nick P. wrote:
"Hmm. Interesting thought INFOSEC & business.... Naturally, I think it will take a while for me to fully wrap my mind around this. (Might have to study some investing or contact a double MBA who used to visit this blog.)"
I disagreed with the article's fundamental thesis: That investing is not about winning, but about not losing. That is only one of many possible investment strategies. If you want to be sure of not losing money, put it in US Treasury Bonds or FDIC-insured bank accounts. Your principal is safe, but the return is very low -- and the article's author totally ignores another risk factor that is taught in Finance 101, namely: Inflationary risk. 10-year Treasuries have been paying about 1.5 - 2% recently, but you don't get your principal back for ten years. By that time, those dollars might have lost more purchasing power due to inflation than the interest received.
For example, had you bought a $10,000 ten-year Treasury bond in 2002, when it matured in 2012 you would find that it would take about $12,800 to buy the same goods that your 10k would have bought in 2002. That's a 28% drop in purchasing power. (*Real* dollars, as the econogeeks like to call them, vs. "nominal" [non-inflation-adjusted] dollars that we usually think about.)
(Source: data.bls.gov/cgi-bin/cpicalc.pl )
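A quick sanity check of that arithmetic: a 28% rise in the price level over ten years implies roughly 2.5% average annual inflation, and correspondingly shrinks the real value of the returned principal:

```python
# Back out the average annual inflation rate r from the ten-year CPI
# figure cited above: (1 + r)**10 = 1.28.
r = 1.28 ** (1 / 10) - 1
print(f"implied average inflation: {r:.2%}")

# Real (2002-dollar) value of the $10,000 principal returned in 2012:
real_value = 10_000 / 1.28
print(f"real value of principal: ${real_value:,.2f}")   # roughly $7,800
```

At 1.5 to 2% nominal yield, the bond's interest doesn't cover that erosion, which is the inflationary-risk point.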
An 85-yr-old grandmother might definitely care most about preserving capital, regardless of low yield. A 22-yr-old person or couple just starting out is better advised to invest in a variety of risk/reward alternatives: Some, for preservation of capital; some, for income plus growth of capital (utility stocks being one example); some, for long-term growth (less-conservative stocks, expanding companies, etc.), and some, as a "fire insurance policy", to hedge against rampant inflation. Gold-mining stocks and gold-related investments fit this latter need.
Each investor determines her own tolerable risk/reward ratio, and to reduce risks, *diversifies*, both among categories and within each category. (not all eggs in one basket.) This parallels a certain ITsec concept that will show up in a bit.
Also, while the individual investor certainly is at a disadvantage to the insiders, overall, the stock market has done well over long periods of time. (Not day-trading or market-timing). The suggestion of a non-managed S&P Index fund is a good example. If the Big Guys know that Company X has some good or bad news coming, their purchases or sales cause the indexed fund to do the same thing, to maintain a consistent "weighting" of stocks in the portfolio -- thus riding the Big Guys' coattails, although admittedly with a bit of a delay.
Someone once said that the stock market is like a giant casino in which the odds are in favor of the player. This is true, if one takes the long-term approach, and counters what that article said, that investing is a losing game. (Then why does anyone do it?) However, for those trying to make a killing next week, yes, that's about as good as rolling dice.
So, I don't really think the author's connection between INFOSEC and investing is totally solid.
Agree 100%, for the reasons stated above.
ITsec admins should do the same kind of risk/benefit/cost analysis. I hope a simple example will suffice. I use Yahoo mail for daily use, and a PGP-encrypted account for sensitive things. The Yahoo account isn't very secure, but I never send messages by which I could be harmed if an eavesdropper got them. The PGP is a bit more of a PITA, especially because most of my family and RL friends are non-tech, don't have such an account, don't understand the need for one, and can't be bothered. So I tell them to snail-mail, fax, call from landline to landline, etc. instead.
IOW, I expend very little effort on securing non-critical info (Yahoo), and expend more time and effort on securing sensitive communications. (PGP)
Similarly, the ITsec admin should determine the harm, *including long-term harm from loss of confidence*, and expend proportionate resources to safeguard each value-class. But the article's author did note quite correctly that Wall Street takes the short-term view, and I agree with him there. Companies -- especially *security companies* -- who get hacked may lose customers, gain fewer new ones, etc., and this effect lasts well beyond next Friday.
I think we have plenty to learn from it where it applies. Particularly, organizations treating INFOSEC like a minor business function should probably embrace the author's points wholeheartedly before something happens to their business. ;)
Absolutely. The most precious asset that *any* business has is its good name and reputation. Once lost, it can be a costly and lengthy process to restore -- if ever.
For us, though, I think we need to integrate into our existing knowledge/practices the defensive security investing and expect to fail mentalities.
No question about being very defensive of valuable information, and of assuming that some defenses will fail. There is a very old prescription for this; I believe it's called "defense in depth". :)
See the parallel to "diversification" of investments mentioned earlier?
Far greater minds than mine have explored this topic, and the INFOSEC industry needs to tell the customer the unpleasant truth: "Some of our measures WILL fail at some time. So, we have prepared a complete package of deep defense, beyond mere firewalls, including segregating information; employee need-to-know; multi-factor authentication; air-gapping of certain data storage... (You fill in the rest. I'm here as a business consultant, not an ITsec consultant. ;)
I hope that this is what you wanted.
@ Nick P,
Was thinking about subversion for a while. My conclusion is: For individuals, protecting against subversion is an impossible task. For a government, the task is formidable but not insurmountable. The (necessary, but not sufficient) condition is that they have to fully control the design, manufacture, test, and deployment of the hardware / software. This implies nothing is outsourced, including the fabs.
@Double MBA Who Used To Visit at
" the INFOSEC industry needs to tell the customer the unpleasant truth: "Some of our measures WILL fail at some time. So, we have prepared a complete package of deep defense...."
Really... Hmmm. Based on my own experience, the investing community actively ignores all unpleasant truths; think of a kid with his head buried in the sand, hands over his ears, screaming "waw, waw, waw", and you've got an accurate image.
I had some discussions about 4 years ago on the concept of enumeration attacks on "dark pools" and "buy side algorithmic signature spreading". I was able to show that their attempts to hide their BUY signatures were worse than worthless; in fact, the information side channels created by actively hiding the BUY not only showed conclusively who the buyer was, but also what their target accumulation rate was, as well as various intermediate buy/sell thresholds. Talk about a free gift to the HF traders.
Of course they didn't want this information released to their Pension fund customers, otherwise it would be kinda hard to demand a premium for providing these trading tools.
I was addressing my comments, as per the article discussed by Clive and Nick P., to the business community as a whole: manufacturers, retailers, service industries, ... etc. -- and not to the investment community per se.
I agree that the investment community often has the faults you describe -- I've witnessed them from the inside. (I like to tell people, "Lehman Brothers went bankrupt, and I'm still in business. Do the math." :)
So my post was aimed at securing the info, internal communication, web sites, etc. of General Motors or Wal-Mart or whatever, who at least realize that they don't have this knowledge, as opposed to the know-it-all Big Investors, who seem to fail regularly.
Thanks for the reply.
Double MBA Who Used To Visit at
"So my post was aimed at securing the info, internal communication, web sites, etc."
Who cares about the actual security of their information systems if I can derive exactly the trading information that I need from the side-channel leakage caused by their operations and an educated guess about spreading algorithms?
This is akin to having strong military crypto to protect the message but forgetting that an adversary can often infer exactly what he needs to know by studying traffic endpoints and patterns.
In the case I'm talking about a well respected bank wasn't listening but just up the road at Stony Brook a little "technology" company was all ears.
I think you're missing the point of the discussion. We're not really talking about securing investment firms. Clive posted a link above where an investor says good INFOSEC and investor mindsets are the same. He draws connections and conclusions. Interesting read.
I asked one of the investors I know with some INFOSEC knowledge to review the article & give his opinion. I had already shot a few holes in it and figured he might have a few insights while delivering the fatal blow.
@ wael on subversion
You've found the simplistic and brute force answers. Now, you must derive the better answers. For starters, which potential subverters would cause you big problems and how would they subvert you? Can this be used to prevent subversion?
Example: the NSA and the Chinese govt might hack your computer. You're a US company with valuable IP in development and plan to use strict security procedures. Is the NSA a problem? Unlikely: they have your back unless you compete with a US contractor. That means buying security-critical boards from a DOD-certified manufacturer like Freescale significantly reduces hardware/firmware subversion risk. This is just one situation.
Note: if my COTS stuff ain't subverted, there must be hope for the rest of you. ;)
@ Nick on subversion,
First, I need to understand what you mean by "if my COTS stuff ain't subverted"! Will have a delayed response due to travel and another encounter with our friends at the airport ...
I think the other part of my post was more important. The meaning of that minor statement is that there is no evidence of subversion on my friends' systems or mine. We took few steps to prevent it. Why?
Large-scale subversion AND usage of it hasn't seemed to take place in a noticeable way for PCs in almost a decade, that I've seen. So foreign and local manufacturers of personal PCs (minus TPM or UEFI) have so far been safe. In practice, subversion usually takes place when a serious group has reason to put serious resources into a target (or targets) they're focusing on.
So, the average system probably isn't subverted by default. Most such subversions today use physical access. The national tools like Stuxnet use rather obvious comms techniques rather than clever covert channels. You probably aren't subverted unless you're extremely important and have something worthwhile.
You might say, why worry if it is so rare? Well, Clive's designs and mine focus on high robustness, or beating top-end attackers. The best way to beat a high-assurance system is subverting it at some point in development or distribution. My designs also mix assurance techniques and obfuscation, meaning subversion is the most likely attack to have a payoff (they'd think).
Even at our level, trade-offs must be considered carefully. Assumptions must be considered even more so. And, finally, this was hastily typed on a phone without time for revision, so don't take it all to the letter.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.