Friday Squid Blogging: Dissecting a Squid

This was surprisingly interesting.

When a body is mysterious, you cut it open. You peel back the skin and take stock of its guts. It is the science of an arrow, the epistemology of a list. There and here and look: You tick off organs, muscles, bones. Its belly becomes fact. It glows like fluorescent lights. The air turns aseptic and your eyes, you hope, are new.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on July 6, 2012 at 4:58 PM • 117 Comments

Comments

Petréa Mitchell • July 6, 2012 5:06 PM

The pixelated camouflage the US Army has been using for the last 8 years... doesn't actually work. Allegedly there were tests on a variety of patterns, but some highly placed person decided on pixelation before the tests were completed because the Marines had it.

Not fully explained in the story is how the Marines wound up with a pixelated camo pattern which, presumably, doesn't hide them any better than any other soldier's.

Petréa Mitchell • July 6, 2012 5:09 PM

A lot of people discovered this week that giving Facebook "access" to their smartphone contacts meant it could change their contacts. Which apparently it did, and then lost some of the e-mail which was sent to it as a result. Here's a good roundup.

(For those looking to leave Facebook at this point: the center of gravity is reportedly shifting towards Tumblr and Twitter.)

Petréa Mitchell • July 6, 2012 5:22 PM

New candidate for the biggest financial fraud in history: the LIBOR fixing scandal. Here's an overview, and a more detailed explanation of what's going on.

The tl;dr version is that a critical bit of the world financial machinery relies on banks estimating what it costs them to borrow money and then reporting it on the honor system. You can guess the general outline of the scandal from there. Allegations include misreporting both for self-preservation (honest numbers would have indicated how bad the situation was for banks that were in trouble) and for collusion (e.g., helping out someone who needed the rate to be low or high on a particular day for a deal to be more favorable). The second article also takes a look at proposed fixes.
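
For concreteness: the fix is computed as a trimmed mean of the panel's submissions (roughly, the top and bottom quartiles are discarded and the rest averaged), which blunts a lone liar but not a small cartel. A toy Python illustration, with every figure invented:

```python
# Toy model of a LIBOR-style fix: sort the panel's submitted rates,
# discard the top and bottom quartiles, average the middle.
# All numbers are invented for illustration.

def trimmed_mean_fix(submissions):
    s = sorted(submissions)
    q = len(s) // 4
    middle = s[q:len(s) - q]          # drop top and bottom quartiles
    return sum(middle) / len(middle)

honest = [3.10, 3.12, 3.15, 3.15, 3.17, 3.18, 3.20, 3.21,
          3.22, 3.24, 3.25, 3.27, 3.28, 3.30, 3.32, 3.35]
print(f"honest fix: {trimmed_mean_fix(honest):.4f}")

# Four colluding banks each shade their mid-pack submission down 25bp.
# A lone absurd lowball would be trimmed away; coordinated shading of
# mid-pack figures still moves the published rate.
rigged = sorted(honest)
for i in (6, 7, 8, 9):
    rigged[i] -= 0.25
print(f"rigged fix: {trimmed_mean_fix(rigged):.4f}")  # lower, despite trimming
```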

Clive Robinson • July 6, 2012 5:36 PM

OFF Topic:

Bruce,

I don't know if you have seen this site,

http://safeman.org.uk/

But it contains a history of safes and safe breaking in the UK. It also has an attached site,

http://peterman.org.uk/

Which has a potted history of UK "petermen", safe crackers who ended up using explosives of various types.

kingsnake • July 6, 2012 5:39 PM

The "So You Want to Be a Security Expert" article put the Byrds in my mind so that I couldn't get them out ...

-----

So you want to be a "security expert"?
Then listen now to what I say
Just get a Schneier book
Then take some time
and learn how to hack
with your system locked tight
And your router a-light
It's gonna be all night

Don't sell your soul to the Company
Who are waiting there to buy vaporware
And in a week or two
If you crack the net
The feds will tear you apart

The price you paid for your riches and fame
Was it all a strange game?
You're a little insane
The money, the fame, the public disdain
Don't forget what you are
You're a "security expert"!

-----

Not my best work, but the best I could come up with in about 15 minutes. :-)

kingsnake • July 6, 2012 5:40 PM

Hmmm ... separate lines in the comment box got smooshed together. Oh well ...

Wael • July 6, 2012 6:07 PM

@ Kingsnake

For some reason I have the same problem. The blog is not prose / poem / limerick (11221) friendly.

Clive Robinson • July 6, 2012 6:09 PM

@ Petréa Mitchell,

Allegations include misreporting both for self-preservation ... and for collusion...

In your comment about LIBOR you forgot to mention the (now ex) head of Barclays Bank PLC, one "Bob Diamond", who was called to testify in front of MPs (who had stitched him up to force him to resign).

Well, to their horror, he briefly let the "cat out of the bag" and gave evidence that strongly suggests that on at least one occasion the manipulation of LIBOR was at the behest of the previous (Labour) UK Government...

I have a feeling that this story is "going to grow legs"...

@ kingsnake,

Hmmm ... separate lines in the comment box got smooshed together. Oh well ...

I notice it's been doing it for about a week now, and it's very annoying as it looks like we'll have to resort to "list tags" which are just a pain :-( so I guess the question is,

"What's the Moderator been up to?"

NobodySpecial • July 6, 2012 10:42 PM

@Clive Robinson - "calls may be recorded for security and training purposes", hope he kept the tape!

Nobodyspecial • July 6, 2012 10:57 PM

@Petréa Mitchell - the Marines' pixelated grey camo works. The army wanted a modern, computer-ish pixelated camo as well.

But the army's corporate colors are green and brown - so it got green+brown pixelated camo which didn't work.

andrews • July 7, 2012 3:27 AM

@Petrea Mitchell and Nobodyspecial - the Marine Corps woodland (green) MARPAT camis are slightly superior to the older camis. The green digital camo works better in the dark while wet, which happens more often than you might think. The other camo patterns tend to look like a dark black blob. A large reason the USMC changed patterns was that we wanted to distinguish ourselves from the other services. The "enemy" also understands the differences between the army and Marines.

Clive Robinson • July 7, 2012 3:35 AM

ON Topic,

Having seen Bruce's,

This was surprisingly interesting

I thought I'd give it a read in the morning. So there I am, munching my breakfast with one hand and scrolling down the article on the mobile with the other.

No worries, I thought, I've "dissected" a few squid "for the cooking pot" in my time, so I'm not going to read about anything I've not actually seen and in some cases enjoyably eaten (lightly battered and deep fried is nice, or slowly stewed and then baked in a loaf of bread etc :-)

Then I got to the bit about licking the pus off the back of a cockroach... and for some reason my breakfast, which up to that point I'd been enjoying, suddenly became distinctly unappetizing...

Tom • July 7, 2012 4:04 AM

Yesterday there was an interesting documentary about airport security on Belgian (Dutch-language) national television.
http://www.canvas.be/programmas/terzake
(the episode of 06/07/2012 starting at 21:25 min).
They showed in rather detailed fashion how easy it was to get weapons on an airplane in France, also on intercontinental flights to the USA (around 33:00 min). When they confronted the TSA with this information I found their reaction rather hypocritical...
The conclusion of the documentary was that it is too costly to have adequate screening. And that most of the screening done is security theatre to give us peace of mind.

Unfortunately most of it is in Dutch, but people interested in airport security should give it a try :o)

rob • July 7, 2012 4:28 AM

In the UK, thousands of people were held up in traffic queues for several hours, a major motorway was blocked in both directions, and there was a full-scale turnout by anti-terrorist police and, according to some reports, the military, because someone was using an eCigarette. This story was all over the media, but in the nature of 24-hour news it is quickly disappearing. Serious People on the radio this morning were saying that the response 'was proportionate' and that we must all be on the lookout for 'terrorist threats'. The threat was reported by another passenger who phoned the police on their cell phone.

Not difficult to envisage a couple of terrorists, one with an eFag and one with a mobile phone creating havoc without actually doing anything illegal or disapproved of by the authorities.

Wael • July 7, 2012 9:08 AM

@ Clive Robinson

"Then I got to the bit about licking the pus of the back of a cockroach"

I hear you. Reached that point too, but luckily after my breakfast. I kept telling myself it's not pus, it's guacamole... Didn't help much. Drank some pink stuff -- a generic brand of Pepto Bismol(R)

More on the article... I did not think equipment would freeze at the depth squids live at!

Petréa Mitchell • July 7, 2012 10:01 AM

Clive Robinson:

The second article does touch on the possible collusion of British regulators (and other national ones) but the allegations on that point are so vague as of yet that it didn't seem necessary to include in the summary.

Moderator • July 7, 2012 11:52 AM

so I guess the question is, "What's the Moderator been up to?"

Blundering about like a bull in a china shop, apparently.

I think once this comment triggers a rebuild, the missing line breaks will reappear above.

dbCooper • July 7, 2012 12:58 PM

Subverting airport security via the ground-crew method seems to have worked for about six weeks in Omaha, NE.......

"Court documents obtained by the KETV NewsWatch 7's I-Team detailed allegations made by an FBI special agent, which show that Foster pretended to be a United Airlines employee at Eppley Airfield for six weeks starting in April 2012. Foster is accused of accessing secured areas and computers at Eppley Airfield, according to the court documents."

http://www.msnbc.msn.com/id/48093487/ns/...

Anon • July 7, 2012 1:06 PM

http://bash.org/?949560

Bash.org is a quotes website, usually quotes off IRC involving some form of impressive stupidity. In this case: impressive stupidity, a USB magstripe reader, a credit card, and IRC.

Clive Robinson • July 7, 2012 2:55 PM

@ Wael,

I did not think equipment would freeze at the depth squids live at!

In this respect "freeze" has a couple of meanings, the first being the literal temperature "freeze", as in water going solid. The second simply means "to stop" moving/working, from the mechanical terminology, which if I remember correctly derived from the biological meaning, as in "frozen to the spot/rigid", which in turn was derived from the first meaning, where a side effect of water freezing is that it stops moving in the usual way...

You also need to remember that ~32ft of water is the equivalent of one atmosphere of pressure, so it mounts up quite quickly as you descend into the depths. And with respect to this, the second thing to remember is that explosives have been and still are used to generate pressure waves to "cold weld" metals together, and if you get the pressure right soot turns to diamonds (with a little extra thermal help).
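
To put rough numbers on that, a back-of-the-envelope sketch (seawater density varies with salinity and temperature, so treat the figures as approximate):

```python
# Back-of-the-envelope hydrostatic pressure at depth: every ~10 m
# (~33 ft) of seawater adds roughly one more atmosphere.

RHO_SEAWATER = 1025.0   # kg/m^3, a typical value
G = 9.81                # m/s^2
ATM = 101325.0          # Pa, one standard atmosphere

def pressure_atm(depth_m):
    """Absolute pressure (in atmospheres) at a given depth in metres."""
    return 1.0 + (RHO_SEAWATER * G * depth_m) / ATM

for depth in (10, 100, 1000, 10900):   # 10900 m is roughly Challenger Deep
    print(f"{depth:6d} m : {pressure_atm(depth):8.1f} atm")
```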

Thus one problem with all underwater equipment used at reasonable depth is how to operate it from/at what is effectively a low-pressure point.

Whilst a hollow ball of steel with sufficient thickness will not crush at most depths, the minute you make a hole in it for a camera to look out of you start getting problems, unless your optical material has similar properties to the steel or you take other precautions. Likewise drilling holes to take electrical or mechanical signals in and out to operate the equipment. Then there is the issue of maintenance hatches etc...

Then on top of that there are thermal issues of working at depth. For instance batteries really do not like the cold: their capacity can quickly drop below 20% of that at room temperature when below some technology-dependent figure (NiCd and lithium battery packs used in high quality "TV cameras", for instance, are down to about 30% at the temperatures of Arctic autumn and spring, as wildlife documentary makers have found out).

Then the electronics itself can be very temperature-sensitive. For instance "voltage reference" sources used for A-D converters can easily be susceptible to temperatures even a few degrees outside of their design range (0-40C for the majority of consumer-grade equipment). Remember that a few years ago hackers found that various "secure electronics" features could be bypassed by putting the electronics in a domestic freezer (approx -18C) overnight. Something the original designers had not considered.

Outside of consumer-grade equipment are the "industrial" and "military" temperature grades, but equipment in these temperature brackets is either custom built or inordinately expensive (have a look at the price of ordinary civil UHF two-way radios and then those that are Mil temperature spec; it will make your eyes water).

Having designed equipment for Mil, Oil/Chemical industry and Fast Moving Consumer Electronics (FMCE) use, I can assure you the price differential is in many cases justified.

As an example, in FMCE it is not unusual to see the base bias on a transistor being "current bias", as this saves the PCB space/cost of a resistor and improves battery life. However for "outdoor use" in winter the circuit will not work, because current bias is way too temperature-sensitive, so you have to have the extra resistor and much higher current of "voltage bias". Likewise the choice of capacitors in oscillator circuits.

If you look at the basic "inverter circuit" oscillator for quartz crystals at 32kHz, you will actually find that the capacitors and resistors are usually selected to make the inverter an RC oscillator close to or at 32kHz. The reason for this is so they actually start up and thus excite the crystal sufficiently close to its desired resonant frequency that it "pulls in"; the significant change in the quartz crystal's impedance in the circuit then causes the quartz resonator to become the frequency-selective component, not the RC time constant. If the capacitors are too temperature-sensitive then the RC oscillator frequency may not get close enough to the quartz crystal frequency for the impedance change to happen...

Similar issues apply to other oscillator circuits and tuned amplifier circuits, likewise some "spring coil" inductors. A good circuit design is such that any change in characteristics with temperature is known and the appropriate temperature-coefficient capacitors are used in a way such that the resonant frequency remains as constant as possible over the desired temperature range. Even things like the beeswax used to reduce "microphonics" in inductors has physical temperature coefficients that can change the electrical characteristics of a circuit...
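
To put rough numbers on how far a poor-tempco capacitor can drag that start-up RC frequency, here is a crude sketch. The component values and the linear tempco model are hypothetical, and f = 1/(2*pi*R*C) stands in for whatever constant the real inverter topology has:

```python
# How capacitor tempco shifts the start-up RC frequency aimed at a
# 32768 Hz crystal. Component values and the linear tempco model are
# hypothetical; f = 1/(2*pi*R*C) stands in for the constant of the
# actual inverter topology.

import math

R = 100e3          # ohms (hypothetical bias resistor)
C25 = 47e-12       # farads at 25 C (hypothetical)
TEMPCO = -750e-6   # per degree C; NP0/C0G ceramics are nearer +/-30e-6

def rc_freq(temp_c):
    c = C25 * (1.0 + TEMPCO * (temp_c - 25.0))
    return 1.0 / (2.0 * math.pi * R * c)

XTAL = 32768.0     # Hz, the crystal the RC start-up has to excite
for t in (-20, 0, 25, 60):
    f = rc_freq(t)
    print(f"{t:4d} C : {f:8.1f} Hz ({(f - XTAL) / XTAL * 100:+5.2f}% from crystal)")
```

If the drift carries the start-up frequency outside the band the crystal can pull in from, the oscillator never locks.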

Clive Robinson • July 7, 2012 4:13 PM

@ Petréa Mitchell,

... but the allegations on that point are so vague as of yet...

Not sure how your local politicos/civil servants go about this sort of thing.

But in the UK the wording as given by Bob Diamond is exactly what you would expect in a "politically inspired communiqué" from the likes of a "Civil Servant" (who are supposedly politically neutral) to a business/industry chief.

Which is why, when Bob passed it on, there was a "jump to it" attitude to comply.

Acting correctly on such political hints is where recommendations for knighthoods from politicos to the awards and honours committees arise from.

The reason such phrasing is used through an intermediary is to allow for "plausible deniability". When we see the offending Civil Servant appear before the enquiry he will use the line that "too much was read into the otherwise innocent statement" he had made, and it's an almost certain bet the politicos asking the questions will say "just so" and give him a nice comfortable time. There might be a faux pretence at serious cross-questioning for the journalists, but nothing that the politicos know the civil servant won't easily be able to refute/rebuff.

Such is the game these people play. We know this because we have seen it all before countless times, and also because we know the politicos used the intermediaries of the Governor of the Bank of England and the head of the FSA to send exactly the same sort of message to the other directors and major shareholders of the bank to cause Bob Diamond to resign.

The last time we saw this sort of game played out big time was over Iraq and the "dodgy dossier", where Dr David Kelly made the mistake of pointing out that the report was at variance with the known facts and shortly thereafter was found dead under a tree with his wrists slit, and the head of the BBC was forced into a position where he resigned. Oh, and the head of that enquiry became known as "Lord Whitewash" because he did exactly what his political masters wanted, which was that they should be exonerated. And yet another reason to call the then PM "Teflon Tony".

Clive Robinson • July 7, 2012 4:59 PM

@ Moderator,

Blundering about like a bull in a china shop apparently

Err no, and I'm sorry if that's the impression I gave.

I've sometimes been asked why I don't have my own blog, and my answer has fairly consistently been,

1, There is a lot of work involved with getting original content together.
2, Keeping on top of the related admin functions, including those related to security, is also a major issue (one that gets worse the more visibility/popularity a site has).

So no, I don't envy the task, nor would I condemn anyone who works hard to provide such resources to others.

Also I've been caught out a number of times by functionality changes on software upgrades etc, so I'm aware that "smooth running" carries a significant overhead in testing of things that don't make it into the manual, release notes or upgrade/patch info. Some I've even brought on myself by making a maintenance upgrade of software I had written myself some years previously (one particularly embarrassing one was "=" instead of "==" in a section of code that did low-level memory management as part of garbage collection).

Further, I had the misfortune to once work for a company that had an online product with so many conflicting configuration options, changing almost weekly, that even the "programmers" could not tell you from day to day what might be affected. It was so bad that the "customers" were not allowed to change things; it was a job for the Tech Support Staff, who received no training or support from the programmers. The programmers also had an attitude that "testing was for wimps". Needless to say it was not a happy place, and the company appears to have reorganised itself virtually out of existence...

Ronnie • July 7, 2012 10:01 PM

TrustChip (by KoolSpan) was mentioned in the July/August 2012 Technology Review.
http://www.technologyreview.com/tomarket/428250/...
"No ordinary memory card, the TrustChip can upgrade any phone to make super-secure encrypted calls and data transfers—which would usually require expensive specialized mobile devices. Making encrypted connections requires the phone on each end to have a TrustChip installed in its memory slot. Apps can then route calls and messages via the chip and its 32-bit encryption processor. The product is aimed at organizations, like security services and banks, that worry about eavesdropping. "

http://www.koolspan.com/trustchip/

KoolSpan was previously mentioned (in 2005)
http://www.schneier.com/blog/archives/2005/03/...

Clive Robinson • July 8, 2012 9:54 AM

@ Ronnie,

TrustChip (by KoolSpan)...

What worries me is the lack of information needed to decode what,

Apps can then route calls and messages via the chip and its 32-bit encryption processor.

means in reality.

There is also the point that this device is not using the smartphone in the way intended and as such I can think of a number of attack vectors.

For instance, how does the app ensure a secure channel exists between the microphone and the SD card, and from the SD card to the earpiece/speaker? From what I can tell of the current leading phone OSes this is not going to be easy unless you have "root privileges", which you can only get by "jailbreaking" the phone...

Also, how does the app stop the phone supplier from downloading an I/O device driver "shim" that effectively puts a "T-piece in the pipework", copying the sensitive data to some other hidden app or other hidden comms channel that is either "real time" or "store and forward", as the CarrierIQ software did with keypress and other data...

So until a lot more detailed technical information on the design is released by KoolSpan, on which a sensible evaluation can be performed, I for one will treat the product as "unknown" at best, which in turn means I won't be using it any time soon (if ever)...

Clive Robinson • July 8, 2012 10:16 AM

@ Wael,

With regard to the C-v-P issue, let's take a sideways look for a moment.

As Nick P has pointed out you can get a measure of security with various software tool chains to build the software for the Apps and OS.

Though with the way web browsers etc are used these days, they need to employ exactly the same techniques that OSes do for user separation/security for individual window separation/security.

That is, as users no longer "log in to run a single app", the security has had to rise into the application level, or the presentation apps need to sink down into the OS security layer, depending on your viewpoint.

However there is still the issue that once malware (either put into the software during design or injected subsequently via flaws) has access, some or all of the RAM and secondary storage such as the HD etc become available.

Whilst HDs can be encrypted, there is for most implementations the issue of RAM and the encryption keys.

Have a look at these two papers,

1, TRESOR - http://www.usenix.org/event/sec11/tech/...

2, CryptKeeper - http://tastytronic.net/~pedro/docs/...

And have a think about how TRESOR would improve CryptKeeper.

Whilst it won't produce a fully encrypted RAM it will certainly reduce the attack surface considerably.

Moderator • July 8, 2012 12:36 PM

I'm sorry if that's the impression I gave.

It wasn't. My bull-in-a-china-shop comment was prompted by realizing I'd created the problem by "fixing" something that wasn't, in fact, broken.

Wael • July 8, 2012 5:33 PM

@ Clive Robinson,

The deepest sea area was reached (or descended to - what is the antonym of "reach" in that context anyway?) by a manned submarine with cameras and electronic sensors. Search for "Deepsea Challenger". As for transistor biasing and heat compensation, and material properties under temperature and pressure variations, those can be designed for too, although I have not encountered any pressure limitations on solid state devices. I am a HW engineer by education and early career (discrete components, BJTs, FETs,...). I mainly worked on the analog side with high frequency (at the time) waveguides, microwaves, microstrip and slot antennae, Smith charts, so I "dig" what you say... Oh the good old days, when men were men (ok, and women were women) and people just wrote their own device drivers :) Been stuck with software and firmware -- what you call "code cutting" -- since then...

As for C-v-P... So we can still talk about that. I promised Nick P I would drop that analogy, but I need a way out. Hmmmm! I see it as a model, not an analogy. Yea! That's it! So out with the analogy. C-v-P is a model we can talk about, although you almost confused me when you referred to the Castle as a Fortress in a recent post. Stay tuned... Maybe we can sneak in an iteration or two while Nick P is on vacation :)

Clive Robinson • July 8, 2012 8:48 PM

@ Wael,

I am a HW engineer by education... ...been stuck with software and firmware since then...

Yup, me too, but I moved on from "software engineering" because the bosses wanted the software equivalent of "junk food tasteless pap" and I wanted to make "healthy enjoyable food". And when it came to the choice of "take the money and run" or the "ethical approach"... well let's just say I walked... into a different career (which I'm probably going to do again soon).

[Imagine if you can a conversation between me and the MD of a fairly large organisation. He was bemoaning the deficiencies of Micro$haft products and how it was costing the company $X. I (somewhat peeved, as he had buttonholed me at a social function that was not work related) simply said "What do you expect, they are simply doing to you as their customer what you are doing to your customers". Let's just say it was not career enhancing].

Anyway, the issue with pressure is mainly twofold: the first part is the effects of static pressure, the second significant changes in pressure. For instance a JEDEC TO-33 style package with glass base seals, of the kind on many 1-5W transistors, will fail at quite moderate static pressure when compared to what mil subs fairly routinely have on their pressure hulls. And if you look at the likes of the cables, the changes in pressure cause them to expand and compress, which has similar life-shortening effects to bending (as well as changing their characteristic impedance sufficiently to cause return loss issues). And obviously the changes in cable diameter due to pressure directly affect the use and design of sealing glands into equipment housings.

Thus external sensors on submersibles are effectively special/custom designs, and in some cases it's a custom package that is machined to take a standard part, i.e. some low-cost ROV motors are internally actually the same as some bilge pumps, just in different housings, and many cameras are low-light security-style cameras in specialised casings, whilst feedback systems on actuators are often custom designs including such extras as water ingress detectors.

The reliability of ROV-style submersibles has a strong correlation not just to depth of operation but also to the number of descents/ascents, and is why some systems you can only rent, and they come with a complete set of spares, tools and technicians along with the operators. When I was working in the Oil/Chemical industry I did some design on topside, head-end equipment and dabbled in the electronics side; the guys doing the design for the ROVs etc were a fatalistic, morose lot who worked on the assumption it was all going to go horribly, horribly wrong the minute you went to sea. For them success was measured in sleep time (or the lack thereof) when offshore. I suspect that things have improved a bit in the past twenty years or so due to improvements in materials science etc, but a look at job ads for ROV/submersible technicians tells a few stories if you can read between the lines.

Ronnie • July 8, 2012 9:12 PM

@ Clive Robinson
Thanks for the analysis - I thought it was interesting but didn't delve into it too deeply. Maybe Bruce can use it as fodder for The Doghouse.

Nick P • July 8, 2012 9:50 PM

@ Ronnie and Clive

I second Clive's complaints about KoolSpan. I swear I debunked them with many of the same points a while back on this blog but couldn't find the link. Might have found the criticisms too redundant to post. ;)

Anyone wanting smartphone encryption, talk to OK Labs or INTEGRITY Global Security. Both claim to have solutions that use microkernel platforms to isolate the untrustworthy main OS from security-critical code. I'd assume a smartphone-based solution is insecure, though.

One can, however, make a "mobile" semi-secure comms solution pretty easily. It's mobile in the sense that it takes up little space. Red-black design. An untrusted phone or laptop is fine for the Black transport layer. Red is two logical units: an interface/voice/text part & a security (e.g. crypto) part. The basic design is to run VOIP with careful compression over IPsec/TLS on hardened OpenBSD, wired to the Black device with non-DMA & a careful protocol. Both parties need the device. Far from perfect, but more secure than any smartphone solution & supplier neutral.
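
A toy sketch of that red/black split (not the actual design; the one-time-pad XOR below is a stand-in for the real crypto, there only to show that the Black side never handles plaintext):

```python
# Toy model of the red/black separation: the Red unit holds the keys
# and does the crypto; the Black (untrusted) transport only ever sees
# ciphertext. The one-time-pad XOR is a placeholder for the real
# crypto (IPsec/TLS above), not something to use as-is.

import os

class RedUnit:
    """Trusted side: holds key material, encrypts before release."""
    def __init__(self, pad):
        self.pad = pad      # pre-shared pad, one byte per plaintext byte
        self.used = 0

    def crypt(self, data):
        key = self.pad[self.used:self.used + len(data)]
        self.used += len(data)              # never reuse pad bytes
        return bytes(d ^ k for d, k in zip(data, key))

class BlackTransport:
    """Untrusted side: a phone or laptop that just moves opaque bytes."""
    def send(self, blob):
        # It could log or leak the blob; it still learns nothing about
        # the plaintext as long as the Red side's crypto is sound.
        return blob

pad = os.urandom(1024)          # shared out-of-band between both parties
alice, bob = RedUnit(pad), RedUnit(pad)
wire = BlackTransport()

ct = wire.send(alice.crypt(b"voice frame 0001"))
print(bob.crypt(ct))            # XOR with the same pad bytes decrypts
```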

Ronnie • July 8, 2012 11:15 PM

We need an official Tor discussion forum.

I didn't see this issue mentioned in Roger's *latest* notes post, so for now, mature adults should visit and post at one or both of these unofficial Tor discussion forums. These tinyurls will take you to:

** HackBB:
http://www.tinyurl.com/hackbbonion

** Onion Forum 2.0
http://www.tinyurl.com/onionforum2

Each tinyurl link will take you to a hidden service discussion forum. Tor is required to visit these links; even though they appear to be on the open web, they will lead you to .onion sites.

I know the Tor developers can do better, but how many years are we to wait?

Caution: some topics may be disturbing. You should be eighteen years or older. I recommend you disable images in your browser when viewing these two forums[1], only enabling them if you are posting a message, but still be careful! Disable javascript and cookies, too.

If you prefer to visit the hidden services directly, bypassing the tinyurl service:

HackBB: (directly)
http://clsvtzwzdgzkjda7.onion/

Onion Forum 2.0: (directly)
http://65bgvta7yos3sce5.onion/

The tinyurl links are provided as a simple means of memorizing the hidden services via a link shortening service (tinyurl.com).

[1]: Because any content can be posted! Think 4chan, for example. onionforum2 doesn't appear to be heavily moderated so be aware and take precautions.

--

DNSCrypt for Linux, Windows, Mac (from opendns.com)

"In the same way the SSL turns HTTP web traffic into HTTPS encrypted Web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks. It doesn’t require any changes to domain names or how they work, it simply provides a method for securely encrypting communication between our customers and our DNS servers in our data centers. We know that claims alone don’t work in the security world, however, so we’ve opened up the source to our DNSCrypt code base and it’s available on GitHub"

https://www.opendns.com/technology/dnscrypt/

- Download the right package for your Linux distribution:
https://blog.opendns.com/2012/02/16/tales-from-the-dnscrypt-linux-rising/

https://github.com/opendns/dnscrypt-proxy/blob/master/README.markdown
https://github.com/opendns
https://blog.opendns.com/2012/05/08/dnscrypt-for-windows-has-arrived/
http://techcrunch.com/2011/12/05/...
http://www.h-online.com/security/news/item/...
http://blog.opendns.com/2012/02/06/...
https://www.linuxquestions.org/questions/debian-26/dnscrypt-930439/
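
For the Linux packages above, the client-side change after starting the dnscrypt-proxy daemon is small: point the system resolver at the local listener. A minimal sketch, assuming the proxy listens on 127.0.0.1 (its usual default, but check your build's docs):

```
# /etc/resolv.conf -- hand all DNS queries to the local dnscrypt-proxy,
# which encrypts them on the way to OpenDNS's resolvers
nameserver 127.0.0.1
```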

Petréa Mitchell • July 9, 2012 11:53 AM

Autolykos:

Well, they do keep saying the Army needs to adapt more to fighting in urban environments.

Petréa Mitchell • July 9, 2012 11:59 AM

andrews:

A large reason the USMC changed patterns was that we wanted to distinguish ourselves from the other services. The "enemy" also understands the differences between the army and Marines.

Wouldn't it be better to all wear the same uniform and let the enemy go nuts wondering, then, rather than helpfully pointing out who they should shoot at first?

WiskersInMenlo • July 9, 2012 1:10 PM

Check out this video on YouTube for a squidliceous musical interlude.

http://www.youtube.com/watch?...

Pulsating Pigment Cells of a Dead Squid Are More Beautiful Than You’d Think
Chromatophores are muscle-controlled pigment-filled cells that allow cephalopods to blend in with their surroundings or even communicate with others. Now, you can see the cells expanding and contracting up close in this mesmerizing video Michael Bok, a graduate student at the University of Maryland, filmed of a dead Longfin Inshore squid.

Nick P • July 9, 2012 1:48 PM

@ Ronnie

Thanks for the links. I might visit them if I dare put Tor on my machine again. Fact is, I don't trust it. Tor has had numerous real and proposed weaknesses. There's also a large academic community focused on finding more. Personally, I like the Freenet design a bit more, but it's Java. (wth were they thinking?) There are better (cheap & free) alternatives to Tor that are a bit more involved, but faster & more trustworthy.

I've been thinking about setting up a little Tor VM just for visiting onion sites, not anonymity per se. Put a proxy listener in there, too. Then, just redirect my browser through it when it's loaded. Might do something similar for Freenet.

Wael • July 9, 2012 2:17 PM

@ Clive Robinson

Read the first paper... Second one, glossed over.

"Whilst it won't produce a fully encrypted RAM it will certainly reduce the attack surface considerably." -- True.

Important phrase to be noted: "Reduction of the Surface of Attack", which is one defense (defence for you) strategy!

Nick P • July 9, 2012 3:04 PM

I'm sure most of you have heard of Apple's Siri. I've told iPhone addicts that there were quite a few Siri-like products out there, including one I was considering using from an AI lab many years ago, although I forget which it was (maybe MIT). Siri was less invention & more good implementation, integration & marketing. Great product, no doubt, just less credit to Apple for the concept than they claim.

Well, unsurprisingly, Apple is getting sued for Siri by (gasp) a Chinese firm. It already gave $60 mil in cash for using the name iPad. Now, this company seems to want either cash or to simply block its competitor in China.

http://www.techweekeurope.co.uk/news/...

I figured this would be the right blog to ask a funny question: anyone else see the laughable irony in a Chinese company suing over intellectual property abuse? ;)

Wael • July 9, 2012 4:51 PM

@ Nick P

" anyone else see the laughable irony in a Chinese company suing ..."

In some countries, "NDA" stands for New Data Available :)

Wael • July 9, 2012 5:55 PM

@ Nick P

"I'm sure most of you have heard of Apple's Siri. I'v ..."

Well, unsurprisingly, Apple is getting sued
for Siri by (gasp) a Chinese dude
It already gave $60 mil in cash
for using the name iPad in a flash
He wants a block in China or some cash for food

:)

GIMP • July 9, 2012 7:58 PM

@ Nick P

"I might visit them if I dare put Tor on my machine again. Fact is, I don't trust it. Tor has had numerous real and proposed weaknesses. There's also a large academic community focused on finding more."

I don't trust it either but it works well for its stated goal.

Most any code has had numerous real and proposed weaknesses, and it's in corporate and government's favor to drill holes in any privacy/security tool(s) and subvert them.

Here was one ugly incident:

anonymous [dot] livelyblog [dot] com/2012/04/10/linux-bug-compromises-tor-users-makes-list-of-all-sites-the-user-has-visited/

"Personally, I like the Freenet design a bit more, but it's Java. (wth were they thinking?) There's better (cheap & free) alternatives to Tor that a a bit more involved, but faster & more trustworthy."

Freenet? What people say is it's slow and filled with a lot of illegal content. That is certainly not for me!

What are the other (better) free alternatives to Tor which are faster and more trustworthy (and not grey- or black-hat related)? And are they open source? Have they been subject to a lot of peer review? Do they have no history of back doors? Are they legal for all to use? Do they have a large user base? The Tor Metrics page fluctuates around 500k clients. Without a large user base you don't blend into the noise of others very well; your activities stand out more, and this damages your privacy/security.

Please post links here to "better" free services.

Nick P • July 9, 2012 9:55 PM

@ GIMP

Hint: most revolve around wifi hotspots and covert use of them. In practice, this concept has worked well for around a decade now. It doesn't rely on esoteric mathematics either.

Autolykos • July 11, 2012 4:48 AM

@Nick P: That's actually far less anonymous/secure than you might think. Depending on the setup, even other users on the same WiFi can see what you do. And you'll always have to trust the guy who set up the network.
With Tor, the weaknesses get studied, published and fixed. With some random schmuck's WiFi - probably not.

Ronnie • July 11, 2012 12:02 PM

http://arstechnica.com/tech-policy/2012/07/...


A year ago this coming Sunday, the US Court of Appeals for the DC Circuit ordered the Transportation Security Administration to do a notice-and-comment rulemaking on its use of Advanced Imaging Technology (aka “body-scanners” or “strip-search machines”) for primary screening at airports. (The alternative for those who refuse such treatment: a prison-style pat-down.) It was a very important ruling, for reasons I discussed in a post back then. The TSA was supposed to publish its policy in the Federal Register, take comments from the public, and issue a final ruling that responds to public input.

So far, it hasn’t done any of those things.
...
So on Monday, I started a petition on Whitehouse.gov. It says the president should “Require the Transportation Security Administration to Follow the Law!”
...
The petition says:
Defying the court, the TSA has not satisfied public concerns about privacy, about costs and delays, security weaknesses, and the potential health effects of these machines. If the government is going to “body-scan” Americans at U.S. airports, President Obama should force the TSA to begin the public process the court ordered.
...
Getting 25,000 signatures requires the administration to supply a response, according to the White House’s petition rules.

The response we want is legal compliance. The public deserves to know where the administration stands on freedom to travel and the rule of law. While TSA agents bark orders at American travelers, should the agency itself be allowed to flout one of the highest courts in the land? If the petition gets enough signatures, we’ll find out.
...

Nick P • July 11, 2012 1:36 PM

@ Autolykos

"That's actually far less anonymous/secure than you might think. Depending on the setup, even other users on the same WiFi can see what you do. And you'll always have to trust the guy who set up the network.
With Tor, the weaknesses get studied, published and fixed. With some random schmuck's WiFi - probably not."

I said it "revolves around wifi hotspots and covert use of them." I didn't say using wifi hotspots was the method. I've intentionally left parts out because (1) the methods have worked for years and (2) the obfuscation helps. In case you'd like to guess at it, here's some of the threats my schemes try to counter:

1. IP tracing
2. [Meaningful] browser-level profiling
3. Remote attacks on OS or browser
4. Persistent malware infection
5. Analysis of internal network traffic
6. Fake AP's
7. Wireless signal tracing to source computer.

The more things you want to counter, the more cumbersome and costly it gets. However, little about the countermeasures is a black box: you can be pretty sure each does what it's supposed to do. Anonymity software, less so.

(Although, Tor is one of the best options for free, anonymous web surfing if you decide to go that way. Just have to take some additional precautions and be OK with SLLLOOOOOWW access.)

Nick P • July 11, 2012 6:00 PM

@ Clive Robinson

So, about that method of sending you a message other than this blog... how's that been coming?

I have a suggestion to save you (err, me) time on solving this one. I know you have an email address you haven't posted. So, how about you set up a 2nd one that's public & use a whitelisting scheme to only pull/forward/whatever messages from addresses you recognize. That way, we can get a semi-private message to you and still not know your private address.

A free web site, online message submission form, etc. could be used to do this as well. Might make it easier to script the whitelist. I haven't done web programming in a while, but I might throw something like it together anyway. Quite a few potential re-uses down the road. Maybe even make it mobile friendly haha.

Many potential hosts
http://www.free-webhosts.com/...

Wael • July 11, 2012 6:27 PM

@ Nick P

"bout that method of sending you a message other than this blog... "

Why don't you two eat some of your own dog food and do a manual DH key exchange on this blog?
Don't worry about an Active MiM attack ...

Or perhaps the moderator can facilitate a method for private replies, a script that allows two parties to do that DH key exchange, or even an email service at a nominal fee (with no advertisements).

@ Moderator:
O great Moderator, make that happen :)

Nick P • July 12, 2012 12:05 AM

@ Wael

Ah, but how do I know it's actually his key? What he's said is Googleable & people can impersonate him. Bruce has his email & the mod probably has his IP. Both are better for authentication & I have no reason to think they'd lie to me. He or I suggested having one of them send his email address to my registered email, but that didn't happen for reasons I can't remember. I'd rather not burden them anyway. After all, if Clive really wanted, we could have pulled this off by now.

Hence, my newest proposal. This is the 21st century. That Clive doesn't have a public email for these situations is... he should have one. Throw in some whitelisting & have him only check it when he's expecting a message, then it is quite convenient. Or a web page thing if he doesn't like that. And it probably won't get autodeleted for inactivity.

Hey, I'm trying on my side. Clive has plenty to say to people on many blogs. The kinds of people that might want to talk to him or use his expertise productively might need privacy. Or just say what they feel like saying with no restrictions or possible deletions. One-on-one medium, at least. Hence, he should have a way to do that easily [like the rest of us].

Wael • July 12, 2012 12:30 AM

@ Nick P

"Ah, but how do I know it's actually his key?"
There is a reason I said don't worry about Active MiM attack :)

Wael • July 12, 2012 6:18 PM

@ Nick P

There is a reason I said don't worry about Active MiM attack :)

Strange! You did not ask. Here it is anyway:

There are a couple of kinks introduced to this "problem" that change the dynamics a bit.

So normally DH-M is immune to passive MiM attacks, but vulnerable to active MiM attacks -- your concern. One kink is the ability of Alice to ask Bob, and vice versa, to do something "visible" on the blog, with two consequences:
1- Active MiM attack can achieve a DOS at best
2- Passive MiM attack can also achieve a DOS -- something a normal use-case DH-M is immune to (I think)

Passive here means:
1- Read-only of the protocol communication on the Blog (does not hand out any prime numbers to either Alice or Bob)
2- Ability to write to the Blog as an imposter (causing a denial of service)

And active means
1- Participation in the communication protocol as an imposter (pretends to be Alice to Bob, and Bob to Alice)
2- Ability to write to the Blog as an imposter (causing a denial of service)

So, Alice and Bob agree on a key
They don't send any confidential stuff yet.
Alice sends Bob a message on the blog asking for the hash of the key, and Bob does the same.

If there is an active Eve in between, the hashes will not match. So a MiM -- or WiM in this case, since Eve is a feminine name -- will not succeed.

Once Bob and Alice hear no complaints from either side, they are OK. Complaints would take the form:

Alice -> Bob on the blog:

@ Bob
Here is my prime number: 3

Alice --> Bob:
@ Bob
Hey! That was not me, I did not post that number. Freakin' Eve is at it again ...

Or alternatively Passive Eve can post:
@ Bob
Eve -> Bob:
Hey! That was not me, I did not post that number. Eve is at it again ... (She doesn't want to curse herself)

Also, passive or active Eve can post a bogus hash to cause a DOS (inability for Alice and Bob to exchange a key), and that is where the other kink lies ...

However, I may act as Eve should this contrived communication take place, because I want to read what you guys are talking about :)

I did not verify my thoughts here much, so it all may be broken ...
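
To make the dance concrete, a toy sketch (toy-sized parameters, NOT secure; a real exchange needs a large safe prime and far more care):

```python
# A toy of the exchange sketched above, including the hash-of-the-key
# confirmation step. Parameters are toy-sized and NOT secure.

import hashlib
import secrets

P = 2**89 - 1   # a Mersenne prime, fine for a demo only
G = 3

def keypair():
    priv = secrets.randbelow(P - 3) + 2     # private exponent
    return priv, pow(G, priv, P)            # (private, public-to-post)

def shared_key(my_priv, their_pub):
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

a_priv, a_pub = keypair()   # Alice posts a_pub in a blog comment
b_priv, b_pub = keypair()   # Bob replies with b_pub

k_alice = shared_key(a_priv, b_pub)
k_bob   = shared_key(b_priv, a_pub)

# Confirmation: each posts a hash OF the key, never the key itself.
# An active Eve relaying values would hold two different keys, so the
# two posted hashes would not match: the attack is reduced to a DOS.
print(hashlib.sha256(k_alice).hexdigest() == hashlib.sha256(k_bob).hexdigest())
```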

Nick P • July 13, 2012 1:25 AM

@ Wael

As it sometimes is, that was a hurried reply. Without analyzing your reasoning (please don't be offended), and while appreciating the time you put in, I have to ask: what is the point of all that if the other party is unwilling even to give out a public email? (Much less time/effort/mentally consuming by comparison...)

Clive always seemed to be a bit eccentric or unusual. No problem with that, of itself. However, those types of people do have a hard time bringing themselves to do the minimum of social expectations, huh? ;)

Wael • July 13, 2012 1:57 AM

@ Nick P

No offense taken. I was just reading through old posts and came across one from 2010 about funny questions and answers. Saw your long reply ("I am Clive Robinson, Biatches...") and thought it was pretty funny. It also proved your point that it is easy to impersonate some characters here... because when I started reading that post I was thinking: Hmm! I am already lost, and I am only in the first sentence, and there are 10 more pages to go. It's gotta be Clive ;) I was shocked to see it was you :)

Sometimes chatting over private channels is boring, and gives you a false sense of privacy -- you know, you might as well make it easier for them® and write in the open. Kiss privacy goodbye ;)

Nick P • July 13, 2012 8:09 PM

@ Wael

LOL! I needed that. Totally forgot about that. Hope you saw Clive's reply, as it was the exact reaction I was looking for. Well, the first part: I just saw the 2nd one and now I feel guilty for not continuing the discussion after the poor guy typed all that stuff on a tiny phone. I'll have to give him his reply in the near future.

Nick P • July 13, 2012 10:11 PM

@ Wael edit

Oops. I felt guilty too soon. I went back over that thread to check whether I'd missed replying to Clive. He had read something someone else wrote, assumed it was me, replied & the dude disappeared. That explains why "my" statements looked alien to me. I'm in the clear, for once haha.

David • July 13, 2012 10:19 PM

@Nick P

Speaking of Clive, where is he? Haven't seen a post from him in about a week.

Wael • July 13, 2012 10:56 PM

@ Nick P, David

No worries, Nick P. You can feel guilty another day ;)

@ David,
I was also wondering where Clive Robinson is. Maybe Nick P can ping him for us on Clive's private email :)

Nick P • July 14, 2012 1:43 AM

@ concerned of Clive Robinson

Google says his last post was July 2012. He's recovered from far worse. If you must worry prematurely, pray for him. Otherwise, just don't. That geezer will show up to answer some questions or make some smart dude look uneducated. He will if he is still alive, as he enjoys that stuff too much. Mark my words. ;)

Wael • July 14, 2012 6:00 PM

@ Nick P

Well, when should we start getting concerned? :(
I don't mind Clive making me look uneducated as long as he is okay. What is the longest he has abstained from showering us with his wisdom?

Nick P • July 14, 2012 9:08 PM

@ Wael on Clive

I can't remember how long it was. He has serious medical issues and spends time in the hospital. I'd say give him some prayers and wait a week or so.

Clive Robinson • July 14, 2012 9:24 PM

@ Wael, David, Nick P,

I'm still alive and have returned today from lodging with the medical profession once again. You know you are "not well" when you are not only on first-name terms with the doctors, nurses and porters at your local hospital but also their children and in some cases grandchildren, from having said hello to their parents in the street (the nice thing is they don't ask "how are you feeling").

@ Nick P,

or make some smart dude look uneducated

Ouch... I look on it more as "broadening their outlook with a different view point". For one thing I'm certain there are a lot of brighter people than me around; it just appears that "I've got around" a bit more than they have, so like "any old dog" I've learnt a few tricks in my time ;-)

I know you are occasionally amused / dismayed by journalists and their antics when it comes to security. But whilst we tend to think of it academically or in a detached sort of way, sometimes their lack of ITSec genuinely costs lives,

http://www.cjr.org/feature/...

And to be honest, I don't think I'm up to totally locking down a consumer-grade laptop computer / mobile phone beyond the capabilities of a type three opponent, if it can even be done (which I doubt). I even doubt many type three agencies are capable of it either, unless they have a significant "in" with the manufacturers involved.

I'm also well enough aware that such a device, if found on you (and it very probably will be), is going to attract rather more attention than you would want. Likewise specialised hardware that is potentially secure is going to raise an even bigger stink than ripe Brie in the full heat/glare of the midday Mediterranean summer sun.

Back in the old days of "fieldcraft training" they used to advise you "to keep yourself clean and unfragrant" so that your "back story / cover" would not be compromised or the "officers" of the opposition "smell you out". Thus you kept hard technology to a minimum, and instead relied on what was "in your head" and, where required, additional field support personnel via appropriate cut-outs and dead letter boxes etc.

Modern journalists don't have the luxuries of resources, time or support today, and the modern TV hard-news requirements make it a lot worse than it used to be.

In this particular case of the filming, a few simple precautions would have gone a long way, such as headscarves etc to hide identity before recording video, using an "anonymous room" to film in, and not voice-recording interviewees' voices onto digital media but onto old-fashioned analog tape, then burning the tape after transcribing it word for word so it could be "voiced in" by an actor at the production stage. Whilst the first bits are fairly easily done, the issues of recording/transcribing the voices of the journalist and the interviewed person need time, resources and skill that are not "field available" these days in 24-hour news.

One advantage for these foreign level three agencies is that the countries for whom they work have "jumped over POTS" and gone straight to fully digital mobile phone technology. This makes intercepting and recording data almost trivial, and allows statistical analysis of call patterns etc sufficiently easily that use of digital phone technology should be considered "suicidal" at almost any level no matter what precautions you take (a point that was not lost on OBL, and even he paid the price of insufficient communications security in the end).

It's a hard problem, and it is getting harder by the day as new tracking technology rapidly becomes available (at a very profitable price) to level three agencies the world over; and it is not helped by the likes of Chinese and Israeli telecoms hardware companies building in the tracking facilities from the silicon upwards.

Wael • July 14, 2012 9:38 PM

@ Clive Robinson

The first thing that came to my mind is Holy S..t, my prayers have been answered.

@ Nick P
"as he enjoys that stuff too much. Mark my words. ;)"
Your words were marked. You are correct, Sir :)

Wael • July 14, 2012 10:13 PM

@ Clive Robinson

"I'm also well enough aware that such a device if found on you (and it very probably will be) is going to attract rather more attention than you would want. Likewise specialised harware that is potentially secure is going to raise an even bigger stink "

Interesting; seems like steganography is going to gain more importance than cryptography in some situations.

Clive Robinson • July 14, 2012 11:58 PM

@ Wael,

Speaking of re-reading comments in posts etc, I realised I've not answered,

As for C-v-P... So we can still talk about that. I promised Nick P I would drop that analogy, but I need a way out. Hmmmm! I see it as a model, not an analogy. Yea! That's it! So out with the analogy. C-v-P is a model we can talk about, although you almost confused me when you referred to the Castle as a Fortress in a recent post. Stay tuned... Maybe we can sneak in an iteration or two while Nick P is on vacation

First off, I'm not sure Nick does "vacation" in the normal sense; I get the vague feeling "alligator wrestling" is his idea of a nice quiet time ;-) [1]

But more to the point, yes, in many respects it is a model. The reason for the use of the names? Well, a couple of reasons. Firstly somebody has used the title "The Cathedral and the Bazaar" and it's a catchy title, so Castle -v- Prison is, name-wise, following in the footsteps as it were. Likewise we used to talk of "Bastion Hosts" years ago, and at the end of the day a bastion is a hardened and defended point, which both C&Ps are (so the naming is semantically relevant).

Secondly, and more importantly, it reflects the mindset of system design.

But this is where the fun starts... we use the word "malware" to cover a multitude of sins. As a very loose definition it is "software that causes a system to perform actions not intended by its owner". The reality of malware is there are a few basic types,

1, Software injected from outside during operation.
2, Software added during design and build.
3, Firmware that makes a system untrustworthy.

As an "overly general rule" the IT security industry currently tends to think only of the first two types, and assume that the hardware system we own is inherantly trustworthy. Academically however we know that a single "Turing Engine" cannot be 100% trustworthy, simply because the reality is it cannot test it's self reliably and thus when viewed as a "black box" it can lie to us and we won't know. The US Military amongst others has woken up to this unplesant little truth about COST equipment an talks about "supply chain" poisoning.

Thus we have a significant problem: how do you build trustworthy systems on untrustworthy hardware? An English-language idiom for what is apparently a futile task is "to build your castle on shifting sands". Which is a little odd, because we have known since Henry VIII's time that you can build castles on a lot worse, such as water... a naval vessel is at the end of the day a floating castle through which "power is projected". We also know from the Napoleonic era that "floating hulks" can become prisons as well.

From outside looking inwards, C&Ps look very much alike irrespective of what they are built on, in that they are designed to keep out the "uninvited".

Thus to type 1 malware the systems are no different externally.

However when you get inside a castle it is a lot different from its exterior. Castles tend to be built on the notion that those "invited" in are trustworthy, and the internal defensive measures are still directed at the "uninvited", not the "invited". Further, for those invited, the environment within the castle should be relatively comfortable.

When you get into a prison, one of the first things you notice is that the defenses are not aimed just at the "uninvited" but very much, if not more so, at the "invited". That is, invited or not, the prison considers all to be extremely untrustworthy. A prison uses strong segregation / separation and minimal functional environments as its basic mechanism to ensure the security requirements.

Thus to type 2 malware the systems are very different, and as such even invited software finds the system hostile to all but the owner's requirements.

Now the problem with type 2 malware is that it comes as part of the package, and the only limitations on it are in most cases absolutely minimal to non-existent. Even development tool chains won't stop this if the attacker can get the malware functionality included in the design specification as part of, say, the "run-time test environment". The most obvious recent example being the CarrierIQ test environment...

The problem is actually that applications are developed to be loaded too far down the stack. That is, usually the end result is a compiled program of machine-executable instructions that sits way below the threshold at which the code can sensibly, conveniently or efficiently be monitored.

One of the ideas behind C-v-P is that the majority of programmers cannot be trusted to write secure code. As Nick P has frequently pointed out, there are many tool chains that produce code that, whilst not perfect, is way way way ahead of the run-of-the-mill code cut in most app shops.

Without getting into the politics of it, those that can code securely are a very very very scarce resource, and finding a person with all the requirements is about as likely as finding a diamond stuck between a hen's teeth.

Thus sensibly you should try and leverage their talents across as wide a front as possible. Back many years ago this problem was solved by the simple expedient of employing them to write the OS and code libraries on which single-use apps would run. Unfortunately the world has moved on, and OS-level security is far from sufficient, with apps like web browsers being effectively the equivalent of an insecure OS like CP/M or DOS: a single memory space in which every user task runs at the same privilege level and little or no attempt is made to separate task information from other tasks.

Thus the follow-on idea was to provide "re-usable code", not in the current insecure form of a code library, but as a separate task that lives in its own secure environment and has its input and output "pipelined" to other tasks. Thus it is similar to *nix shell scripting, but with much stricter segregation and monitoring.

This way the development tool chain would stop well above the executable-code level in the stack; it would be at the level of secure tasks. Some scripting languages like Perl have similar design philosophies, but the approach has been to go "monolithic" for efficiency reasons. This makes effective security monitoring next to impossible.

This is because one of the major inefficiencies in any CPU environment is "task switching". That is, whilst the CPU might have a relatively well-defined and efficient "context switch" from background normal operation to foreground interrupt handling, extending beyond this to tasks is relatively inefficient. It also opens a whole load of security risks at the same time.

Thus eliminating task switching would add greatly to the efficiency and security of the design.

Further "well found" tasks don't actually require the heavy weight resources modern high transistor count CPU's bring or the complex instruction sets that come with them. Infact much of the time much of a modern CPU is idle which is actually quite inefficient.

So the idea was to use many lightweight RISC CPUs and not task switch them. Each CPU would live in its own virtual world behind the Memory Management Unit which, unlike in current designs, would not be controlled by the CPU itself but by a hypervisor. Thus the MMU becomes the prison walls within which the CPU is jailed. Its resources would be strictly controlled by the hypervisor and, as previously discussed, each task would get the minimum of required resources and be subject to signature analysis.

So the prison model is designed to work like a massively parallel system of CPUs with small tasklets running on each CPU, which pipeline their results not to each other but to and from the CPU hypervisor.
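
A hedged sketch of the hypervisor-side "signature analysis" (Python again; the per-task profile and operation names are invented for illustration): the hypervisor holds a profile of what a tasklet may do, replays the observed trace against it, and jails the tasklet on the first deviation.

-----
EXPECTED_PROFILE = ("read_in", "compute", "write_out")  # per-iteration signature

def run_under_hypervisor(trace):
    # Compare the observed operation trace against the expected profile;
    # any deviation halts ("jails") the tasklet immediately.
    for i, op in enumerate(trace):
        if op != EXPECTED_PROFILE[i % len(EXPECTED_PROFILE)]:
            return "jailed: %r at step %d violates the profile" % (op, i)
    return "clean run"

print(run_under_hypervisor(["read_in", "compute", "write_out"] * 3))
print(run_under_hypervisor(["read_in", "compute", "open_socket"]))
-----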

However, although this would provide a significantly more secure environment that helps prevent type 2 malware, it does not tackle type 3 malware as currently described. It does, though, make tackling type 3 malware much easier to facilitate than single heavyweight CPU systems do.

I'll stop at this point with "any questions?"

[1] For those not up on certain English/US idioms, there is a saying which in the UK starts with "Sometimes when you go to drain the swamp..." and refers to the problem of being side-tracked from your objective. Normally most engineers etc know what you mean from just the first few words, and you don't need to say the whole phrase to get a knowing nod. The US version is shorter and more to the point: "When you're up to your neck in alligators, sometimes you forget that your mission is to drain the swamp" (this is the polite version; the more common usage uses another, lower part of the anatomy than the neck, with the further unsaid idiom of things being a considerable pain there ;-)

Nick PJuly 15, 2012 12:41 AM

@ Clive Robinson on journalists & opsec

Good to see you, buddy. And I see you're jumping past the personal straight to interesting topics, as usual. ;)

Nice article. I might have to share that with a few people. I agree that modern types could learn plenty from the older folks. I'm not totally sold on dodging digital, but it's admittedly way easier to subvert things these days. Much harder to secure modern methods, as well. Yet, Anonymous and Wikileaks show it can work in practice if you do it right. (For a while, at least haha.)

"And to be honest, I don't think I'm up to totaly locking down a consumer grade laptop computer / mobile phone beyond the capabilities of a type three opponent if it can even be done (which I doubt). I even doubt many type three agencies are capable of it either unless they have a significant "in" with the manufacturers involved."

Agreed.

Wael hit on one of the more obvious points with steganography. I'm skeptical about stego, though. My MO about this stuff, along with others, was to just put stuff on media or computers that's easy to hide. There's books to help with the hiding, we both know. Also, if I used encryption, I'd try to make it where it takes a remote, extra key wherever possible. This is to resist rubber hose cryptanalysis. (Maybe we should call it fingernail cryptanalysis, to be more accurate, eh?)

I'd also rather stego the SYSTEM. You recently reminded me of the AES scheme that avoids RAM. It gave me a different idea, though. We know there's plenty of black boxes in a COTS desktop or laptop. We know there's many chips that have their own code & memory. How about we modify one to hide the stuff? Of course, might have to do COMPUSEC on the main system to prevent subversion. Let's say it was unlikely they'd subvert the system or it was protected well. Then, they just grabbed it and started looking for stuff. Hide sensitive info in an onboard chip that's supposed to be there, maybe with extra functionality (GPU comes to mind), and modify firmware of main CPU to retrieve it during a certain key sequence. Main point of using black boxes & main CPU onboard memory is that it should be harder to extract w/out uncommon expertise. What you think?

"Modern journalists don't have the luxuries of resources time or support today, and the modern TV Hard News requirments make it a lot worse than it used to be."

That makes it sound pretty inevitable. I don't like that. I think they could do way better with a good approach and hardly any extra resources. Let's take a specific example: voice recording on tape, transcribing, and burning it. So, how pressed is the journalist for getting word out from a source? Does he really have no time to anonymize it? (I doubt that.) More likely, he only has so many resources. So, what to do? Well, he could record the conversation onto a TrueCrypt volume with random strong passphrase (written down). He could physically transcribe it all, or use speech-to-text by reading it, then burn the paper with the key & delete the volume. (RAM disk if it's a short conversation.) Gotta make sure no extra copies are made in background by apps, but if not this is decent for making sure nothing is there to pick up.
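
A minimal sketch of that workflow, assuming Python and the third-party "cryptography" package (the file names are invented, and AES-GCM stands in for the TrueCrypt volume):

-----
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the taped interview, so the sketch runs end to end.
open("interview.wav", "wb").write(b"stand-in for the recorded interview")

key = AESGCM.generate_key(bit_length=256)  # the secret to write down, then burn
nonce = os.urandom(12)

recording = open("interview.wav", "rb").read()
sealed = AESGCM(key).encrypt(nonce, recording, None)
open("interview.enc", "wb").write(nonce + sealed)

os.remove("interview.wav")  # crude delete; real opsec wants a secure erase
print("key to write down:", key.hex())
-----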

In that example, we can also make purpose-built voice recorders, Raspberry Pi-like devices, whatever. I'm sure this isn't the only example where it would cost virtually nothing & take an acceptable amount of time/energy to maintain OPSEC. So, I think they could do way better if taught how & provided reasonable methods.

"gone straight to fully digital mobile phone technology which makes intercepting and recording data almost trivial and alows statistical analysis of call paterns etc sufficiently easy that use of digital phone technology should be considered "suicidal" at almost any level no matter what precautions you take (a point that was not lost on OBL, and even he paid the price of insufficient communications security in the end)."

You seem right on, there. There's not a single "mobile security" solution the two of us haven't mostly shot down. It's a ground up thing & ARM/SOC-style COTS just can't be trusted. I've designed and posted "mobile" (read: you can carry it) solutions in the past. However, it might take some convincing to get journalists to do it. It also draws too much attention when it sticks out like a sore thumb.

"It's a hard problem and is getting harder by the day as new tracking technology becomes rapidly available (at a very profitable price) to level three agencies the world over and it is not helped by the likes of Chinese and Israeli telecoms hardware provider companies building in the tracking facilities from the silicon upwards."

They're not trying to make it easy on us.

Nick PJuly 15, 2012 12:57 AM

@ Clive Robinson on CvP

"However when you get inside a Castle it is a lot different from it's exterior. Castles tend to be built on the notion that those "invited" in are trustworthy and the internal defensive measures are still directed at the "uninvited" not the "invited". "

"A prison uses strong segregation / separation and minimal functional environments as it's basic mechanism to ensure the securrity requirments. "

The first quote is the problem I have with the analogy. There's certainly "trusted" code in a system designed in my model. However, most parts of the OS are subject to security enforcement & POLA. The trusted components are developed according to rigorous processes that are unlikely to result in critical bugs. So, your second quote applies to my model quite well, especially as those goals were part of the security kernel approach that inspired my preventative view.

I think the prison analogy is fine for your system, but the castle one stretches too thin trying to cover the alternative. Also, to be clear to readers, I'm not expecting the average programmer to do this: the base platform will be done this way & the other developers leverage it. How? Well, there's many options and I leave it open. Only the TCB needs to use the rigorous development methods & past projects show that part can be pretty small.

WaelJuly 15, 2012 3:49 AM

@ Clive Robinson, Nick P

I'll stop at this point with "any questions?"

Nope! Just proposals:
Moving from an analogy to a usable model... The model should be simple and ideal, or perfect. It should be the simplest construct possible. Prisons and Castles share some attributes; they both present an interface or a boundary between two separate domains or worlds. They both have zero or more inhabitants. From the outside, as you have said, they look alike. But they also serve different purposes.

C: Keep uninvited people out
P: Keep invited people in

C: Protect insiders from outsider
P: Protect outsiders from insiders

C: People enter by invitation
P: People enter by force when they break a protocol (law)

C: People leave at will
P: People must meet certain conditions to leave

C: Has a king
P: Has a Warden maybe with fewer powers than a king in a castle

C: Invited people are equal citizens
P: Inhabitants are not equal; different sentences

C: People are equal when they leave
P: People may leave on parole -- probed periodically to monitor behavior,
with occasional visits from a parole officer – a hypervisor again?

C: People inside are good
P: People inside are bad (but maybe not in the model, if applied to data, as opposed to code)

Would it make sense to propose the following mapping?

1- Hardware in a castle and a prison is the building materials: bricks, doors, windows…
2- Software is the people in the castle or prison
3- Firmware is the Helpers of the Warden or King?
4- Warden is the hypervisor, so is the King – Who runs a Castle anyway?

Does that make sense, or am I having a pipe-dream at almost 2:00 AM?

WaelJuly 15, 2012 3:58 AM

@ Nick P, Clive Robinson on journalists & opsec

"I'm skeptical about stego, though. "
Steganography: The science/art of hiding the existence of information.
Cryptography: The science of hiding the meaning of information.

If hiding the meaning of the message causes problems by attracting unwelcomed attention, then the rational course is to hide the existence of the message - in my view.

DavidJuly 15, 2012 5:07 AM

@Nick P, Clive & Wael

I'm not sure who raised it first... but the mention of stego along with the idea of hiding extra resources in a chip that was supposed to be there (I think someone suggested the GPU) led me down some kind of mental path to hardware stego - hiding the computing hardware in plain sight.

Yes the GPU is a pretty good place to put it, but that's only useful for code and storage. We need to develop better methods of hiding the actual compute infrastructure.

I seem to recall a discussion around wafer-thin and flexible circuitry (I know the digital SLR manufacturers have been keen on this for a while) but perhaps this might be an opportunity to pull the ICs out of their black packaging and find more subtle places to put them.

As an off-the-wall suggestion, how about embedded in the journalist's elastic knee bandage (please don't hold me to that one... it was just intended as a thought provoker).

Clive RobinsonJuly 15, 2012 9:13 AM

@ Wael,

Crikey, there's a lot to reply to... So,

Interesting, seems like steganography is going to gain more importance than cryptography in some situations.

Yes and no. Steganography is almost the perfect description of "security by obscurity" in the information world, and by and large it does not work, due to signal-to-noise issues. If you think of it as just information, we have ways to show up stego at around 1 bit in 2^10, and in only mildly different cases around 1 bit in 2^20. The reason is that there is "noise" and there is "random", and they are very far from being the same, as analysis with FFTs and FWTs, and other statistical tests, shows quite easily.

Most "noise" is actually due to cyclical interferance of one form or another be it regular wave forms (think mains hum and clicks from motor noise) or iregular waveforms with predictable shape. Most "random" signals are neither cyclical or have predictable shapes so can be spotted as being different therefore suspicious. Thus getting good information stego is as hard as getti.ng compleatly unbiased random from physical sources (they are if you think about it the flip sides of the same problem). If you hunt back about twenty or thirrty years you will see that the likes of the NSA, GCHQ, et al were advertising for applied mathematicians in the "signals analysis" area and this is just one reason for them (another being digging out "odd traffic" / "odd behaviour" from large data sets as the basis of more refined traffic analysis).

To try to make stego work you need to do four basic things (sketched in code below),

1, Find a carrier channel with sufficient bandwidth to hide the final stego covert channel.

2, Apply strong encryption to the covert channel data to make it appear totally random.

3, Apply reversible mapping techniques to turn the encrypted data from "random" to "noise".

4, Add the covert "noise" to the carrier data in a way that appears natural on analysis, but still allows the recipient to strip it out reliably.

Whilst stage 2 is fairly easy, and stage 1 is a numbers game and thus similarly easy theoretically (but not practically), stages 3 and 4 are not. For a start, both are going to involve significant "data expansion", which reflects back onto stage 1. Further, stage 4 is multidimensional: real noise both adds to and multiplies with the carrier signal and, due to channel limitations, it affects not just the amplitude and time domains but also the frequency/phase/sequency domains. This requires a complex convolving process, which likewise requires a complex deconvolving process just to recover the expanded covert encrypted data and get back to stage 3.
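
A hedged sketch of stages 1, 2 and a naive stage 4 (Python with numpy assumed; the payload and carrier are invented): encrypt the payload so it looks random, then overwrite the least significant bits of PCM-style audio samples. Stage 3, reshaping "random" into plausible "noise", is exactly the hard part described above and is deliberately not attempted here.

-----
import os
import numpy as np

payload = b"meet at dawn"
keystream = os.urandom(len(payload))                       # stand-in for real encryption
cipher = bytes(p ^ k for p, k in zip(payload, keystream))  # stage 2: looks random

# Stage 1: a carrier with spare bandwidth (fake 16-bit PCM audio samples).
carrier = np.random.randint(-20000, 20000, 8 * len(cipher), dtype=np.int16)
bits = np.unpackbits(np.frombuffer(cipher, dtype=np.uint8))

stego = (carrier & ~1) | bits    # naive stage 4: overwrite the LSBs
recovered = np.packbits((stego & 1).astype(np.uint8)).tobytes()
print(bytes(c ^ k for c, k in zip(recovered, keystream)))  # b'meet at dawn'
-----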

Using hidden hardware as a stego system is likewise a difficult task, especially with COTS systems where the level three adversary has access to the system for as little as a few moments (think about the cat and mouse games between customs and drug runners and what has evolved there).

First, simply weighing the device might give notice that additional hardware has been added. Then a simple visual analysis, after using a screwdriver to open the case, will show up most changes like the addition of extra hardware. That is, simply tipping the PCBs through the light will show frozen flow marks in the solder etc from the manufacturing process, which soldering in additional or replacement items will disrupt very visibly (as any end-of-line PCB quality control inspector will tell you). In some respects this is like the raised printing used as a security feature in printing bank notes and other security documents.

Further, low-level radiation sources and photomultiplier systems can produce an image like an X-ray that can be overlaid in a "blink box" against a known standard for the COTS make and model, which will show any differences up as a flashing or differently coloured signal.

A similar "blink box" system can be used for system software such as the OS and Apps (think forensic hashes etc). Especialy if the manufacturer has kindly provided a test port or worse a DMA based port of some kind such as FireWire.

Stego is a hard problem. One solution is to write the "next must have game" (such as Angry Birds or Hungry Shark) and include the stego function within it. Say an "etch-a-sketch charades" game for networked mobiles etc.

Hence my comments about having an "in" with the manufacturers.

Life would become much easier for journalists, dissidents and spies etc if FDE with remote key management became standard, because then they would be no different from the norm. However I cannot see most Governments allowing this to happen; they would probably roll out "The Four Horsemen Of The Internet" (terrorism, kiddy porn, drugs etc) as an excuse to legislate against its use.

WaelJuly 15, 2012 1:08 PM

@ Clive Robinson

Yes and no. Steganography is almost the perfect description of "security by obscurity"
And spread spectrum is almost the perfect implementation of steganography. It will look like noise to Mr. Fourier if he is not aware of the channels and randomness, a la CDMA-type spreading.

Nick PJuly 15, 2012 3:02 PM

@ Wael on stego

I don't know what the Webster definition of stego is. Perhaps it's close to the definition you gave. I'm using the common usage version. When people say "stego", they usually mean tools that hide secret information in a publicly visible type of information. I've seen this done in pics, videos, audio, fake spam emails, network packets, etc. I've done it in other media, too.

So, hiding things is certainly a solution or preferred approach. I corroborated that in my post on the matter. However, when most talk stego, the above is what they mean. And detection methods on that are always getting better. Its usefulness will depend on the group targeting you. Physical concealment of memory or computing hardware might have fewer issues.

Nick PJuly 15, 2012 3:06 PM

@ Clive Robinson

"One solution is to write the "next must have game" (such as Angery Birds or Hungry Shark) and include the stego function within it. Say an "etch-a-sketch charades" game for networked mobiles etc.

Hence my comments about having an "in" with the manufacturers."

Might be a game or popular app. Can do a binary or bytecode modification, too. The main risk is the modified app will be on the filesystem. A very easy countermeasure is comparing its hash to a known legit copy.
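
A tiny sketch of that countermeasure (Python stdlib; the throwaway file stands in for the app binary):

-----
import hashlib, os, tempfile

def sha256_of(path):
    # Stream the file so large apps don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

fd, path = tempfile.mkstemp()
os.write(fd, b"pretend app binary")
os.close(fd)

known_good = sha256_of(path)           # what the legit copy hashes to
print(sha256_of(path) == known_good)   # True until the binary is modified
-----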

We might need to further define enemy capabilities. Many of these groups are just vacuuming up information, intercepting packets, etc. They aren't doing the above check and might not be doing subversion of the asset (again, depending on the group). So, perhaps defender-supporting organizations should try to map out their capabilities & change the advice to fit the likely risks.

Clive RobinsonJuly 15, 2012 4:22 PM

@ Nick P,

Sorry for the time delay; one downside of lodging with the medical profession is the use of "sleep inducing" chemicals. Whilst they might solve one problem, they do have side effects, one of which is the "hangdown effect" where the stuff ends up destroying your sleep rhythm and you end up napping all the time. You end up (in the words of the song) looking "hungdown and brungdown".

So back to the chase,

Also, if I used encryption, I'd try to make it where it takes a remote, extra key wherever possible. This is to resist rubber hose cryptanalysis. (Maybe we should call it fingernail cryptanalysis, to be more accurate, eh?)

The remote extra key(s) are essential as a "proof" of "zero knowledge" by the person/employee carrying the hardware. However, as we have previously discussed, it needs some spatial as well as time-based elements, as well as a duress key, to be effective, and the key needs to be "shared" as several partial keys across many jurisdictions.

Preferably there also needs to be a series of unknown steps, the equivalent of the "cut outs" and "dead letter boxes" of old-style field craft, such as having to use the likes of Google to search for a series of chained sites completely outside the control of either the carrier or his employing organisation (ie openish blog sites or social networking sites). Such that watching the actions of the carrier this time will provide "zero future knowledge" that can be used as a watch point or predictor for this employee or other employees. All of which requires one heck of a lot of discipline on behalf of the carrying employee and their employer (knowing how to do something and actually doing it properly in practice are poles apart in abilities and discipline...).

Oh as for "fingernails" did you ever see "Running Man" where they torture him by drilling holes in his teeth with a dental drill?

Apparently (according to a book written by a defector in the cold war) a form of it was used as a field interrogation technique. You would grab some hapless conscript soldier, tie him to a tree etc, and start filing down his teeth with a large "railway bastard file" (so called because it's 18 inches long and has 13 teeth per inch). Only after he started screaming almost incoherently did you start asking questions like "what is your name?" and "what is your serial number?"... just plain nasty, and very probably effective for local tactical info against conscripts etc. So how about "dental cryptanalysis"? It definitely puts a new meaning into "getting your teeth into the problem" ;-)

With regards,

Main point of using black boxes & main CPU onboard memory is that it should be harder to extract w/out uncommon expertise. What you think?

From discussions with RobertT and info discernible from recent US DoD project requests, we know that detecting "Fake Chips" or "Backdoored Chips" is verging on impossible by direct examination, and that poisoning of a chip design done at the "macro" level via "test hookups" put in at the foundry is more than easily possible. However, as I said to Wael above, replacing a chip or adding a chip on an existing PCB leaves fairly obvious "re-work tell-tales" to a trained goods-inwards quality control inspector. This is especially true of the new lead-free solders that actually leach copper off the PCB and thus make re-work fraught if not impossible in many cases. Worse, the likes of Apple now no longer provide sockets by which you can upgrade your hardware, and this is a trend I expect to see continue as more and more systems become "commoditized" in the likes of set top boxes, games consoles, thin clients, smart phones etc.

So to do it effectively you are going to either need an "in" with the manufacturer or know how to make a very convincing fake iPhone etc (not impossible; apparently the Chinese are currently doing this, which is just one reason Apple are building a production plant in the US in Texas).

Which brings us onto custom gadgets such as your suggestion,

... also make purpose-built voice recorders, Raspberry Pi-like devices, whatever. I'm sure this isn't the only example where it would cost virtually nothing & take an acceptable amount of time/energy to maintain OPSEC. So, I think they could do way better if taught how & provided reasonable methods

The technological idea is sound; unfortunately there are some very human issues to contend with.

First and foremost, whilst some individual journalists might be concerned about what happens to those they interview, the "hard news" management attitude is that sources are a fully expendable, infinite resource to be exploited at minimum cost and fastest time to air. And as the journos themselves are only too aware these days, they are likewise fully replaceable by some "blond anchor girl" and a bunch of low-paid (or unpaid intern) Internet trawlers, all sitting safe behind their desks in corporate America juicing out on their "skinny latte with quad shot of espresso" and energy drink meals.

The mantra of such management is "efficient production", which in reality means the lie of "shareholder value", or the reality of maximum profit in minimum time to make this quarter's figures, the bonus attached, and thus the possibility of moving up. One aspect of this is "fast technology churn", where the latest gizmos are purchased and the journos are just given them and told, without training, that they have to be 10% (or whatever) more efficient. In this sort of environment security is effectively a non-starter, because it does not have a "bottom line book value" that can be massaged to improve the end-of-quarter figures. So rather than being seen as an "enabler" of trust and future stories, it's actually seen as "lock in" to a journalist and thus a dead end. Worse, it's not in the advertisers' interests to have journos with protected sources; for the advertisers' corporate lawyers and image makers it's all about their transparent control of the product so as not to negatively impact the projection of their "one true image". This extends down to the likes of playlists on radio stations, where advertisers dictate not just what gets black-listed but white-listed as well.

It sounds almost unbelievable, I know, but when you've worked in the broadcast industry and realise that the "F**k Button" that bleeps out or silently drops spoken content is used not just for "naughty words" but for advertiser protection as well, you start to understand the ethos behind it.

The dirty little secret of this came out big time in the UK with a well-known manufacturer of chocolate bars, breakfast cereals and baby milk formula. They had been censured by, amongst others, the World Health Organisation for pushing baby milk formula into various third world markets where its use had the measurable net effect (through the use of dirty drinking water to make up the formula) of increasing the risk of infant mortality. Well, the company concerned sponsored a well-known "Reality TV" show and were horrified to hear what they considered the contestants bad-mouthing them (ie, in "reality", discussing the company's dirty little practices). As you can imagine the company had a major "hissy fit" and "the toys got thrown out of the pram"; unfortunately the production company went a little overboard to protect the revenue stream, and it became so obvious it became public knowledge and was reported in major newspapers etc (which had the perverse effect of getting the papers new revenue from the company, in effect to buy them off and shut them up).

But it gets worse. Many of the agencies responsible for advertising and placing stories etc have clients who feel the need to be "loved". One such person (amongst many) was the now deceased President / Dictator of Libya (Gaddafi), who for many years was very free with the money to "build his humanitarian image", and many organisations (universities, influential NGOs, news organisations), along with various elected officials (including, if published reports are to be believed, ex UK PM Tony Blair), have been quite visible recipients of the largesse of Libyan influence / money. Obviously such back channels, involving both monetary / positional gratification and humiliation potential, are very powerful motivators, and it would not be unknown for people to protect their own personal position by acquiescing to "information requests" from the sponsoring organisations... Thus the journalists get it not just from the level three adversaries of the countries concerned but potentially from certain face-saving individuals in their employing organisations...

Then there are other issues, such as the cost / usability / convenience / replaceability / maintenance / etc etc objections you will get told about when you try to encroach on somebody else's "paid for" patch within the big broadcast organisations (can you afford to take "purchasing managers" out to dinner at trade shows, or invite them to hospitality events at major sporting events, or product promotions in exotic locations, with free transport, accommodation and endless supplies of "goodies", the cheapest of which would be vintage champagne?).

I've also seen first hand, when working in the telecommunications industry, those working for "government appointed" regulators responsible for verifying a manufacturer's facilities for compliance with required national and international standards be not just "wined and dined" but given "special rates" to buy goods for their homes and cars to drive around in, and also given "company" in case they feel lonely during the extended visits.

Clive RobinsonJuly 15, 2012 7:22 PM

@ Nick P,

The main risk is the modified app will be on the filesystem. A very easy countermeasure is comparing its hash to a known legit copy

Hence the reason you have to write the original version so the hash is valid.

It is however based on the assumption that even level three adversaries don't have the capability to check each and every app/game in existence. And even if they do check the most popular 0.1%, by then having it on your phone does not make you a spy, just a slightly suspect "game player".

Thus it's the same as having an "in" with the hardware manufacturer, only cheaper and possibly much easier to hide (after all, how do you background check what appears to be a paranoid games writer secretly working on the next "big thing" in their back bedroom?).

Mind you, if I were to try to do such a thing I would try to place myself or my code writer in with the app writers at the OS organisation and get it into the standard release as nicely "signed code" [1] and thus above "gold standard"...

With respect to,

So, perhaps defender-supporting organizations should try to map out their capabilities & change the advice to fit the likely risks

This is a catch 22 problem ;-)

To get the money to do the enumeration, you need to have paying customers, and they are most likely to be the very people you are enumerating...

It's similar to the "security workaround / patch problem": as an attacker, being the first to get your hands on a patch allows you to work it backwards to find the vulnerability, so you can exploit it, for either a short window against those who patch often, or much longer against those who don't patch for a long time or at all (the majority).

@ Wael,

And spread spectrum is almost the perfect implementation of steganography. It will look like noise to Mr. Fourier if he is not aware of the channels and randomness, a la CDMA-type spreading

Yes and no. Spread Spectrum for "Low Probability of Detection" (LPD-SS), be it Direct Sequence (DS) or Frequency Hopping (FH), works on "code gain"; that is, the spread bandwidth is X times the data bandwidth, and thus you have a coding gain of ~X. If the SS transmitter is sufficiently far from the intercept ECM / surveillance receiver then it will appear to be below the thermal / atmospheric noise floor. However, this is rarely the case, even with the likes of the US GPS military signal or other satellite signals. If your intercept station has sufficient gain in its antenna and knows where to point it, then the SS signal is no longer below the thermal / atmospheric noise floor. On a spectrum analyser its basic characteristics (type, chip rate, coded bandwidth) are usually fairly obvious to the practiced eye. Worse, for DS-SS the spreading code usually has to be linear, and it has been shown that you only need to know 2M bits of the code to work out the entire linear sequence. As such, DS-SS is going out of fashion for LPD these days, its more frequent use being CDMA and shared bandwidth/service, especially in the likes of the ISM bands and analog TV bands.
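
A compact sketch of the "code gain" idea (Python with numpy assumed; the spreading factor and bit pattern are invented): spread each slow data bit over many chips, bury it in noise, and recover it by correlating against the shared chip sequence.

-----
import numpy as np

rng = np.random.default_rng(1)
chips_per_bit = 128                               # spreading factor ~ code gain
pn = rng.choice([-1.0, 1.0], size=chips_per_bit)  # shared spreading sequence

data_bits = np.array([1, -1, 1, 1, -1])
tx = np.concatenate([b * pn for b in data_bits])  # spread
rx = tx + 2.0 * rng.standard_normal(tx.size)      # per-chip noise well above the signal

# Despread: correlate each chip block with the PN code and take the sign.
blocks = rx.reshape(len(data_bits), chips_per_bit)
print(np.sign(blocks @ pn).astype(int))           # recovers [ 1 -1  1  1 -1]
-----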

Even when the spread signal is below the noise floor there is another issue with LPD-SS systems, which is "receiver sync / lock up". That is, the receiver has to have some way of syncing the deconvolution code with the convolution code at the transmitter, or it will not receive anything. For various reasons (drift, doppler) accurate clocks don't work very well, nor do third-party transmitted references.

There are a couple of practical methods used to obtain initial lock. One is to use a higher-energy beacon signal with a short M-sequence code [2] and/or low chip rate; another is to use a burst signal that acts as a lead-in pilot. The choice generally depends on whether transmission is continuous or short duration, such as in two-way PTT systems used for voice comms. Either way the beacon acts like a "flashlight in the dark" and can be fairly easily found with modern high-bandwidth IQ receivers and backend DSP as used in high-end military ECM / surveillance receivers. Further, it's not just FFTs or FWTs that are used in such receivers these days; there are more complex mathematical systems used to find the characteristics of the signals and dig them out from the noise or other interference. The point being, once a part of the transmitted signal has been recognised it can be used as a sync signal, and even the use of simple averaging on the IQ memory will bring up more characteristics, so bit by bit the original signal can be reconstructed and analysed. For instance, take FH-SS: the envelope characteristics of each transmitted frequency hop are almost identical, and this can be used to tell them apart from other signals in the area. This can be improved by using two or more directional antennas, and with modern electronically steered / synthesized antennas this can be done almost in real time.

Early PTT SS systems actually used SAW matched filters to provide a sync pulse that had an ERP of many times the coding gain, and in some cases 10dB up on the equivalent non-spread ERP channel. The more modern version is to use high-speed A-D converters and DSP matched filters at the receiver, as this can be programmed like a SelCall system.

The overall point being that although the chip sequence may not be deducible, and thus the data not recoverable by the surveillance receiver, the beacon signal indicates both that a transmission is happening and from what direction, and in some cases even the range.

In the intel game you don't need to know what has been said to hang a person just that they have talked to another suspect. Which is why anti-traffic analysis technology is perhaps more important than data security technology such as encryption these days (TOR developers please take note).

[1] and people wonder why I have no faith in signed code or code reviews etc etc ;-)

[2] There are a large number of M-sequence codes that can be used. The earliest were "Gold Codes" and "JPL Ranging Codes", even simple PN-sequences, with the more esoteric Walsh-Hadamard, Kasami and even Barker codes being used for other functions within SS systems. To be a good contender for SS use, a sequence does not actually have to be an M-sequence; it needs few characteristics other than strong auto-correlation, and for DS-SS it generally has to be linear and balanced to avoid offsets in the receiver. Thus you could use the output of a modern crypto function such as a stream generator or block cipher in CTR mode for FH-SS, as long as you have some method to sync the receiver with the transmitter ( http://www.netlab.tkk.fi/opetus/s38220/... ).

WaelJuly 15, 2012 7:24 PM

@ Nick P on Stego

"stego", they usually mean tools...
We are basically saying the same thing. The subtle difference is that you are emphasizing "tools" as opposed to "information". That difference does not affect the discussion. I also meant by stego exactly what you mentioned... Distribute the information in a bunch of movies, JPEGs, etc (for data at rest) and use spread spectrum / frequency hopping or other techniques for transmissions (data in transit). It still is STO (Security Through Obscurity). We have stigmatized STO a lot, and it seems we both agree that it has a place under some circumstances. I remember reading a couple of good books on these subjects by Simon Singh: "The Science of Secrecy" and "The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography". They were very lightweight, but contained some interesting history and information.

WaelJuly 15, 2012 7:33 PM

@ Clive Robinson

Please define "level three adversaries". Is that a funded organization / major government; the IBM classification?
1- Script kiddies
2- knowledgeable insider
3- Funded organization (could use a team of class 2 as well)?

Clive RobinsonJuly 15, 2012 10:33 PM

@ Wael,

With regard to the levels of adversaries, it's changed a little with time since the IBM / Ross J Anderson definition, as evidenced by the likes of Stuxnet, Duqu and Flame. But loosely,

Level 1 is an individual or small team that is constrained by very limited resources and has no knowledge of the systems other than that which is available to any other external entity.

Level 2 is an individual, team or organisation that is limited in resources but has access to an increased level of knowledge over that which is available to any other external entity.

Level 3 is an organisation or agency that is not limited by resources and has access to any information that is available to the system developers, plus access to related information that is effectively the result of classified research.

It is important to note that the above definitions are (unlike the IBM / RJA definitions) based on a three-dimensional graph.

The dimensions being,

A, The number of skilled human resources.
B, The availability of resources that are not human and not target-system-related information.
C, The level of target system related information available.

In reality the three levels should be further sub-classified. That is, I don't expect the Government intel agencies of some countries to be on a par with those of other countries, and in fact I actually expect some private research organisations to be better than many Government intel agencies, likewise some criminal enterprises.

What has become clear in this modern world is that effectively unlimited bureaucratic budgets don't buy "smarts" capable of doing cutting-edge research. Likewise they don't buy the sorts of information criminals generally have little trouble obtaining via simple theft or breaking, entering and taking away etc.

That is "free enterprise" gets results that reliable incomes and pensions don't. Which is why we hear about "exploit brokers" and the likes getting government work / contracts / money.

However, until some industry luminary comes up with a more up-to-date set of generally accepted categories, I'm stuck with other people's approximations, hence I still try and squeeze the multidimensional model into three levels :-(

WaelJuly 16, 2012 12:24 AM

@ Clive Robinson

On ""level three adversaries"
Thank you sir. Now we can talk on the same wavelength.

I'm stuck with other people's approximations, hence I still try and squeeze the multidimensional model into three levels :-( AND (this is the polite version; the more common usage uses another lower part of the anatomy than the neck, with the further unsaid idiom of things being a considerable pain there ;-)

Yup! That's gotta be a pain in the neck, and some have a lower opinion than that ;)
Another polite version...

RobertTJuly 16, 2012 2:49 AM

@CliveR
"From discussions with RobertT and info discernible from recent US DoD project requests we know that detecting "Fake Chips" or "Backdoored Chips" is verging on impossible by direct examination. And that poisoning of a chip design done at the "macro" level via "test hookups" put in at the foundry is more than easily possible"

Having a third party modify a chip at the foundry level without the original design company knowing the modification occurred is possible BUT it is definitely not easy.

The problem is that the "fab" does not have the original database that created the chip, so it is difficult to verify that modifications do not effect the overall chip performance. Any significant changes in performance, especially bugs, will attract a lot of attention and reveal the changes / unexpected cell hook-ups.

Most chip designers would be VERY confused if the on-chip metal hookups in a certain area were different from the expected; however, they would never suspect that a TLA was involved.

It is worth mentioning that the final chip design company does not necessarily have access to the whole chip database. This is the case when the final chip uses IP blocks that are licensed as so called "hard macros". Microprocessor cores such as ARM9 or MIPS are very often provided as black box modules. So it is possible for someone at ARM (for example) to add spying structures that Qualcomm (the cell phone chip designer) includes in their database and the foundry then selectively hooks-up.

The real trick is to distance oneself from the database modification, so that even if it is discovered, someone else gets to spend their golden years at the big house.


BTW: In my experience very few security engineers understand that on-chip LFSRs and related pseudo-random sequence generators can actually enhance data extraction, because they act in exactly the same way as a DSSS radio, so you get the same processing gain. I recently saw an anti-DPA structure that used a simple LFSR to randomize the clock. They couldn't believe it when I externally locked to the LFSR sequence using a correlator and then extracted the DPA signature.
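
A hedged sketch of why that lock-on works (Python with numpy assumed; the polynomial, seed, phase and noise level are all invented): because an LFSR is deterministic, correlating the observed signal against every cyclic shift of the same sequence recovers its phase despite the noise, exactly the DSSS-style processing gain described above.

-----
import numpy as np

def lfsr_bits(state, taps, n):
    # Galois-style 8-bit LFSR; 'taps' is the feedback polynomial mask.
    out = []
    for _ in range(n):
        out.append(state & 1)
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
    return np.array(out)

seq = 2.0 * lfsr_bits(0xA5, 0xB8, 255) - 1      # +/-1 chip sequence, period 255
offset = 173                                    # the unknown phase to recover
observed = np.roll(seq, -offset) + 2.0 * np.random.randn(seq.size)

# Correlate against every cyclic shift; the peak gives away the phase.
scores = [np.dot(observed, np.roll(seq, -k)) for k in range(seq.size)]
print("recovered phase:", int(np.argmax(scores)))  # 173, with high probability
-----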



Clive RobinsonJuly 16, 2012 8:13 AM

@ RobertT,

LFSRs and related pseudo-random sequence generators can actually enhance data extraction because they act in exactly the same way as a DSSS radio, so you get the same processing gain

I'm not quite sure where the idea originated, but certainly CCITT used "whitening" in their V-series modem specs so that the "energy would stay in the mask". It was certainly subsequent to that that Far Eastern PC mainboard manufacturers added the LFSR to the main CPU clock to "spread the energy" across the frequency band so it would meet the EMI / EMC masks with fewer (quite expensive) decoupling components.

From a series of experiments I found that although the autocorrelation function was not optimal (unless all boards were from the same manufacturer), it would allow something close to a ten-times range increase in picking up a PC's signals to reconstitute things like keyboard data. Even better, it worked against the "sea of noise" theory that a PC for "secret use" could be hidden, without TEMPEST shielding, amongst a whole load of other PCs in a "sea of noise"...

And of course the increased distance further allowed several correlated antennas to be used to give even better rejection of the other PCs...

Some of them young guns have not yet learnt not to rush in where us olduns know not to tread ;-)

Nick PJuly 16, 2012 3:22 PM

@ Wael and Clive

I still don't like the classifications. I say we take a page from Common Criteria's book. The Orange Book used to talk about how secure a system was by putting it in certain categories. The categories combined both assurance-increasing techniques & security features. This led to tons of problems I won't go into. (A helpful, occasionally humorous read is Schaefer's "If A1 is the Answer, What was the Question?")

Common Criteria changed the situation. I'm going to simplify it here. The big change was the separation of a security classification into the security target and the evaluation assurance level. The security target talked about capabilities, features, etc. The evaluation assurance level was concerned with the required steps to establish confidence in those claimed capabilities. There were also standardized terms that could be used by third-party evaluators & that could "augment" a given rating.

Alright, now our current scenario. We need to create a new way of classifying general and specific capabilities. Clive had some good metrics. How much inside knowledge do they have of target organization? Of their countermeasures? How much funding? How much time to RE security measures, map out the organization, plant moles, etc?

That's just the basics. It doesn't really say as much about actual, long-term capabilities, though. That's counterintuitive to most classifications but accurate for this reason: outsourcing. Organizations with the cash can outsource many levels of attack & there are happy mercenaries at most levels of traditional classifications. Additionally, the black hat markets have matured to the point that there are many groups with different specialties able to work together to accomplish goals easily.

So, a company might be hit in many counter-intuitive ways. An intelligent, but non-technical, insider can circumvent DLP using step-by-step instructions on the web written by cutting edge expert. Type 1-2 attackers can target high value assets by leasing or contracting capabilities of organizations with Type 3 technical capability. Additionally, it's hard to distinguish between TLA's, commercial & criminal markets in technical capabilities of both surveillance and offensive tools. Far as a defender is concerned, any of these groups could cause grave damage if they chose.

Any ideas on a better way to do the classification? And especially one that takes into account the decentralized nature of the modern battleground?

WaelJuly 16, 2012 4:53 PM

@ Nick P, Clive Robinson

I am not too strongly opinionated about definitions - so long as we agree on the meaning. That way we won't talk across each other. I have thought about this as well a few years back. I will try to recollect my thoughts and get back to you later on...

WaelJuly 17, 2012 5:10 AM

@ Nick P, Clive Robinson on C-v-P

"Moving from an analogy to a usable model..."
Had an epiphany...
Castle represents: Complete awareness !!!
Prison represents: Total assured control !!!

Each has an owner inside, protecting assets (confidentiality, integrity, availability, and accountability).

Assets are people.

@ Nick P, I will give you my thoughts on categorization later. My skull is getting heavy...

Nick PJuly 17, 2012 3:00 PM

@ wael

My stuff derives from the old security kernel models. In those, there is no monitor really: data is forced (controlled) to do certain things and the assurance is put into that. Unusual stuff, if monitored, is logged for admins.

So, nice try again on making that analogy work. ;)

WaelJuly 17, 2012 3:29 PM

@ Nick P

Dude! The Analogy graduated and became a Model. You are such a tough customer ;)
Seriously though, this is a different discussion than the classifications thread. That... I still owe you.

WaelJuly 17, 2012 9:23 PM

@ Nick P

Clarification request:

In those, there is no monitor really: data is forced (controlled) to do certain things and the assurance is put into that. Unusual stuff, if monitored, is logged for admins.

You see, data doesn't act of its own free will! It's acted upon by something that's passive-voiced in your statement. Please activate the passive voice...

Nick PJuly 18, 2012 1:38 AM

" The Analogy graduated and became a Model. You are such a tough customer ;)"

But of course! What other type could muster the will to try to parry Clive all this time? ;)

"Castle represents: Complete awareness !!!
Prison represents: Total assured control !!!"

Well, see, that's the thing. The point of the B3/A1 class security kernels, later capability systems & recent SKPP-based platforms is [basically] "total assured control" of resources & information flow. (where it really matters most) The real backbone of Clive's plan combines extreme POLA & signature-based monitoring (awareness?) that works to the function level. So, the representation seems a bit backwards on the surface. It's one reason why that castle bothers me.

If it helps, I have some attributes of what he calls castles that might lead to a better term. Its focus is mitigating by design. It also tries to contain resultant damage. Logging of critical events of some sort was in past designs, although what to log isn't agreed on. The TCB is minimal (as possible), non-bypassable, and always enforces security policy. Correct by construction in concept, development, deployment & production. (There's more, but this is more than 99+% of current systems.) I've extended it conceptually to allow for monitoring (behavioral, signature, hash change) & recovery-based architectures. The "system" may be on a chip, local, a mix of trusted/untrusted, or totally decentralized. The important thing is the aforementioned traits are present. (Note: web 2.0 style stuff is excluded mostly because we're still trying to wrap our minds around getting it done right, much less securely)

(Note: Enforcement, configuration, design, etc. was where the real security value was in my model. Monitoring was a "just in case that didn't work" thing. Might have greater value these days. I talked like mine's security was passive b/c of how kernel/user mode works, it ends up that way: user stuff runs, tries something privileged, gets an automatic check by the kernel, passes/fails. How active does that sound? Other measures might be aggressive, but the main technique was reactive. They're gradually getting more proactive, hence me adding monitoring and recovery-style stuff over the past year or so.)

Personally, I think the castle model fails to capture this stuff because it talks about physical things & we're talking about information. This is similar, perhaps, to the DRM debate where the RIAA tries to use many physical "theft" analogies to describe illegal downloading of songs. It is left to the rational person to figure out how to equate stealing 30,000 CDs from a store & "copying" 30,000 songs. Managing data is quite different from people in a building. (Who copies people or stashes them steganographically in business files? ;)

Note: I noticed that Clive compared the two, saying castle trusts insiders whereas prison is supposed to contain insiders (distrust). I think that's inaccurate: the "trusted" subjects/portions of stuff in my model are supposed to be incredibly vetted, kept to a minimum, and communicate in restricted ways with other things. Translated, they're assured through a rigorous process, denied any extra resources, and use limited communication/sharing capabilities. To me, it just seems like two different ways (or degrees) of distrusting insiders. Castle analogy takes another hit.

So, you can keep saying it that way if you want. One advantage is people googling Castle vs Prison on this blog will find our old discussions, as it seems you were digging through some. We actually intended that to happen early on. I'm just concerned that the model might limit the way people think about the actual thing, which is quite a bit different. The things it does hit on correctly are that these designs take quite a bit of resources up front & they're fortified (hence, me saying fortress once).

WaelJuly 18, 2012 4:24 AM

@ Nick P

I hope I don’t irritate you with this. I will try to make it a little amusing, just in case…

Models serve to simplify real world objects, phenomena, situations, etc. In this case I am talking about constructing an abstract model of security, one that should not require understanding B3/A1, the Orange Book, WEB5.0, kernels, hypervisors, programming languages, CPUs (CISC or RISC), Common Criteria, quantum mechanics, clocks, encryption, protocols… I don't even know what POLA (or the other security principles you did not talk about) is.

So why do we need a model? Quite frankly, I don't see one. I see security failures and breaches that keep happening over and over. And you have security people saying "buffer overflow", "race condition", "side channel", DPA, or "implementation problems", and my favorite catch-all phrase "security hole / attack vector", or let's run Coverity or Klocwork for static analysis. Oh, let's sign this code; maybe if we use a "stronger hash", or have an immutable boot block. Oh! TCB will do it! How about a small verifiable microkernel, that's gotta be it! But by far the toughest one is what Clive Robinson threw at me, saying axioms can't be relied on -- that will take me some time to recover from. What I see is "best practices", B3/A1, Orange Book and the other links you and Clive Robinson mentioned – which are all good. But the big picture is missing. I will give you an example.

If I gave you two devices and asked you which one is more secure, how would you go about answering? Threat modeling and attack trees? (Bruce talked about that, I think, and I admit I do not fully grasp it yet)… We can talk about that in the "classifications" thread, but it is related to this discussion as well.

Once we construct that model, we can talk about security principles, implementations, B3/A1, etc. and the other lower level details.

Maybe we should take a different approach: If I were to ask you to build a hypothetical general purpose computer system, but it needs to be absolutely secure, guaranteed that no one can break it, where would you start? Remember this is only on paper. Start from scratch [1].

Do you “feel me”, Nick P? ☺
I stole that from Samuel L. Jackson from one of his movies when someone asked him. But I remember that his answer was, yes! “I feel you” ;)

[1] I once took a class in the math department with a world-renowned professor of Austrian origin. I ended up dropping it, but I learned more there than in other classes that I did OK in. I learned three things:

1- During a proof of a proposition that I had to do on the whiteboard, I started by saying: from Gauss's formula, we can… He stopped me there. He said, "Who is Gauss?" I told him: the German guy! Carl Friedrich Gauss? He said, "I don't know him". I stopped there, could not finish the proof, and went back to my desk thinking: this guy is strange! How could he not have heard of Gauss, especially as he (the prof) is Austrian? Of course I realized shortly after that he did not want me to take other people's word for granted without understanding how they arrived at their formula, even if it was Gauss. He wanted me to derive that formula before I used it to prove the proposition.
2- I also learnt that it is okay to go backwards when you are given something to prove. I always did that before this class, but always felt guilty that I "cheated". It's like going from the end of the maze to the beginning. But it's OK. He said that most mathematicians who came up with elegant proofs did that. They went backwards, or did a lot of messy unorthodox things. Once they reached what they wanted, they cleaned everything up and presented it as if they went in the sequence they made public.
3- I remember something funny he said. He was giving an overview of a problem. And said, I will give you a heuristic argument (before the proof), something that an engineer or a physicist would call a proof ☺

WaelJuly 18, 2012 11:50 AM

@ Nick P (C-v-P long discussion)

Perhaps we should take this discussion off-line. It seems no one is interested apart from you, Clive Robinson, O W, and RobertT.
Post your white-listed-general-purpose email address and I will reply to you. I don't have a white-listed email address.

Nick PJuly 19, 2012 10:29 PM

@ Wael

Now, I'm finally seeing where you're coming from. You said you want an abstract model of security? And you aren't familiar with kernels, POLA, or the TCB concept? These aren't buzzwords a la antivirus or "AES-256." These are among the tidbits of knowledge in the security domain that professionals use to improve the security of their systems. We have tools, principles, some abstract models, best practices, etc. They each do their part in the field(s). They are much like the body in that it's better to have each part of the whole than a single part. Quite frankly, it isn't a basic engineering discipline or typical math class: the truth of the models, the ability to connect them to reality, etc. is all very different and more difficult, especially as malicious intelligence exists in the system. i.e. "programming Satan's computer"

But abstract models for general purpose security... We can start with Bell-LaPadula for confidentiality and Biba for integrity. Military use-cases somewhat comply. Early Orange Book systems had to use that as the security model. It failed due to necessary components (e.g. certain drivers or regraders) having to work at multiple levels, difficulty mapping to reality, etc. Then you have Clark-Wilson for integrity, Chinese Wall for distrusting users on the same PC, etc. I didn't see any of these classroom models pan out into real systems. Most recently, we have the separation kernel model, which looks to be the isolation version of security kernels. Its profile, the Separation Kernel Protection Profile, has led to the production of several complying products: INTEGRITY-178B, VxWorks MILS, and (supposedly) LynxSecure.
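
A toy sketch of the Bell-LaPadula rules just mentioned ("no read up, no write down"), in Python with an invented three-level lattice:

-----
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def blp_allows(subject, obj, op):
    s, o = LEVELS[subject], LEVELS[obj]
    if op == "read":
        return s >= o   # simple security property: no read up
    if op == "write":
        return s <= o   # *-property: no write down
    return False

print(blp_allows("secret", "top_secret", "read"))     # False: read up denied
print(blp_allows("secret", "unclassified", "write"))  # False: write down denied
print(blp_allows("secret", "secret", "read"))         # True
-----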

Yet MILS/SKPP has its critics. Academics like Bell, co-inventor of Bell-LaPadula, have said it's essentially equivalent to the MLS problem. They also point out it was rushed by government & unproven by researchers first. So, we have very few useful models to work with for security in general. (Starting to see why I wasn't using them? ;) The longest-lasting real-world model is the ring model for access & integrity. Today, systems use just 2 of those states (kernel & user), but the division helps reduce critical bugs. Another model is the capability model, where access is granted based on ownership of a token/capability. Other models can technically be built on a capability system. A number of real-world systems use capabilities for access enforcement to minimize their security-critical parts. The hypervisor/VMM models have become popular, yet the concrete implementation often breaks their properties.

"If I were to ask you to build a hypothetical general purpose computer system, but it needs to be absolutely secure, guaranteed that no one can break it, where would you start?"

I'd tell you that it might be impossible. Here's the security problem as NSA's Brian Snow brilliantly summed it up:

"The problem is innately difficult because from the
beginning (ENIAC, 1944), due to the high cost of
components, computers were built to share resources
(memory, processors, buses, etc.). If you look for a
one-word synopsis of computer design philosophy, it
was and is SHARING. In the security realm, the one
word synopsis is SEPARATION: keeping the bad
guys away from the good guys’ stuff!"

"So today, making a computer secure requires
imposing a “separation paradigm” on top of an
architecture built to share. That is tough! Even when
partially successful, the residual problem is going to
be covert channels. We really need to focus on
making a secure computer, not on making a computer
secure – the point of view changes your beginning
assumptions and requirements!"

I've posted projects that are trying to do the ground-up redesign he speaks of. DARPA's Clean Slate program, with projects like Tiara and the SAFE architecture, is an example. However, that's got unknowns written all over it. Additionally, they are combining some of the best principles, approaches & specific defense techniques from the past to do the heavy lifting. The only certainty is that the ground-up portion will reduce covert channels & the effect of legacy. However, even if you eliminate most legacy architecture & covert channels, you still have to be able to run many legacy apps on the thing. Not happening: such a redesign requires them to be totally redone. Hence my middle path: security engineering that decomposes systems into components & allows incremental assurance work. You can have isolated apps running directly on the secure whatever, you can use a VM/wrapper for legacy apps, you can... pick & choose your level of security & compatibility.

There's nothing magic here. There's no model that you will look at and think, "why, that's exactly what I've been wanting all along!" (that maps to an equivalent concrete model...). If you post it, people will shoot holes in it from design, implementation, or legacy standpoints. A model is only as good as the implementations it allows. Good security must be done bottom-up from available hardware & top-down from good designs, models, whatever. It's a complex orchestra of stuff.

So, I'll have to think a bit on what to say next. I'm tempted to send you one of the classroom presentations I found & sent a newcomer recently. It nicely summed up the INFOSEC problem, security models, exemplar systems of the past, some recent issues, etc. Honestly, if you're talking security ground-up, it takes a broad array of knowledge. Security kernels were designed against formal models & in conjunction with many other things. In contrast, just switching from C/C++ to a managed language can immunize an individual app against many security breaches with no knowledge of security. It's a strange field. You must pick what you want to accomplish in it & then acquire the necessary knowledge/experience. That varies.

Nick PJuly 19, 2012 10:38 PM

@ Wael & others on email

Well, we have tried this before. Two things come to mind. The first is that Clive's approach is half the discussion & he's always preferred to talk here, never share an email, etc.

The other thing is that the conversation taking place on relevant and general Schneier threads is the only reason you two are in it. Much of our previous discussion on many topics was here presumably so that others would read it & be inspired to solve real world problems. This is, admittedly, a space hog of a topic that doesn't seem to go away. A private forum or discussion group would be better for most of the conversations to save space for others on this blog. (See point 1, though.)

WaelJuly 19, 2012 10:46 PM

@ Nick P

Good to hear from you again bud. You are tougher than I thought :) I was worried about flooding this forum with a subject that seems unending...

Maybe I will talk about the other stuff when I land...

WaelJuly 20, 2012 2:45 AM

@ Moderator
Please be kind enough to delete my post with the email.

@ Nick P
The other thing is that the conversation taking place on relevant and general Schneier threads is the only reason you two are in it. Much of our previous discussion on many topics was here presumably so that others would read it & be inspired to solve real world problems.

Now you made me feel bad. You led me to believe it's OK to exchange an email between you and Clive Robinson. You are right, though. But flip-flop on me again, and you will find in me a most merciless "poet wanna-be". I will post a limerick (payback is a bitch) which will start like this:

Nick P was a Security Schizophrenic from Nantucket.
;)

Nick PJuly 20, 2012 12:25 PM

@ Wael

"Good to hear from you again bud. You are tougher than I thought :) I was worried about flooding this forum with a subject that seems unending...

Maybe I will talk about the other stuff when I land..."

"Now you made me feel bad. You led me to believe it's ok to exchange an email between you and Clive Robinson. You are right though. But flip flop on me again, and you will find in me a most merciless "poet wanna be". I will post a lemerick (payback is a bitch) which will start like this:

Nick P was a Security Schizophrenic from Nantucket.
;)"

haha. Unexpected reaction. On certain topics the title seems accurate. On email, I agreed it would be nice. So would a forum or dedicated web site. (I'm feeling an idea brewing there.) However, as Clive is half the discussion, it is useless without his email or participation. The other statement isn't necessarily contradictory: summaries, conclusions, or later models could be posted in public forums like this during relevant discussions. The options are entirely on this forum, partly on this forum, or entirely private. Again, I'd be fine dropping publicity if we got Clive in on it. That's the real limiting factor.

WaelJuly 21, 2012 12:40 AM

@ Nick P

Again, I'd be fine dropping publicity if we got Clive in on it. That's the real limiting factor.

Let's keep it public until someone complains. I doubt we'll go anywhere if we went private.

(I'm feeling an idea brewing there.)

I was hoping you "feel me" first ;)
I can also think of some ideas, but they will add to the workload of the Moderator.

WaelJuly 21, 2012 1:31 AM

@ David

Somehow I missed this:
As an off-the-wall suggestion, how about embedded in the journalist's elastic knee bandage
Can you elaborate a bit? Sounds interesting...

Clive RobinsonJuly 22, 2012 4:16 AM

@ Wael,

Please don't post large, rude, or otherwise off-colour poetry, limericks, songs, etc. As Nick P will confirm, our host has told one or two of us off before for such behaviour (myself included). As it's his blog, I'm minded to follow the rules.

Anyway, back to the security question of C-v-P, or whatever those discussing it wish to call it.

Firstly, please do not make the mistake of thinking very few people read the comments... Many "non-commenters" read this blog, and as we know, one or two people's past comments have been cited in published papers etc. Further, if you read academic papers you will find that many opinions expressed in them have come into line over the years with the thinking first expressed by people on this blog.

One of the reasons I post here is that not only can and do many others read here but, if people don't agree, they can say so without fear of the consequences that other, more academic processes would impose. Likewise they can also ask questions and give other viewpoints. In return for this freedom I say feel free to use the ideas etc. I post, and if you do, cite me as a politeness. Oh, and if you ever meet me or Bruce, buy us a drink (even if it's only a cup of tea) to say thanks.

Oh, and with regard to "blocking the blog", I generally wait for the "on topic" thread traffic to die down or stop if the comment is effectively becoming "off topic", and that way cause minimal disruption.

As for email, one reason I don't use it in a more general way is that I have in the past (the old hotmail account amongst others), and I tend to end up having revolving email addresses due not to spam but to people who for various reasons wish to actively hunt me out (I like my peace and quiet to think, oh, and to have family time etc...).

But now back to the subject proper. There is a reason why I treat C-v-P more as an ideas/talking point than as a hard model: it may well never make it as a long-term model due to changes in the underlying technologies. To see why this might be, ask yourself,

What will happen when we hit the buffers on the effective "doubling up" of system power every software revision cycle [1]?...

Obviously this will, as a minimum, act as a bit of a "hard limit" on the attitude of software developers with fixed revision delivery cycles to the more outlandish feature requests from Marketing. That is, in the "single CPU" market they will have to start writing code that is considerably more efficient, and that lacks those outlandish bells and whistles of marketing wish lists, which is where a lot of hidden vulnerabilities hang out [2].

In essence, some of these insecurity issues which gave rise to the ideas of C-v-P might, with a small probability (OK, vanishing to nothing), be consigned to the dustbin of history (that is, "if all" software developers actually start taking a proper engineering viewpoint). However, as we can see with "multiple core" hardware development, the current favoured solution of the hardware manufacturers is to find ways, at almost all costs, to maintain Gordon Moore's observation (no, it's not a law and never was, except in the minds of journalists and their readers).

So my view is that the hardware manufacturers will continue down the "multiple core" route, either for some time or as an end objective to the problem. If the latter, then they will have to start giving up less well used CPU features at some point to make space for more cores "on the chip" [3]. This will in turn lead to a simplification of the CPU cores, which we have already seen Intel do by going from the pure CISC of the early x86 line to a RISC core with a complex instruction interpreter wrapped around it. Likewise, historically the architecture of many CPU families was switched from von Neumann to Harvard to allow for instruction pipelining and later for instruction and data caches, the von Neumann "joining of the data and instruction busses" happening on the periphery of the design almost as an afterthought [4].

This gives rise to a difference between Nick P and myself, simply because his viewpoint is the pragmatic "work with what we've got", whereas I'm trying to "crystal ball gaze" by guessing which way the industry will move and pointing out what can be done to improve security as part of the process.

However, a fundamental difference of opinion lies between myself and Nick P over the future of software development. My viewpoint is that "code cutting" will always be more prevalent than security engineering and won't get resolved except by making the secure route not just cheaper but also more productive. Nick goes for the viewpoint that tool chains will evolve to make more secure coding easier (which they will), but thinks that will be sufficient. Sadly the current market says otherwise, and I cannot see that changing in my lifetime without external drivers such as "lemon law" type legislation, which is very undesirable as it puts limits on everyone and turns development into a "closed shop" that inevitably becomes an "auditors' race to the bottom" and thus fails (See PCI specs and rules and how the payment card industry applies the rules on what

Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure.

[1] : The doubling in system power with every software release is actually a bit odd, as there is no real verifiable reason we can see for it. Gordon Moore made an observation about transistor count doubling in a given time period, and for some reason the ability of the industry to maintain that rate of progress continued for a couple of decades. However, that is no longer the case, but other factors such as HD performance etc. have allowed this effective "doubling up" of system power to continue.

[2] : The problem with marketing wish lists is that very few people ever use the functionality provided, so there is a lot of very redundant code in many applications (such as flight simulators in spreadsheets ;-). Now, generally, if the code is written in a cooperative way with the OS, the code gets swapped/paged out of memory never to return, thus conserving main memory. However, if somebody uses the feature, then it either stays put or gets paged back in. The problem is that although it might not normally be in main memory, the bugs etc. that cause insecurity are still very much part of the "attack surface". Also, the less used a feature is, the less noise the users make about its bugs, and therefore the longer they persist in the code base the software house pushes out to users....

[3] : There are very real limits to what is currently possible on the "chip" due to the "bottlenecks" of "off chip communications" and packaging limited by some of the laws of physics. And if you look at some modern "chip designs" you will find all sorts of techniques to mitigate the comms issues, and some in the packaging designs. Some of them are fairly obvious, such as on-chip caches with techniques such as "write through, read from cache"; some, however, won't work in multiple CPU systems if data has to be shared effectively amongst the CPUs.

[4] : The need for this aspect of the von Neumann architecture comes down to a basic couple of reasons: firstly, so that code can be loaded into memory by the CPU itself, and secondly, to allow for code to evolve in memory one way or another. Both of these are requirements for non-embedded OSes and quite a few more modern embedded systems. However, both are security risks, and they can be mitigated in multi-CPU systems simply by making one CPU do the memory management as a well shielded process via a hypervisor etc.


Clive RobinsonJuly 22, 2012 4:44 AM

@ wael,

I occasionally say rude things about my mobile phone because it sometimes locks up or thinks the submit button has been pressed (it appears to be a bug in the keyboard driver...).

Anyway, it's just done it again...

I was in the middle of editing my above (mega) comment when it just decided to post...

So the bit between "(See PCI specs..." and the first note below it will not make any sense.

I will get back to it a little later; in the meantime I'm going to make a relaxing cup of tea, lest I decide to give this phone flying lessons ;-)

WaelJuly 22, 2012 6:06 AM

@ Clive Robinson

Thank you for the clarification; no more "poetry"... I saw Bruce in 2003 at an RSA conference (I think), from far away; maybe one day we'll meet. I understand your email stance as well, and I agree with it.

It's too early in the morning for me to talk about the subject matter proper, so I'll try to get to it later on. I am new to commenting on blogs, even though I have been following this one for a few years, so I imagine many more are just reading...

One question: I often see others write "@ soandso" rather than just "soandso" when referring to a person. I understand the "@" at the beginning of a line, but not when referring to or citing someone. Does it have to do with filtering an RSS feed?

DavidJuly 22, 2012 6:24 AM

@ wael

The format has been around for as long as I can remember (at least 18 months!) in a number of places, not just Bruce's blog. It's probably a simple variation of email addressing, but beyond that, I have no idea.

DavidJuly 22, 2012 6:26 AM

grrr... I used diamond brackets... which were promptly stripped...

it should have said, "the {@ name} format..."

WaelJuly 22, 2012 6:48 AM

@ David

I understand now. You are talking about wearable electronics and hiding them from adversaries. Not crazy; it has been used before, and has a counterpart from the distant past. I think the ancient Greeks (the story is in Herodotus) used to shave a messenger's head, write a message on it, wait until the hair grew back, then send the messenger to the desired recipient. Not an effective method for "emergency" news or instructions.

Thanks for the clarifications about the formatting.

Clive RobinsonJuly 22, 2012 6:56 AM

@ Wael,

So to continue...

... turns development into a "closed shop" that inevitably becomes an "auditors' race to the bottom" and thus fails (See PCI specs and rules, and importantly how the payment card industry applies the rules, in what looks very like "the more money you make us, the more lenient we will be", which is just another version of "Too Big to Fail" thinking).

Another area of difference is that Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure or have security wrappers placed around it. I don't think it will, unless people are forced to do so.

Currently you can put a wrapper around insecure code by putting it in a Virtual Machine (VM), but that is fraught with problems. Most code attacks these days are based on using failings in the code's input routines to get the code to behave differently, either by injecting malware or by causing the code to jump to some other point. Putting a Vanilla VM (VVM) wrapper around the code won't stop this; all it will do is maybe limit the damage to the VVM (doubtful, if you look at the history of "sand boxes"), which is of no use if the attacker is after the application data, as the VVM will allow this to happen because it's seen as "normal operation". Worse, some people will just say "we can't re-write it because we don't have XXX...", where XXX is any excuse that can be thought of [5].
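
To make "failings in the code input routines" concrete, here is a toy C fragment of my own (purely illustrative, not from any real code base). The unchecked copy is the classic case; running it inside a VVM changes nothing, because the overflow corrupts the application's own data, and the VM treats that as normal operation.

-----
/* Classic unchecked input copy, and a bounded version of the same. */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* no length check: long input smashes the stack */
    printf("stored: %s\n", buf);
}

void fixed(const char *input) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);   /* bounded copy, truncates */
    printf("stored: %s\n", buf);
}

int main(void) {
    fixed("much-longer-than-sixteen-characters");      /* safely truncated */
    /* vulnerable("much-longer-than-sixteen-characters") would overflow. */
    return 0;
}
-----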

I'm of the opinion (as I've often said on this blog) that supporting any old legacy code is very, very bad (there are exceptions) for a whole host of reasons, including the various races for the bottom. And worse, long-life embedded systems become very vulnerable to protocol fall-back attacks initiated from a Man-In-The-Middle attack on the communications paths [6]. Surprisingly, if you check online, various secure OS designs from back in the 1960s and 70s actually recognised this problem, and this gave rise to the notion of a "Trusted Path" back to the kernel etc.

Thus I'm of the opinion that you should in effect force the software to be either re-written or written in such a way that it can be properly upgraded and supported. The easiest way to force a re-write is to no longer support the old system's platform. The easiest way to write code that can be properly upgraded is with an appropriately high-level "scripting language". This has a secondary advantage: as the number of bugs in code appears to be language-independent, depending more on the actual number of lines of code in any particular function, the number of bugs for any given function should drop with higher-level code. Further, higher-level code has other advantages in that it allows much faster code development and is easier to security check, both manually and automatically.

There is another problem with the "use what we've got" philosophy: it encourages an entrenched position to form that can easily turn into a monoculture. You've probably heard it before: "Monocultures lack hybrid vigor and make attacking much simpler." This has been seen in the past where "standards" were formulaic within an environment (healthcare); whilst the general level of security was raised, it became brittle, and when an attack vector was found the house of cards came tumbling down.

Another problem with "use what we've got" is that it almost invariably produces a "top down" approach, that is, from source code down the tool chain to executable code. Whilst this has many advantages, it suffers from "bubbling up" attacks. That is, if an attacker gets ownership of the system below the level of the tool chain, or even the OS, then you may well discover your boat won't float.

There are several facets to these issues, but the two main ones to consider are that, in the current single large memory model, the attacker has the ability to see and modify the memory, and the OS and everything above it can easily be blinded to this. Think, if you will, what fun you can have with an insecure interface that allows Direct Memory Access (DMA) (of which there are several), or worse, one that is assumed to be secure and allows code to execute at the highest privilege levels [7].

All of this becomes even more fun when you realise that, due to "preferred supplier" arrangements for major contracts, there might be only a choice of one or two hardware suppliers (say, Dell being one), and that to optimise their inventory the motherboards are almost identical across the range, and even the optional extras are from a very limited source of supply. Which is verging on a monoculture in large, information-rich organisations.

There are ways to deal with "bubbling up", but they require a significant change in the hardware design.

One such change needed is to properly enforce hardware-based segregation of processes and have the communications mediated in such a way that it can be controlled for side channels of various forms. One such trick is to encrypt all user-derived data before it goes into an untrusted device; this almost entirely removes the possibility of a "trigger word" based attack.
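
As a toy sketch of that trick in C (mine, purely illustrative): the XOR keystream below just stands in for a real cipher such as AES-CTR, and the point is that the untrusted device only ever sees ciphertext, so a pattern matcher inside it has nothing to match.

-----
/* Encrypt user-derived data before it crosses to an untrusted device.
   The keystream here is a toy and NOT cryptographically secure. */
#include <stddef.h>
#include <stdio.h>

static unsigned char keystream(size_t i) {
    return (unsigned char)(((i + 1) * 2654435761u) >> 24);  /* toy PRNG */
}

void encrypt_before_device(unsigned char *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= keystream(i);   /* device sees only ciphertext */
}

int main(void) {
    unsigned char data[] = "trigger-word-goes-here";
    encrypt_before_device(data, sizeof data - 1);
    /* A filter on the device scanning for "trigger" now finds nothing. */
    printf("first byte on the wire: 0x%02x\n", data[0]);
    return 0;
}
-----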

I could go on at length, but one significant issue is that of the shared memory space (which is in effect the interior of the Castle), where at least one vulnerable process knows the mapping to other processes, and where other privileged code can find out simply by examining the mapping tables in that process. The best the current single-CPU architecture can manage is obfuscation, which, whilst currently effective, will probably cease to be in a short period of time (such is the nature of software-based attacks).

Thus you are effectively forced into the multi-CPU or multi-core configuration with the ability to have hard memory and CPU segregation built in from scratch (via an independently controlled MMU). You also remove the need for the von Neumann architecture kludge, and all of a sudden things get a whole lot simpler.

Another thing that it would be desirable to get rid of is preemptive task switching in the way it is currently done, as this can leave data hidden in CPU registers etc. if not properly implemented [8].

Anyway, enough for now; I've just discovered I have an urgent issue to deal with that requires me to pack a few things :-(

[5] : As an example of this, I still have a code base going back to the late 1980s/early 90s that I still support. Basically, the hardware it runs on has not yet been deemed "at end of life" even though it's not repairable now, so the Availability figures are looking very bad due to the MTTR being effectively infinite (this "you can't get the parts" is one of the problems NASA had with the space shuttle systems). Despite gentle prodding, the system owner will not make the required investment to have the hardware replaced and the code ported to a new platform and, importantly, re-written from the very early version of C (a modified Small C compiler from Dr Dobbs) that implemented a custom scripting version of BASIC (don't ask, it's too long ago).

[6] : There are ways legacy code, especially in embedded systems, can be properly supported. However, it's an entire subject in its own right; to see why, Google for my comments about framework standards and embedded systems such as "smart meters" and "medical implants" that are expected to have 20-50 year life spans.

[7] : As an example, the HD interface is almost universally trusted and in many cases allowed to do DMA, as it significantly improves performance. Both ends of the link have CPUs, and almost invariably there is no mechanism to check the state of the CPU on the HD. Worse, data gets written to the HD almost raw, thus user input can be seen by the HD CPU. If that CPU has bugs, then it may well be possible to exercise them to advantage. But there is also the issue of the supply chain: the HD may already have the malware on it, which has a "matched filter" and is simply waiting for a trigger of a couple of thousand bits in length to get through the filter and start the fun. This fun could simply be to generate a random key and start encrypting the hard drive transparently, then on a second trigger simply forget the random key, and goodbye data. Obviously there should be backups, right? But how current? And how do you stop the attack again? Or how about it recognises certain (effectively) hard-coded places where the kernel code or other important code is stored and inline-injects a nice bit of malware, or more usefully changes some of the system configuration information...
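
As a toy model of that "matched filter" in C (the pattern and scan are invented for illustration): drive firmware watching raw written sectors for a long, improbable trigger pattern. Plaintext writes give it something to match; the encrypt-before-device trick above takes that away.

-----
/* Toy "matched filter" in hostile drive firmware: scan each written
   sector for a long trigger pattern. Illustrative only. */
#include <stdio.h>
#include <string.h>

static const unsigned char TRIGGER[] = "long-improbable-trigger-pattern";

int firmware_sees_write(const unsigned char *sector, size_t len) {
    size_t tlen = sizeof TRIGGER - 1;
    for (size_t i = 0; len >= tlen && i + tlen <= len; i++)
        if (memcmp(sector + i, TRIGGER, tlen) == 0)
            return 1;   /* matched: a payload would now arm itself */
    return 0;
}

int main(void) {
    unsigned char sector[64] = "data long-improbable-trigger-pattern data";
    printf("armed: %d\n", firmware_sees_write(sector, sizeof sector));
    return 0;
}
-----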

[8] : The paper on encrypting RAM I pointed you to the other day clearly shows this issue; as noted, not all the registers are protected by the privileged code rings...

Clive RobinsonJuly 22, 2012 7:09 AM

@ Wael,

As far as I'm aware, the at symbol has no real significance other than that it makes a reasonable visual signpost if you are skimming through a large post etc., and I think it originated before text searching (Ctrl-F etc.) became commonly used in browsers.

It also helps where people don't "cut-n-paste" a person's name, or it is done imperfectly by the browser for some reason (accented characters and non-ISO-Latin characters can cause this, but shouldn't these days; the browser I use has the annoying habit of translating some characters to their percentage-number equivalents when cutting-n-pasting partial URLs).

Nick PJuly 22, 2012 7:20 PM

@ Clive Robinson

"Nick P appears to asssume that all code can be and importantly will be re-writen to make it more secure."

"Another area of difference is that Nick P appears to asssume that all code can be and importantly will be re-written to make it more secure or have security wrappers placed around it. I don't think it will unless people are forced to do so. "

I appreciate your revised statement. I scoffed at the first. ;)

The revised statement and the VMM stuff are semi-accurate. First, the VAX Security Kernel showed the VMM approach can be highly secure if designed that way from the ground up. Second, there are other wrapper techniques that have solved different problems in the past. I'm flexible about how I wrap. Lastly, there's the possibility of extracting security-critical functionality into robust components, leaving legacy on its own system or in deprivileged VMMs. The last is what some companies and academics have been doing recently.

The rest of your post seems accurate enough.

Nick PJuly 22, 2012 7:23 PM

@ Wael on @

It's a contemporary usage of @, inspired partly by email no doubt. We've been doing it here for years. Like Clive says, it's a visual cue. One recent use I have for it is to scan the "last 100 comments" looking for @ (our names). As I'm using multiple machines right now, it helps my lack of live bookmarks.

Clive's changed his a bit, though. I used to pick at him b/c he would say @NickP instead of Nick P when referring to me in messages. It made more sense as an attention grabber. ;)

WaelJuly 22, 2012 10:58 PM

@ Clive Robinson, @ Nick P


I believe, and have previously stated, that both of your approaches are needed and complementary.

We need to have a TCB-oriented system that addresses most of the known weaknesses, and we need to have the "monitor", however it is implemented, to cover the unknowns. This somewhat covers our lack of awareness of all attack vectors. Nick P is emphasizing control, and Clive Robinson is emphasizing covering the lack of awareness (unknowns).

Regarding the short term and the long term: Nick P's approach, more or less, addresses the short to medium term, and Clive Robinson's, somewhat, addresses the medium to long term path. It's hard to bypass evolution or "short cut" it.

My proposal in a nutshell was: what elements do we need to achieve "Security"? I listed some: Awareness, Control, Ease of use, the concept of the Owner, etc... Once we know these elements, and "security" has been agreed on and defined, then we could look at the dynamics between those elements. How does lack of awareness affect security? Can we compensate for that with more control? If we add more resources towards control, what effects would that have on the rest of the parameters? Resources (implementation-wise now) are limited and shared among these "elements of security", and the split may vary from one implementation and use case to another. Once those dynamics were understood (the high-level model), then we could go a level down and understand the limitations of HW and SW implementations. It is at this stage that we would encounter what Nick P and Clive Robinson are talking about. That was my approach.
