Friday Squid Blogging: Dissecting a Squid

This was surprisingly interesting.

When a body is mysterious, you cut it open. You peel back the skin and take stock of its guts. It is the science of an arrow, the epistemology of a list. There and here and look: You tick off organs, muscles, bones. Its belly becomes fact. It glows like fluorescent lights. The air turns aseptic and your eyes, you hope, are new.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on July 6, 2012 at 4:58 PM • 155 Comments

Comments

Petréa Mitchell July 6, 2012 5:06 PM

The pixelated camouflage the US Army has been using for the last 8 years… doesn’t actually work. Allegedly there were tests on a variety of patterns, but some highly placed person decided on pixelation before the tests were completed because the Marines had it.

Not fully explained in the story is how the Marines wound up with a pixelated camo pattern which, presumably, doesn’t hide them any better than any other pattern would.

Petréa Mitchell July 6, 2012 5:09 PM

A lot of people discovered this week that giving Facebook “access” to their smartphone contacts meant it could change their contacts. Which apparently it did, and then lost some of the e-mail which was sent to it as a result. Here’s a good roundup.

(For those looking to leave Facebook at this point: the center of gravity is reportedly shifting towards Tumblr and Twitter.)

Petréa Mitchell July 6, 2012 5:22 PM

New candidate for the biggest financial fraud in history: the LIBOR fixing scandal. Here’s an overview, and a more detailed explanation of what’s going on.

The tl;dr version is that a critical bit of the world financial machinery relies on banks estimating what it costs them to borrow money and then reporting it on the honor system. You can guess the general outline of the scandal from there. Allegations include misreporting for both self-preservation (honest numbers would have indicated how bad the situation was for banks that were in trouble) and for collusion (e.g., helping out someone who needed the rate to be low or high on a particular day for a deal to be more favorable). The second article also takes a look at proposed fixes.
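
To make the honor-system mechanics concrete, here is a toy sketch of a LIBOR-style fixing in Python. The trimmed-mean step (drop the top and bottom quartile of submissions, average the rest) follows the published procedure in outline; the panel size and the rates are invented:

    # Toy LIBOR-style fixing: drop the top and bottom quartiles of the
    # submitted rates, then average the rest (all numbers invented).
    def fixing(submissions):
        s = sorted(submissions)
        q = len(s) // 4
        middle = s[q:len(s) - q] if q else s
        return sum(middle) / len(middle)

    honest = [2.10, 2.12, 2.15, 2.15, 2.18, 2.20, 2.21, 2.25,
              2.26, 2.28, 2.30, 2.31, 2.33, 2.35, 2.38, 2.40]
    shaded = honest[:]
    shaded[7] = 2.10                     # one bank low-balls its submission

    print(f"honest fixing: {fixing(honest):.4f}%")   # ~2.2488%
    print(f"shaded fixing: {fixing(shaded):.4f}%")   # ~2.2363%
    # Even a trimmed-out low-ball moves the fixing, because it changes which
    # quotes land in the middle; several banks nudging the same way (the
    # collusion allegation) move it much further.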

Clive Robinson July 6, 2012 5:36 PM

OFF Topic:

Bruce,

I don’t know if you have seen this site,

http://safeman.org.uk/

But it contains a history of safes and safe breaking in the UK. It also has an attached site,

http://peterman.org.uk/

Which has a potted history of UK “petermen”: safe crackers who ended up using explosives of various types.

kingsnake July 6, 2012 5:39 PM

The “So You Want to Be a Security Expert” article put the Byrds in my mind so that I couldn’t get them out …


So you want to be a “security expert”?
Then listen now to what I say
Just get a Schneier book
Then take some time
and learn how to hack
with your system locked tight
And your router a-light
It’s gonna be all night

Don’t sell your soul to the Company
Who are waiting there to buy vaporware
And in a week or two
If you crack the net
The feds will tear you apart

The price you paid for your riches and fame
Was it all a strange game?
You’re a little insane
The money, the fame, the public disdain
Don’t forget what you are
You’re a “security expert”!


Not my best work, but the best I could come up with in about 15 minutes. 🙂

kingsnake July 6, 2012 5:40 PM

Hmmm … separate lines in the comment box got smooshed together. Oh well …

Wael July 6, 2012 6:07 PM

@ Kingsnake

For some reason I have the same problem. Blog is not prose / poem / limerick (11221) friendly.

Clive Robinson July 6, 2012 6:09 PM

@ Petréa Mitchell,

Allegations include misreporting for both self preservation … and for collusion…

In your comment about LIBOR you forgot to mention the (now ex-) head of Barclays Bank PLC, one “Bob Diamond”, who was called to testify in front of MPs (who had stitched him up to force him to resign).

Well, to their horror he briefly let the “cat out of the bag” and gave evidence that strongly suggests that on at least one occasion the manipulation of LIBOR was at the behest of the previous (Labour) UK Government…

I have a feeling that this story is “going to grow legs”…

@ kingsnake,

Hmmm … separate lines in the comment box got smooshed together. Oh well …

I notice it’s been doing it for about a week now, and it’s very annoying as it looks like we’ll have to resort to “list tags” which are just a pain 🙁 so I guess the question is,

“What’s the Moderator been up to?”

NobodySpecial July 6, 2012 10:42 PM

@Clive Robinson – “calls may be recorded for security and training purposes”, hope he kept the tape!

Nobodyspecial July 6, 2012 10:57 PM

@Petréa Mitchell – the Marines’ pixelated grey camo works. The army wanted a modern, pixelated, computer-ish camo as well.

But the army’s corporate colors are green and brown – so it got green+brown pixelated camo, which didn’t work.

andrews July 7, 2012 3:27 AM

@Petrea Mitchell and Nobodyspecial – the Marine Corps woodland (green) MARPAT camis are slightly superior to the older camis. The green digital camo works better in the dark while wet, which happens more often than you might think. The other camo patterns tend to look like a dark black blob. A large reason the USMC changed patterns was because we wanted to distinguish ourselves from the other services. The “enemy” also understands the differences between the army and Marines.

Clive Robinson July 7, 2012 3:35 AM

ON Topic,

Having seen Bruce’s,

This was surprisingly interesting

I thought I’d give it a read in the morning. So there I am, munching my breakfast with one hand and scrolling down the article on the mobile with the other.

No worries, I thought, I’ve “dissected” a few squid “for the cooking pot” in my time, so I’m not going to read about anything I’ve not actually seen and in some cases enjoyably eaten (lightly battered and deep fried is nice, or slowly stewed and then baked in a loaf of bread, etc. 🙂 )

Then I got to the bit about licking the pus off the back of a cockroach… and for some reason my breakfast, which up to that point I’d been enjoying, suddenly became distinctly unappetizing…

Tom July 7, 2012 4:04 AM

Yesterday there was an interesting documentary about airport security on Belgian (Dutch-language) national television.
http://www.canvas.be/programmas/terzake
(the episode of 06/07/2012, starting at 21:25 min).
They showed in rather detailed fashion how easy it was to get weapons onto an airplane in France, including on intercontinental flights to the USA (around 33:00 min). When they confronted the TSA with this information I found their reaction rather hypocritical…
The conclusion of the documentary was that it is too costly to have adequate screening, and that most of the screening done is security theatre to give us peace of mind.

Unfortunately most of it is in Dutch, but people interested in airport security should give it a try :o)

rob July 7, 2012 4:28 AM

In the UK, thousands of people were held up in traffic queues for several hours, a major motorway was blocked in both directions, and there was a full-scale turnout by anti-terrorist police and, according to some reports, the military, because someone was using an eCigarette. This story was all over the media, but in the nature of 24-hour news it is quickly disappearing. Serious People on the radio this morning were saying that the response “was proportionate” and that we must all be on the lookout for “terrorist threats”. The threat was reported by another passenger who phoned the police on their cell phone.

Not difficult to envisage a couple of terrorists, one with an eFag and one with a mobile phone creating havoc without actually doing anything illegal or disapproved of by the authorities.

Wael July 7, 2012 9:08 AM

@ Clive Robinson

“Then I got to the bit about licking the pus off the back of a cockroach”

I hear you. Reached that point too, but luckily after my breakfast. I kept telling myself it’s not pus, it’s guacamole… Didn’t help much. Drank some pink stuff — a generic brand of Pepto-Bismol®

More on the article… I did not think equipment would freeze at the depth squids live at!

Petréa Mitchell July 7, 2012 10:01 AM

Clive Robinson:

The second article does touch on the possible collusion of British regulators (and other national ones), but the allegations on that point are still so vague that it didn’t seem necessary to include them in the summary.

Moderator July 7, 2012 11:52 AM

so I guess the question is, “What’s the Moderator been up to?”

Blundering about like a bull in a china shop, apparently.

I think once this comment triggers a rebuild, the missing line breaks will reappear above.

dbCooper July 7, 2012 12:58 PM

Subverting airport security via the ground crew seems to have worked for about six weeks in Omaha, NE…

“Court documents obtained by the KETV NewsWatch 7’s I-Team detailed allegations made by an FBI special agent, which show that Foster pretended to be a United Airlines employee at Eppley Airfield for six weeks starting in April 2012. Foster is accused of accessing secured areas and computers at Eppley Airfield, according to the court documents.”

http://www.msnbc.msn.com/id/48093487/ns/local_news-omaha_ne/t/man-accused-impersonating-airline-employee-eppley/

Anon July 7, 2012 1:06 PM

http://bash.org/?949560

Bash.org is a quotes website, usually quotes off IRC involving some form of impressive stupidity. In this case, impressive stupidity, USB magstrip reader, credit card, and IRC.

Clive Robinson July 7, 2012 2:55 PM

@ Wael,

I did not think equipment would freeze at the depth squids live at!

In this respect “freeze” has a couple of meanings. The first is the literal temperature “freeze”, as in water going solid. The second simply means “to stop” moving/working, from the mechanical terminology, which if I remember correctly derived from the biological meaning, as in “frozen to the spot/rigid”, which in turn was derived from the first meaning, since a side effect of water freezing is that it stops moving in the usual way…

You also need to remember that ~32 ft of water is the equivalent of one atmosphere of pressure, so it mounts up quite quickly as you descend into the depths. And with respect to this, the second thing to remember is that explosives have been and still are used to generate pressure waves to “cold weld” metals together, and if you get the pressure right, soot turns to diamonds (with a little extra thermal help).
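
A quick sanity check on that figure: hydrostatic pressure is P = ρgh, so with seawater at an assumed density of 1025 kg/m³, one atmosphere corresponds to

    \[
    h = \frac{P_{\text{atm}}}{\rho g}
      = \frac{101\,325\ \text{Pa}}{1025\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2}
      \approx 10.1\ \text{m} \approx 33\ \text{ft},
    \]

i.e. every further ~10 m of descent adds another atmosphere.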

Thus one problem with all underwater equipment used at reasonable depth is how to operate it from/at what is effectively a low-pressure point.

Whilst a hollow ball of steel with sufficient thickness will not crush at most depths, the minute you make a hole in it for a camera to look out of you start getting problems, unless your optical material has similar properties to the steel or you take other precautions. Likewise drilling holes to take electrical or mechanical signals in and out to operate the equipment. Then there is the issue of maintenance hatches etc…

Then on top of that there are the thermal issues of working at depth. For instance, batteries really do not like the cold: their capacity can quickly drop below 20% of that at room temperature when below some technology-dependent figure (NiCd and lithium battery packs used in high quality “TV Cameras”, for instance, are down to about 30% at the temperatures of arctic autumn and spring, as wildlife documentary makers have found out).

Then the electronics itself can be very temperature sensitive. For instance, “voltage reference” sources used for A-D converters can easily be susceptible to temperatures even a few degrees outside of their design operating range (0-40C for the majority of consumer grade equipment). Remember that a few years ago hackers found that various “secure electronics” features could be bypassed by putting the electronics in a domestic freezer (approx -18C) overnight. Something the original designers had not considered.

Outside of consumer grade equipment are the “industrial” and “military” temperature grades, but equipment in these temperature brackets is either custom built or inordinately expensive (have a look at the price of ordinary civil UHF two-way radios and then those that are Mil temperature spec; it will make your eyes water).

Having designed equipment for Mil, Oil/Chemical industry and Fast Moving Consumer Electronics (FMCE) markets, I can assure you the price differential is in many cases justified.

As an example, in FMCE it is not unusual to see the base bias on a transistor being “current bias”, as this saves the PCB space/cost of a resistor and improves battery life. However, for “outdoor use” in winter the circuit will not work, because current bias is way too temperature sensitive, so you have to have the extra resistor and much higher current of “voltage bias”.

Likewise the choice of capacitors in oscillator circuits. If you look at the basic “inverter circuit” oscillator for quartz crystals at 32 kHz, you will actually find that the capacitors and resistors are usually selected to make the inverter an RC oscillator close to or at 32 kHz. The reason for this is so they actually start up and excite the crystal sufficiently close to its desired resonant frequency that it “pulls in”, whereupon the significant change in the quartz crystal’s impedance in the circuit causes the quartz resonator, not the RC time constant, to become the frequency-selective component. If the capacitors are too temperature sensitive, then the RC oscillator frequency may not get close enough to the quartz crystal frequency for the impedance change to happen… Similar issues apply to other oscillator circuits and tuned amplifier circuits, likewise some “spring coil” inductors.

A good circuit design is such that any change in characteristics with temperature is known, and the appropriate temperature-coefficient capacitors are used in a way such that the resonant frequency remains as constant as possible over the desired temperature range. Even things like “bees wax” used to reduce “microphonics” in inductors has physical temperature coefficients that can change the electrical characteristics of a circuit…
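
A back-of-envelope check of that start-up argument in Python. The f ≈ 1/(2.2RC) rule of thumb for a CMOS inverter RC oscillator, the ±1% pull-in window, and the drift figures are all assumptions for illustration:

    # Will the RC start-up frequency land close enough to the 32768 Hz
    # crystal for pull-in? All figures here are assumptions for illustration.
    XTAL = 32768.0        # Hz, watch-crystal frequency
    PULL_IN = 0.01        # assume the crystal captures within +/-1%

    R = 1.39e6            # ohms
    C0 = 10e-12           # farads, nominal at room temperature

    def rc_freq(c):
        # Common rule of thumb for a CMOS inverter RC oscillator.
        return 1.0 / (2.2 * R * c)

    # A stable C0G/NP0 capacitor might drift ~0.3% over temperature;
    # a cheap high-K ceramic can drift several percent (assumed values).
    for name, shift in [("C0G, cold", 0.003), ("high-K ceramic, cold", 0.08)]:
        f = rc_freq(C0 * (1 - shift))
        ok = abs(f - XTAL) / XTAL <= PULL_IN
        print(f"{name}: RC runs at {f:.0f} Hz, crystal pulls in: {ok}")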

Clive Robinson July 7, 2012 4:13 PM

@ Petréa Mitchell,

… but the allegations on that point are still so vague…

Not sure how your local politicos/civil servants go about this sort of thing.

But in the UK the wording as given by Bob Diamond is exactly what you would expect from a “politically inspired communiqué” from the likes of a “Civil Servant” (who are supposedly politically neutral) to a business/industry chief.

Which is why, when Bob passed it on, there was a “jump to it” attitude to comply.

Acting correctly on such political hints is where recommendations for knighthoods from politicos to the awards and honours committees arise.

The reason such phrasing is used through an intermediary is to allow for “plausible deniability”. When we see the offending Civil Servant appear before the enquiry he will use the line that “too much was read into the otherwise innocent statement they had made”, and it’s an almost certain bet the politicos asking the questions will say “just so” and give him a nice comfortable time. There might be a faux pretence at serious cross-questioning for the journalists, but nothing that the politicos know the civil servant won’t easily be able to refute/rebuff.

Such is the game these people play. We know this because we have seen it all before countless times, and also because we know the politicos used the intermediaries of the Governor of the Bank of England and the head of the FSA to send exactly the same sort of message to the other directors and major shareholders of the bank to cause Bob Diamond to resign.

The last time we saw this sort of game played out big time was over Iraq and the “dodgy dossier”, where Dr David Kelly made the mistake of pointing out the report was at variance with the known facts and shortly thereafter was found dead under a tree with his wrists slit, and the head of the BBC was forced into a position where he resigned. Oh, and the head of that enquiry became known as “Lord Whitewash” because he did exactly what his political masters wanted, which was that they should be exonerated. And yet another reason to call the then PM “Teflon Tony”.

Clive Robinson July 7, 2012 4:59 PM

@ Moderator,

Blundering about like a bull in a china shop apparently

Err no, and I’m sorry if that’s the impression I gave.

I’ve sometimes been asked why I don’t have my own blog, and my answer has fairly consistently been:

1, There is a lot of work involved with getting original content together.
2, Keeping on top of the related admin functions, including those related to security, is also a major issue (one that gets worse the more visibility/popularity a site has).

So no, I don’t envy the task, nor would I condemn anyone who works hard to provide such resources to others.

Also I’ve been caught out a number of times by functionality changes in software upgrades etc, so I’m aware that “smooth running” carries a significant overhead in testing of things that don’t make it into the manual, release notes or upgrade/patch info. Some I’ve even brought on myself by making a maintenance upgrade to software I had written myself some years previously (one particularly embarrassing one was ‘=’ instead of ‘==’ in a section of code that did low level memory management as part of garbage collection).

Further, I had the misfortune to once work for a company that had an online product with so many conflicting configuration options, changing almost weekly, that even the “programmers” could not tell you from day to day what might be affected. It was so bad that the “customers” were not allowed to change things; it was a job for the Tech Support staff, who received no training or support from the programmers. The programmers also had an attitude that “testing is for wimps”. Needless to say it was not a happy place, and the company appears to have reorganised itself virtually out of existence…

Ronnie July 7, 2012 10:01 PM

TrustChip (by KoolSpan) was mentioned in the July/August 2012 Technology Review.
http://www.technologyreview.com/tomarket/428250/secret-connections/
“No ordinary memory card, the TrustChip can upgrade any phone to make super-secure encrypted calls and data transfers—which would usually require expensive specialized mobile devices. Making encrypted connections requires the phone on each end to have a TrustChip installed in its memory slot. Apps can then route calls and messages via the chip and its 32-bit encryption processor. The product is aimed at organizations, like security services and banks, that worry about eavesdropping. ”

http://www.koolspan.com/trustchip/

KoolSpan was previously mentioned (in 2005)
http://www.schneier.com/blog/archives/2005/03/the_failure_of.html

Clive Robinson July 8, 2012 9:54 AM

@ Ronnie,

TrustChip (by KoolSpan)…

What worries me is the lack of information to decode what,

Apps can then route calls and messages via the chip and its 32-bit encryption processor.

Means in reality.

There is also the point that this device is not using the smartphone in the way intended and as such I can think of a number of attack vectors.

For instance, how does the app ensure a secure channel exists between the microphone and the SD card, and from the SD card to the earpiece/speaker? From what I can tell of the current leading phone OSes this is not going to be easy unless you have “root privileges”, which you can only get by “jailbreaking” the phone…

Also, how does the app stop the phone supplier from downloading an I/O device driver “shim” that effectively puts a “T-piece in the pipe work”, copying the sensitive data to some other hidden app or other hidden comms channel that is either “real time” or “store and forward”, as the CarrierIQ software did with keypress and other data…

So until a lot more detailed technical information on the design is released by KoolSpan, on which a sensible evaluation can be performed, I for one will treat the product as “unknown” at best, which in turn means I won’t be using it any time soon (if ever)…

Clive Robinson July 8, 2012 10:16 AM

@ Wael,

With regards the C-v-P issue, let’s take a sideways look for a moment.

As Nick P has pointed out, you can get a measure of security with the various software tool chains used to build the software for the Apps and OS.

Though with the way web browsers etc are used these days, they need to employ exactly the same techniques that OSes do for user separation/security, for individual window separation/security.

That is, as users no longer “log in to run a single app”, the security has had to rise into the application level, or the presentation apps need to sink down into the OS security layer, depending on your viewpoint.

However, there is still the issue that once malware, either put into the software during design or injected subsequently via flaws, has access, some or all of the RAM and secondary storage such as the HD etc becomes available.

Whilst HD’s can be encrypted there is for most implementations the issue of RAM and the encryption keys.

Have a look at these two papers,

1, TRESOR – http://www.usenix.org/event/sec11/tech/full_papers/Muller.pdf

2, CryptKeeper – http://tastytronic.net/~pedro/docs/ieee-hst-2010.pdf

And have a think about how TRESOR would improve CryptKeeper.

Whilst it won’t produce a fully encrypted RAM it will certainly reduce the attack surface considerably.
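
For a feel of how the two combine, here is a toy model in Python of the CryptKeeper idea: a small cleartext working set, with every page evicted from it held encrypted. The SHA-256 keystream is a stand-in for a real cipher, and the key sits in an ordinary variable, which is exactly the exposure TRESOR removes by keeping the key in CPU debug registers:

    # Toy CryptKeeper-style RAM: only a small working set is ever plaintext.
    import hashlib, os

    def stream_xor(key, page_id, data):
        # SHA-256 counter-mode keystream: a toy stand-in for AES, not secure.
        out = bytearray()
        ctr = 0
        while len(out) < len(data):
            out += hashlib.sha256(key + page_id.to_bytes(8, "big")
                                  + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(b ^ k for b, k in zip(data, out))

    class CryptKeeperRAM:
        def __init__(self, key, working_set=4):
            self.key, self.limit = key, working_set
            self.clear = {}   # small plaintext working set
            self.crypt = {}   # the bulk of "RAM", kept encrypted

        def write(self, page_id, data):
            self.clear[page_id] = data
            if len(self.clear) > self.limit:
                victim = next(iter(self.clear))          # evict oldest page
                plain = self.clear.pop(victim)
                self.crypt[victim] = stream_xor(self.key, victim, plain)

        def read(self, page_id):
            if page_id not in self.clear:                # "page fault"
                plain = stream_xor(self.key, page_id, self.crypt.pop(page_id))
                self.write(page_id, plain)
            return self.clear[page_id]

    # The key below lives in RAM, so a cold-boot attacker can still find it;
    # TRESOR's trick is to keep it in debug registers and never let it out.
    ram = CryptKeeperRAM(key=os.urandom(32))
    for i in range(8):
        ram.write(i, f"page {i}".encode())
    print(ram.read(0))                                   # b'page 0'
    print(len(ram.clear), "plaintext pages,", len(ram.crypt), "encrypted")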

Moderator July 8, 2012 12:36 PM

I’m sorry if that’s the impression I gave.

It wasn’t. My bull-in-a-china-shop comment was prompted by realizing I’d created the problem by “fixing” something that wasn’t, in fact, broken.

Wael July 8, 2012 5:33 PM

@ Clive Robinson,

The deepest sea area was reached (or descended to – what is the antonym of “reach” in that context anyway?) by a manned submarine with cameras and electronic sensors. Search for “Deepsea Challenger”. As for transistor biasing and heat compensation, and material properties under temperature and pressure variations, that can be designed for too, although I have not encountered any pressure limitations on solid state devices. I am a HW engineer by education and early career (discrete components, BJTs, FETs, …). I mainly worked on the analog side with high frequency (at the time) waveguides, microwaves, microstrip and slot antennae, Smith charts, so I “dig” what you say… Oh the good old days, when men were men (ok, and women were women) and people just wrote their own device drivers 🙂 Been stuck with software and firmware — what you call “code cutting” — since then…

As for C-v-P… So we can still talk about that. I promised Nick P I will drop that analogy, but I need a way out. Hmmmm! I see it as a model, not an analogy. Yea! That’s it! So out with the analogy. C-v-P is a model we can talk about, although you almost confused me when you referred to the Castle as a Fortress in a recent post. Stay tuned… Maybe we can sneak in an iteration or two while Nick P is on vacation 🙂

Clive Robinson July 8, 2012 8:48 PM

@ Wael,

I am a HW engineer by education… …been stuck with software and firmware since then…

Yup, me too, but I moved on from “software engineering” because the bosses wanted the software equivalent of “junk food tasteless pap” and I wanted to make “healthy enjoyable food”. And when it came to the choice of “take the money and run” or the “ethical approach”… well, let’s just say I walked… into a different career (which I’m probably going to do again soon).

[Imagine, if you can, a conversation between me and the MD of a fairly large organisation. He was bemoaning the deficiencies of Micro$haft products and how it was costing the company $X. I (somewhat peeved, as he had buttonholed me at a social function that was not work related) simply said “What do you expect, they are simply doing to you as their customer what you are doing to your customers”. Let’s just say it was not career enhancing.]

Anyway, the issue with pressure is mainly twofold: the first is the effect of static pressure, and the second significant changes in pressure. For instance, a JEDEC TO-33 style package with glass base seals, on the likes of many 1-5W transistors, will fail at quite moderate static pressure when compared to that which mil subs fairly routinely have on their pressure hulls. But if you look at the likes of cables, changes in pressure cause them to expand and compress, with similar life-shortening effects to bending (as well as changing their characteristic impedance sufficiently to cause return loss issues). And obviously the changes in cable diameter due to pressure directly affect the use and design of sealing glands into equipment housings.

Thus external sensors on submersibles are effectively special/custom designs, and in some cases it’s a custom package that is machined to take a standard part. I.e., some low cost ROV motors are internally actually the same as some bilge pumps, just in different housings, and many cameras are low light security-style cameras in specialised casings, whilst feedback systems on actuators are often custom designs including such extras as water ingress detectors.

The reliability of ROV-style submersibles has a strong correlation not just with depth of operation but also with the number of descents/ascents, and is why some systems you can only rent, and they come with a complete set of spares, tools and technicians along with the operators. When I was working in the Oil/Chemical industry I did some design on topside, head end equipment and dabbled in the electronics side; the guys doing the design for the ROVs etc were a fatalistic, morose lot who worked on the assumption it was all going to go horribly, horribly wrong the minute you went to sea. For them success was measured in sleep time (or the lack thereof) when offshore. I suspect that things have improved a bit in the past twenty years or so due to improvements in materials science etc, but a look at job ads for ROV/submersible technicians tells a few stories if you can read between the lines.

Ronnie July 8, 2012 9:12 PM

@ Clive Robinson
Thanks for the analysis – I thought it was interesting but didn’t delve into it too deeply. Maybe Bruce can use it as fodder for The Doghouse.

Nick P July 8, 2012 9:50 PM

@ ronnie and clive

I second Clive’s complaints about KoolSpan. I swear I debunked them with many of the same points a while back on this blog but couldn’t find the link. Might have found the criticisms too redundant to post. 😉

Anyone wanting smartphone encryption, talk to OK Labs or INTEGRITY Global Security. Both claim to have solutions that utilize microkernel platforms to isolate the untrustworthy main OS from security-critical code. I’d assume a smartphone-based solution is insecure though.

One can, however, make a “mobile” semi-secure comms solution pretty easily. It’s mobile in the sense that it takes up little space. Red-black design. An untrusted phone or laptop is fine for the Black transport layer. Red is two logical units: an interface/voice/text part & a security (e.g. crypto) part. Basic design is to run VOIP with careful compression over IPSec/TLS on hardened OpenBSD, wired to the Black device with non-DMA & a careful protocol. Both parties need the device. Far from perfect, but more secure than any smartphone solution & supplier neutral.

Ronnie July 8, 2012 11:15 PM

We need an official Tor discussion forum.

I didn’t see this issue mentioned in Roger’s latest notes post, so for now, mature adults should visit and post at one or both of these unofficial Tor discussion forums. These TinyURLs will take you to:

** HackBB:
http://www.tinyurl.com/hackbbonion

** Onion Forum 2.0
http://www.tinyurl.com/onionforum2

Each tinyurl link will take you to a hidden service discussion forum. Tor is required to visit these links; even though they appear to be on the open web, they will lead you to .onion sites.

I know the Tor developers can do better, but how many years are we to wait?

Caution: some topics may be disturbing. You should be eighteen years or older. I recommend you disable images in your browser when viewing these two forums[1], only enabling them if you are posting a message, but still be careful! Disable javascript and cookies, too.

If you prefer to visit the hidden services directly, bypassing the tinyurl service:

HackBB: (directly)
http://clsvtzwzdgzkjda7.onion/

Onion Forum 2.0: (directly)
http://65bgvta7yos3sce5.onion/

The tinyurl links are provided as a simple means of memorizing the hidden services via a link shortening service (tinyurl.com).

[1]: Because any content can be posted! Think 4chan, for example. onionforum2 doesn’t appear to be heavily moderated so be aware and take precautions.

DNSCrypt for Linux, Windows, Mac (from opendns.com)

“In the same way the SSL turns HTTP web traffic into HTTPS encrypted Web traffic, DNSCrypt turns regular DNS traffic into encrypted DNS traffic that is secure from eavesdropping and man-in-the-middle attacks. It doesn’t require any changes to domain names or how they work, it simply provides a method for securely encrypting communication between our customers and our DNS servers in our data centers. We know that claims alone don’t work in the security world, however, so we’ve opened up the source to our DNSCrypt code base and it’s available on GitHub”

https://www.opendns.com/technology/dnscrypt/

https://github.com/opendns/dnscrypt-proxy/blob/master/README.markdown
https://github.com/opendns
https://blog.opendns.com/2012/05/08/dnscrypt-for-windows-has-arrived/
http://techcrunch.com/2011/12/05/dnscrypt-encrypts-your-dns-traffic-because-theres-always-someone-out-to-get-you/
http://www.h-online.com/security/news/item/DNSCrypt-a-tool-to-encrypt-all-DNS-traffic-1392283.html
http://blog.opendns.com/2012/02/06/dnscrypt-hackers-wanted/
https://www.linuxquestions.org/questions/debian-26/dnscrypt-930439/

Petréa Mitchell July 9, 2012 11:53 AM

Autolykos:

Well, they do keep saying the Army needs to adapt more to fighting in urban environments.

Petréa Mitchell July 9, 2012 11:59 AM

andrews:

A large reason the USMC changed patterns was because we wanted to distinguish ourselves from the other services. The “enemy” also understands the differences between the army and Marines.

Wouldn’t it be better to all wear the same uniform and let the enemy go nuts wondering, then, rather than helpfully pointing out who they should shoot at first?

WiskersInMenlo July 9, 2012 1:10 PM

Check out this video on YouTube for a squidliceous musical interlude.

http://www.youtube.com/watch?v=1pJPnZvFSy5o&feature=youtube_gdata_player

Pulsating Pigment Cells of a Dead Squid Are More Beautiful Than You’d Think
Chromatophores are muscle-controlled pigment-filled cells that allow cephalopods to blend in with their surroundings or even communicate with others. Now, you can see the cells expanding and contracting up close in this mesmerizing video Michael Bok, a graduate student at the University of Maryland, filmed of a dead Longfin Inshore squid.

Nick P July 9, 2012 1:48 PM

@ ronnie

Thanks for the links. I might visit them if I dare put Tor on my machine again. Fact is, I don’t trust it. Tor has had numerous real and proposed weaknesses. There’s also a large academic community focused on finding more. Personally, I like the Freenet design a bit more, but it’s Java. (wth were they thinking?) There are better (cheap & free) alternatives to Tor that are a bit more involved, but faster & more trustworthy.

I’ve been thinking about setting up a little Tor VM just for visiting Onion sites, not anonymity per se. Put a proxy listener in there, too. Then, just redirect my browser through it when it’s loaded. Might do something similar for Freenet.
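
That redirection is simple to sketch. Assuming Tor’s standard SOCKS5 listener on 127.0.0.1:9050 and the third-party PySocks module (successor to the older SocksiPy), something like this reaches a hidden service with the .onion name resolved inside Tor, not by the local resolver (the address is the HackBB one Ronnie posted above):

    # Reach a hidden service through Tor's standard SOCKS5 listener
    # (127.0.0.1:9050). Needs the third-party PySocks package.
    import socks  # pip install PySocks

    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, rdns=True)
    s.connect(("clsvtzwzdgzkjda7.onion", 80))   # name resolved inside Tor
    s.sendall(b"GET / HTTP/1.0\r\nHost: clsvtzwzdgzkjda7.onion\r\n\r\n")

    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk
    s.close()
    print(response.split(b"\r\n\r\n", 1)[0].decode())   # headers only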

Wael July 9, 2012 2:17 PM

@ Clive Robinson

Read the first paper… Second one, glossed over.

“Whilst it won’t produce a fully encrypted RAM it will certainly reduce the attack surface considerably.” — True.

Important phrase to be noted: “Reduction of the Surface of Attack” which is one defense (defence for you) strategy!

Nick P July 9, 2012 3:04 PM

I’m sure most of you have heard of Apple’s Siri. I’ve told iPhone addicts that there were quite a few Siri-like products out there, including one I was considering using from an AI lab many years ago, although I forget which it was (maybe MIT). Siri was less invention & more good implementation, integration & marketing. Great product, no doubt, just less credit to Apple for the concept than they claim.

Well, unsurprisingly, Apple is getting sued for Siri by (gasp) a Chinese firm. It already gave $60 mil in cash for using the name iPad. Now, this company seems to want either cash or to simply block its competitor in China.

http://www.techweekeurope.co.uk/news/china-apple-siri-lawsuit-85398

I figured this would be the right blog to ask a funny question: anyone else see the laughable irony in a Chinese company suing over intellectual property abuse? 😉

Wael July 9, 2012 4:51 PM

@ Nick P

” anyone else see the laughable irony in a Chinese company suing …”

In some countries, “NDA” stands for New Data Available 🙂

Wael July 9, 2012 5:55 PM

@ Nick P

“I’m sure most of you have heard of Apple’s Siri. I’v …”

Well, unsurprisingly, Apple is getting sued
for Siri by (gasp) a Chinese dude
It already gave $60 mil in cash
for using the name iPad in a flash
He wants a block in China or some cash for food

🙂

GIMP July 9, 2012 7:58 PM

@ Nick P

“I might visit them if I dare put Tor on my machine again. Fact is, I don’t trust it. Tor has had numerous real and proposed weaknesses. There’s also a large academic community focused on finding more.”

I don’t trust it either but it works well for its stated goal.

Most any code has had numerous real and proposed weaknesses, and it’s in corporations’ and governments’ favor to drill holes in any privacy/security tools and subvert them.

Here was one ugly incident:

anonymous [dot] livelyblog [dot] com/2012/04/10/linux-bug-compromises-tor-users-makes-list-of-all-sites-the-user-has-visited/

“Personally, I like the Freenet design a bit more, but it’s Java. (wth were they thinking?) There are better (cheap & free) alternatives to Tor that are a bit more involved, but faster & more trustworthy.”

Free net? What people say is it’s slow and filled with a lot of illegal content. That is certainly not for me!

What are the other (better) free alternatives to Tor which are faster and more trustworthy? (and not grey or black hat related) And are they open source? Been subject to a lot of peer review? Have no history of back doors? Are they legal for all to use? Do they have a large user base? The Tor Metrics page fluctuates around 500k clients. Without a large user base, you don’t blend into the noise of others very well, your activities stand out more and this damages your privacy/security.

Please post links here to “better” free services.

Nick P July 9, 2012 9:55 PM

@ gimp

Hint: most revolve around wifi hotspots and covert use of them. In practice, this concept has worked well for around a decade now. It doesn’t rely on esoteric mathematics either.

Autolykos July 11, 2012 4:48 AM

@Nick P: That’s actually far less anonymous/secure than you might think. Depending on the setup, even other users on the same WiFi can see what you do. And you’ll always have to trust the guy who set up the network.
With Tor, the weaknesses get studied, published and fixed. With some random schmuck’s WiFi – probably not.

Ronnie July 11, 2012 12:02 PM

http://arstechnica.com/tech-policy/2012/07/op-ed-tsa-should-follow-the-law/

A year ago this coming Sunday, the US Court of Appeals for the DC Circuit ordered the Transportation Security Administration to do a notice-and-comment rulemaking on its use of Advanced Imaging Technology (aka “body-scanners” or “strip-search machines”) for primary screening at airports. (The alternative for those who refuse such treatment: a prison-style pat-down.) It was a very important ruling, for reasons I discussed in a post back then. The TSA was supposed to publish its policy in the Federal Register, take comments from the public, and issue a final ruling that responds to public input.

So far, it hasn’t done any of those things.

So on Monday, I started a petition on Whitehouse.gov. It says the president should “Require the Transportation Security Administration to Follow the Law!”

The petition says:
Defying the court, the TSA has not satisfied public concerns about privacy, about costs and delays, security weaknesses, and the potential health effects of these machines. If the government is going to “body-scan” Americans at U.S. airports, President Obama should force the TSA to begin the public process the court ordered.

Getting 25,000 signatures requires the administration to supply a response, according to the White House’s petition rules.

The response we want is legal compliance. The public deserves to know where the administration stands on freedom to travel and the rule of law. While TSA agents bark orders at American travelers, should the agency itself be allowed to flout one of the highest courts in the land? If the petition gets enough signatures, we’ll find out.

Nick P July 11, 2012 1:36 PM

@ Autolykos

“That’s actually far less anonymous/secure than you might think. Depending on the setup, even other users on the same WiFi can see what you do. And you’ll always have to trust the guy who set up the network.
With Tor, the weaknesses get studied, published and fixed. With some random schmuck’s WiFi – probably not.”

I said it “revolves around wifi hotspots and covert use of them.” I didn’t say using wifi hotspots was the method. I’ve intentionally left parts out because (1) the methods have worked for years and (2) the obfuscation helps. In case you’d like to guess at it, here are some of the threats my schemes try to counter:

  1. IP tracing
  2. [Meaningful] browser-level profiling
  3. Remote attacks on OS or browser
  4. Persistent malware infection
  5. Analysis of internal network traffic
  6. Fake AP’s
  7. Wireless signal tracing to source computer.

The more things you want to counter, the more cumbersome and costly it gets. However, little about the countermeasures is a black box: you can be pretty sure each does what it’s supposed to do. Anonymity software, less so.

(Although, Tor is one of the best options for free, anonymous web surfing if you decide to go that way. Just have to take some additional precautions and be OK with SLLLOOOOOWW access.)

Nick P July 11, 2012 6:00 PM

@ Clive Robinson

So, about that method of sending you a message other than this blog… how’s that been coming?

I have a suggestion to save you (err, me) time on solving this one. I know you have an email address you haven’t posted. So, how about you set up a 2nd one that’s public & use a whitelisting scheme to only pull/forward/whatever messages from addresses you recognize. That way, we can get a semi-private message to you and still not know your private address.

A free web site, online message submission form, etc. could be used to do this as well. Might make it easier to script the whitelist. I haven’t done web programming in a while, but I might throw something like it together anyway. Quite a few potential re-uses down the road. Maybe even make it mobile friendly haha.

Many potential hosts
http://www.free-webhosts.com/free-php-webhosting.php
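
A minimal sketch of the whitelist side, assuming an IMAP-accessible public mailbox; the server, credentials, and addresses are all placeholders:

    # Whitelist filter for a public mailbox: keep mail from known senders,
    # flag everything else for deletion. Server/credentials are placeholders.
    import imaplib
    from email import message_from_bytes
    from email.utils import parseaddr

    WHITELIST = {"nickp@example.org", "wael@example.org"}

    conn = imaplib.IMAP4_SSL("imap.example.org")
    conn.login("clive-public", "app-password")
    conn.select("INBOX")

    _, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        msg = message_from_bytes(msg_data[0][1])
        sender = parseaddr(msg.get("From", ""))[1].lower()
        if sender in WHITELIST:
            print("keep:", sender, "-", msg.get("Subject", ""))
        else:
            conn.store(num, "+FLAGS", "\\Deleted")   # drop strangers
    conn.expunge()
    conn.logout()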

Wael July 11, 2012 6:27 PM

@ Nick P

“bout that method of sending you a message other than this blog… ”

Why don’t you two eat some of your own dog food and do a manual DH key exchange on this blog?
Don’t worry about an Active MiM attack …

Or perhaps the moderator can facilitate a method for private replies, a script that allows two parties to do that DH key exchange, or even an email service at a nominal fee (with no advertisements).

@ Moderator:
O great Moderator, make that happen 🙂

Nick P July 12, 2012 12:05 AM

@ Wael

Ah, but how do I know it’s actually his key? What he’s said is Googleable & people can impersonate him. Bruce has his email & the mod probably has his IP. Both are better for authentication & I have no reason to think they’d lie to me. He or I suggested having one of them send his email address to my registered email, but that didn’t happen for reasons I can’t remember. I’d rather not burden them anyway. After all, if Clive really wanted, we could have pulled this off by now.

Hence, my newest proposal. This is the 21st century. That Clive doesn’t have a public email for these situations is… he should have one. Throw in some whitelisting & have him only check it when he’s expecting a message, then it is quite convenient. Or a web page thing if he doesn’t like that. And it probably won’t get autodeleted for inactivity.

Hey, I’m trying on my side. Clive has plenty to say to people on many blogs. The kinds of people that might want to talk to him or use his expertise productively might need privacy. Or just say what they feel like saying with no restrictions or possible deletions. One-on-one medium, at least. Hence, he should have a way to do that easily [like the rest of us].

Wael July 12, 2012 12:30 AM

@ Nick P

“Ah, but how do I know it’s actually his key?”
There is a reason I said don’t worry about Active MiM attack 🙂

Wael July 12, 2012 6:18 PM

@ Nick P

There is a reason I said don’t worry about Active MiM attack 🙂

Strange! You did not ask. Here it is anyway:

There are a couple of kinks introduced to this “problem” that change the dynamics a bit.

So normally DH-M is immune to Passive MiM attacks, but vulnerable to Active MiM attacks — your concern. One kink is the ability of Alice to ask Bob, and vice versa, to do something “visible” on the blog, with two consequences:
1- An Active MiM attack can achieve a DOS at best
2- A Passive MiM attack can also achieve a DOS — something a normal use-case DH-M is immune to (I think)

Passive here means:
1- Read-only of the protocol communication on the Blog (does not hand out any prime numbers to either Alice or Bob)
2- Ability to write to the Blog as an imposter (causing a denial of service)

And active means
1- Participation in the communication protocol as an imposter (pretends to be Alice to Bob, and Bob to Alice)
2- Ability to write to the Blog as an imposter (causing a denial of service)

So, Alice and Bob agree on a key
They don’t send any confidential stuff yet.
Alice sends Bob a message on the blog asking for the hash of the key, and Bob does the same.

If there is an Active Eve in between, the hashes will not match. So a MiM (or WiM in this case, since Eve is a feminine name) will not succeed.

If neither Alice nor Bob hears any complaints from the other, then they are OK. Complaints would take the form:

Alice -> Bob on the blog:

@ Bob
Here is my prime number: 3

Alice –> Bob:
@ Bob
Hey! That was not me, I did not post that number. Freakin’ Eve is at it again …

Or alternatively Passive Eve can post:
@ Bob
Eve -> Bob:
Hey! That was not me, I did not post that number. Eve is at it again … (She doesn’t want to curse herself)

Also, a Passive or Active Eve can post a bogus hash to cause a DOS (inability for Alice and Bob to exchange a key), and that is where the other kink lies …

However, I may act as Eve should this contrived communication take place, because I want to read what you guys are talking about 🙂

I did not verify my thoughts here much, so it all may be broken …
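
For what it’s worth, the exchange Wael describes fits in a few lines of Python. A toy sketch: textbook DH with a deliberately small prime, plus the public hash check; as noted above, an Eve who can also forge the hash posts still gets her DoS:

    # Textbook DH over a public channel, plus the key-hash check.
    # The prime is toy-sized for readability; real DH needs a large one.
    import hashlib
    import secrets

    p, g = 2_147_483_647, 5            # public parameters

    a = secrets.randbelow(p - 2) + 1   # Alice's secret
    b = secrets.randbelow(p - 2) + 1   # Bob's secret
    A = pow(g, a, p)                   # Alice posts A on the blog
    B = pow(g, b, p)                   # Bob posts B on the blog

    k_alice, k_bob = pow(B, a, p), pow(A, b, p)
    assert k_alice == k_bob            # same key if nobody interfered

    def key_hash(k):
        return hashlib.sha256(str(k).encode()).hexdigest()[:16]

    # The "kink": both post a hash of their key in public.
    print("Alice posts:", key_hash(k_alice))
    print("Bob posts:  ", key_hash(k_bob))

    # An active Eve substitutes her own values, so each side really shares
    # a key with Eve; the posted hashes then disagree and she gets, at
    # best, a DOS (unless she can also forge the hash posts).
    e = secrets.randbelow(p - 2) + 1
    k_ae = pow(pow(g, e, p), a, p)     # what Alice ends up with
    k_be = pow(pow(g, e, p), b, p)     # what Bob ends up with
    print("Hashes match under MiM?", key_hash(k_ae) == key_hash(k_be))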

Nick P July 13, 2012 1:25 AM

@ Wael

As it sometimes is, it was a hurried reply. Without analyzing your reasoning (please don’t get offended) & appreciating the time you put in, I have to ask: what is the point of all that if the other party is unwilling to even give a public email? (Much less time/effort/mentally consuming by comparison…)

Clive always seemed to be a bit eccentric or unusual. No problem with that, of itself. However, those types of people do have a hard time bringing themselves to do the minimum of social expectations, huh? 😉

Wael July 13, 2012 1:57 AM

@ Nick P

No offense taken. I was just reading through old posts and came across one from 2010 about funny questions and answers. Saw your long reply (“I am Clive Robinson Biatches…”) and thought it was pretty funny. It also proved your point that it is easy to impersonate some characters here… because when I started reading that post I was thinking: Hmm! I am already lost, and I am only in the first sentence, and there are 10 more pages to go. It’s gotta be Clive 😉 I was shocked to see it was you 🙂

Sometimes chatting over private channels is boring, and gives you a false sense of privacy — You know, you might as well make it easier for them® and write in the open. Kiss privacy goodbye 😉

Nick P July 13, 2012 8:09 PM

@ Wael

LOL! I needed that. Totally forgot about that. Hope you saw Clive’s reply, as it was the exact reaction I was looking for. Well, the first part: I just saw the 2nd one and now I feel guilty for not continuing the discussion after the poor guy typed all that stuff on a tiny phone. I’ll have to give him his reply in the near future.

Nick P July 13, 2012 10:11 PM

@ Wael edit

Oops. I felt guilty too soon. I went back over that thread to check whether I’d missed replying to Clive. He had read something someone else wrote, assumed it was me, replied & the dude disappeared. Explains why “my” statements looked alien to me. I’m in the clear, for once haha.

David July 13, 2012 10:19 PM

@Nick P

Speaking of Clive, where is he? Haven’t seen a post from him in about a week.

Wael July 13, 2012 10:56 PM

@ Nick P, David

No worries, Nick P. You can feel guilty another day 😉

@ David,
I was also wondering where Clive Robinson is. Maybe Nick P can ping him for us on Clive’s private email 🙂

Nick P July 14, 2012 1:43 AM

@ concerned of Clive Robinson

His last post was July 2012, Google says. He’s recovered from far worse. If you must worry prematurely, pray for him. Otherwise, just don’t. That geezer will show up to answer some questions or make some smart dude look uneducated. He will if he is still alive, as he enjoys that stuff too much. Mark my words. 😉

Wael July 14, 2012 6:00 PM

@ Nick P

Well, when should we start getting concerned? 🙁
I don’t mind Clive making me look uneducated as long as he is okay. What is the longest he has abstained from showering us with his wisdom?

Nick P July 14, 2012 9:08 PM

@ Wael on Clive

I can’t remember how long it was. He has serious medical issues and spends time in the hospital. I’d say give him some prayers and wait a week or so.

Clive Robinson July 14, 2012 9:24 PM

@ Wael, David, Nick P,

I’m still alive, and have returned today from lodging with the medical profession once again. You know you are “not well” when you are on first name terms not only with the Drs, Nurses and Porters at your local hospital but also with their children and in some cases grandchildren, from having said hello to their parents in the street (the nice thing is they don’t ask “how are you feeling”).

@ Nick P,

or make some smart dude look uneducated

Ouch… I look on it more as “broadening their outlook with a different viewpoint”. For one thing, I’m certain there are a lot of brighter people than me around; it just appears that “I’ve got around” a bit more than they have, so like “any old dog” I’ve learnt a few tricks in my time 😉

I know you are occasionally amused/dismayed by journalists and their antics when it comes to security. But whilst we tend to think of it academically or in a detached sort of way, sometimes their lack of ITSec skills genuinely costs lives,

http://www.cjr.org/feature/the_spy_who_came_in_from_the_c.php?page=all

And to be honest, I don’t think I’m up to totally locking down a consumer grade laptop computer/mobile phone beyond the capabilities of a type three opponent, if it can even be done (which I doubt). I even doubt many type three agencies are capable of it either, unless they have a significant “in” with the manufacturers involved.

I’m also well enough aware that such a device, if found on you (and it very probably will be), is going to attract rather more attention than you would want. Likewise, specialised hardware that is potentially secure is going to raise an even bigger stink than ripe Brie in the full heat/glare of the midday Mediterranean summer sun.

Back in the old days of “fieldcraft training” they used to advise you “to keep yourself clean and unfragrant” so that your “back story/cover” would not be compromised or the “officers” of the opposition “smell you out”. Thus you kept hard technology to a minimum, and instead relied on what was “in your head” and, where required, additional field support personnel via appropriate cut-outs and dead letter boxes etc.

Modern journalists don’t have the luxuries of resources, time or support today, and modern TV hard news requirements make it a lot worse than it used to be.

In this particular case of the filming, a few simple precautions would have gone a long way, such as head scarves etc to hide identity before recording video, using an “anonymous room” to film in, and not recording interviewees’ voices onto digital media but onto old fashioned analog tape, then burning the tape after transcribing it word for word so it could be “voiced in” by an actor at the production stage. Whilst the first bits are fairly easily done, the issues of recording/transcribing the voices of the journalist and the interviewed person need time, resources and skill that are not “field available” these days in 24-hour news.

One advantage for these foreign level three agencies is that the countries for whom they work have “jumped over POTS” and gone straight to fully digital mobile phone technology, which makes intercepting and recording data almost trivial and allows statistical analysis of call patterns etc sufficiently easily that use of digital phone technology should be considered “suicidal” at almost any level, no matter what precautions you take (a point that was not lost on OBL, and even he paid the price of insufficient communications security in the end).

It’s a hard problem and is getting harder by the day as new tracking technology becomes rapidly available (at a very profitable price) to level three agencies the world over and it is not helped by the likes of Chinese and Israeli telecoms hardware provider companies building in the tracking facilities from the silicon upwards.

Wael July 14, 2012 9:38 PM

@ Clive Robinson

The first thing that came to my mind is Holy S..t, my prayers have been answered.

@ Nick P
“as he enjoys that stuff too much. Mark my words. ;)”
Your words were marked. You are correct, Sir 🙂

Wael July 14, 2012 10:13 PM

@ Clive Robinson

“I’m also well enough aware that such a device, if found on you (and it very probably will be), is going to attract rather more attention than you would want. Likewise, specialised hardware that is potentially secure is going to raise an even bigger stink”

Interesting; seems like steganography is going to gain more importance than cryptography in some situations.

Clive Robinson July 14, 2012 11:58 PM

@ Wael,

Speaking of re-reading comments in posts etc, I realised I’ve not answered,

As for C-v-P… So we can still talk about that. I promised Nick P I will drop that analogy, but I need a way out. Hmmmm! I see it as a model, not an analogy. Yea! That’s it! So out with the analogy. C-v-P is a model we can talk about, although you almost confused me when you referred to the Castle as a Fortress in a recent post. Stay tuned… Maybe we can sneak in an iteration or two while Nick P is on vacation

First off, I’m not sure Nick does “vacation” in the normal sense; I get the vague feeling “alligator wrestling” is his idea of a nice quiet time 😉 [1]

But more to the point, yes, in many respects it is a model. As for the names, well, a couple of reasons. Firstly, somebody has already used the title “The Cathedral and the Bazaar”, and it’s a catchy title, so Castle-v-Prison is, name-wise, following in its footsteps as it were. Likewise we used to talk of “Bastion Hosts” years ago, and at the end of the day a Bastion is a hardened and defended point, which both C&Ps are (so the naming is semantically relevant).

Secondly, and more importantly, it reflects the mind set of system design.

But this is where the fun starts… we use the word “malware” to cover a multitude of sins. As a very loose definition it is “software that causes a system to perform actions not intended by its owner”. The reality of malware is there are a few basic types,

1, Software injected from outside during operation.
2, Software added during design and build.
3, Firmware that makes a system untrustworthy.

As an “overly general rule” the IT security industry currently tends to think only of the first two types, and assumes that the hardware system we own is inherently trustworthy. Academically, however, we know that a single “Turing Engine” cannot be 100% trustworthy, simply because the reality is it cannot test itself reliably, and thus when viewed as a “black box” it can lie to us and we won’t know. The US military, amongst others, has woken up to this unpleasant little truth about COTS equipment and talks about “supply chain” poisoning.

Thus we have a significant problem: how do you build trustworthy systems on untrustworthy hardware? An English language idiom for an apparently futile task is “to build your castle on shifting sands”. Which is a little odd, because we have known since Henry VIII’s time that you can build castles on a lot worse, such as water… a naval vessel is at the end of the day a floating castle through which “power is projected”. We also know from the Napoleonic era that “floating hulks” can become “Prisons” as well.

From the outside looking inwards, C&Ps look very much alike irrespective of what they are built on, in that they are designed to keep out the “uninvited”.

Thus to type 1 malware the systems are no different externally.

However, when you get inside a Castle it is a lot different from its exterior. Castles tend to be built on the notion that those “invited” in are trustworthy, and the internal defensive measures are still directed at the “uninvited”, not the “invited”. Further, for those invited, the environment within the Castle should be relatively comfortable.

When you get into a prison, one of the first things you notice is that the defences are aimed not just at the “uninvited” but very much, if not more so, at the “invited”. That is, invited or not, the prison considers all to be extremely untrustworthy. A prison uses strong segregation/separation and minimal functional environments as its basic mechanism to ensure the security requirements.

Thus to type 2 malware the systems are very different, and as such even invited software finds the system hostile to all but the owner’s requirements.

Now the problem with type 2 malware is that it comes as part of the package, and the limitations on it are in most cases absolutely minimal to nonexistent. Even development tool chains won’t stop this if the attacker can get the malware functionality included in the design specification as part of, say, the “run time test environment”. The most obvious recent example being the CarrierIQ test environment…

The problem is that applications are actually developed to be loaded too far down the stack. That is, usually the end result is a compiled program of machine-executable instructions that sits way below the threshold at which the code can sensibly, conveniently or efficiently be monitored.

One of the ideas behind CvP is that the majority of programmers cannot be trusted to write secure code. As Nick P has frequently pointed out, there are many tool chains that produce code that, whilst not perfect, is way, way ahead of the run of the mill code cut in most app shops.

Without getting into the politics of it, those that can code securely are a very, very scarce resource, and finding a person with all the requirements is about as likely as finding a diamond stuck between a hen’s teeth.

Thus sensibly you should try to leverage their talents across as wide a front as is possible. Back many years ago this problem was solved by the simple expedient of employing them to write the OS and code libraries on which single-use apps would run. Unfortunately the world has moved on, and OS level security is far from sufficient, with apps like web browsers being effectively the equivalent of an insecure OS like CP/M or DOS, with a single memory space in which every user task runs at the same privilege level and little or no attempt is made to separate task information from other tasks.

Thus the follow-on idea was to provide “re-usable code”, not in the current insecure form of a code library but as a separate task that sits in its own secure environment and has its input and output “pipelined” to other tasks. Thus it is similar to *nix shell scripting, but with much stricter segregation and monitoring.

This way the development tool chain would stop well above the executable code level in the stack; it would be at the level of secure tasks. Some scripting languages like Perl have similar design philosophies, but the approach has been to go “monolithic” for efficiency reasons. This makes effective security monitoring next to impossible.

This is because one of the major inefficiencies in any CPU environment is “task switching”. That is, whilst the CPU might have a relatively well-defined and efficient “context switch” from background normal operation to foreground interrupt handling, extending this to tasks is relatively inefficient. It also opens a whole load of security risks at the same time.

Thus eliminating task switching would add greatly to the efficiency and security of the design.

Further, "well found" tasks don't actually require the heavyweight resources modern high-transistor-count CPUs bring, or the complex instruction sets that come with them. In fact much of the time much of a modern CPU is idle, which is actually quite inefficient.

So the idea was to use many lightweight RISC CPUs and not task switch them. Each CPU would live in its own virtual world behind the Memory Management Unit which, unlike in current designs, would not be controlled by the CPU itself but by a hypervisor. Thus the MMU becomes the prison walls within which the CPU is jailed. Its resources would be strictly controlled by the hypervisor and, as previously discussed, each task would get the minimum of required resources and be subject to signature analysis.

So the prison model is designed to work like a massively parallel system of CPUs, with small tasklets running on each CPU which pipeline their results not to each other but to and from the CPU hypervisor.
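
As a rough sketch, and only a sketch, the control flow might look something like the following Python, where the Tasklet and Hypervisor classes are purely illustrative stand-ins for the hardware described above: the resource grant is reduced to a simple output-size limit, and a hash of each output stands in for the signature analysis.

```python
# Hypothetical sketch of the "prison" execution model: each tasklet runs
# in its own jail with minimal resources, and all inter-task data flows
# through the hypervisor, which records a digest of every result as a
# crude stand-in for signature analysis. Names are illustrative only.
import hashlib

class Tasklet:
    def __init__(self, name, func, max_output_bytes):
        self.name = name
        self.func = func                          # the tasklet's only capability
        self.max_output_bytes = max_output_bytes  # minimal resource grant

class Hypervisor:
    def __init__(self):
        self.audit_log = []

    def run_pipeline(self, tasklets, data):
        # Results never pass tasklet-to-tasklet; the hypervisor mediates.
        for t in tasklets:
            out = t.func(data)
            if len(out) > t.max_output_bytes:
                raise RuntimeError(f"{t.name}: exceeded resource grant")
            # Log a digest of what each tasklet produced for later
            # anomaly checking against its expected "signature".
            self.audit_log.append((t.name, hashlib.sha256(out).hexdigest()))
            data = out
        return data

hv = Hypervisor()
pipeline = [
    Tasklet("upper", lambda b: b.upper(), 64),
    Tasklet("strip", lambda b: b.strip(), 64),
]
print(hv.run_pipeline(pipeline, b"  hello, warden  "))
```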

However, although this would provide a significantly more secure environment to help prevent type 2 malware, it does not tackle type 3 malware as currently described. It does, however, allow tackling it to be much more easily facilitated than on single heavyweight CPU systems.

I’ll stop at this point with “any questions?”

[1] For those not up on certain English/US idioms, there is a saying which in the UK starts with "Sometimes when you go to drain the swamp..." and refers to the problem of being side-tracked from your objective. Normally most engineers etc. know what you mean from just the first few words, and you don't need to say the whole phrase to get a knowing nod. The US version is shorter and more to the point: "When you're up to your neck in alligators, sometimes you forget that your mission is to drain the swamp" (this is the polite version; the more common usage uses another, lower part of the anatomy than the neck, with the further unsaid idiom of things being a considerable pain there 😉)

Nick P July 15, 2012 12:41 AM

@ Clive Robinson on journalists & opsec

Good to see you, buddy. And I see you’re jumping past the personal straight to interesting topics, as usual. 😉

Nice article. I might have to share that with a few people. I agree that modern types could learn plenty from the older folks. I’m not totally sold on dodging digital, but it’s admittedly way easier to subvert things these days. Much harder to secure modern methods, as well. Yet, Anonymous and Wikileaks show it can work in practice if you do it right. (For a while, at least haha.)

"And to be honest, I don't think I'm up to totally locking down a consumer grade laptop computer / mobile phone beyond the capabilities of a type three opponent, if it can even be done (which I doubt). I even doubt many type three agencies are capable of it either, unless they have a significant "in" with the manufacturers involved."

Agreed.

Wael hit on one of the more obvious points with steganography. I'm skeptical about stego, though. My MO for this stuff, like others', was just to put material on media or computers in ways that are easy to hide. There are books to help with the hiding, as we both know. Also, if I used encryption, I'd try to make it require a remote, extra key wherever possible. This is to resist rubber-hose cryptanalysis. (Maybe we should call it fingernail cryptanalysis, to be more accurate, eh?)

I'd also rather stego the SYSTEM. You recently reminded me of the AES scheme that avoids RAM. It gave me a different idea, though. We know there are plenty of black boxes in a COTS desktop or laptop. We know there are many chips that have their own code & memory. How about we modify one to hide the stuff? Of course, we might have to do COMPUSEC on the main system to prevent subversion. Let's say it was unlikely they'd subvert the system, or it was protected well. Then they just grabbed it and started looking for stuff. Hide sensitive info in an onboard chip that's supposed to be there, maybe one with extra functionality (GPU comes to mind), and modify the firmware of the main CPU to retrieve it during a certain key sequence. The main point of using black boxes & main CPU onboard memory is that it should be harder to extract without uncommon expertise. What do you think?

"Modern journalists don't have the luxuries of resources, time or support today, and the modern TV Hard News requirements make it a lot worse than it used to be."

That makes it sound pretty inevitable. I don't like that. I think they could do way better with a good approach and hardly any extra resources. Let's take a specific example: voice recording on tape, transcribing, and burning it. So, how pressed is the journalist to get word out from a source? Does he really have no time to anonymize it? (I doubt that.) More likely, he only has so many resources. So, what to do? Well, he could record the conversation onto a TrueCrypt volume with a random strong passphrase (written down). He could physically transcribe it all, or use speech-to-text by reading it, then burn the paper with the key & delete the volume. (RAM disk if it's a short conversation.) You've got to make sure no extra copies are made in the background by apps, but barring that, this is decent for making sure nothing is there to pick up.
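
A minimal sketch of that workflow, assuming the third-party Python "cryptography" package and illustrative file names; it sketches the encrypted-volume idea with simple authenticated file encryption rather than an actual TrueCrypt volume:

```python
# Sketch of the recipe above: encrypt the recording under a strong random
# key, keep the key only on paper, and destroy both when done. Uses the
# third-party "cryptography" package; file names are illustrative.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # write this down, then burn the paper
f = Fernet(key)

with open("interview.wav", "rb") as src:
    token = f.encrypt(src.read())    # authenticated encryption of the audio

with open("interview.enc", "wb") as dst:
    dst.write(token)

# After transcription: destroy the written key and the .enc file.
# Decryption, while the key still exists:
#   Fernet(key).decrypt(token)
```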

In that example, we can also make purpose-built voice recorders, Raspberry Pi-like devices, whatever. I’m sure this isn’t the only example where it would cost virtually nothing & take an acceptable amount of time/energy to maintain OPSEC. So, I think they could do way better if taught how & provided reasonable methods.

"gone straight to fully digital mobile phone technology, which makes intercepting and recording data almost trivial and allows statistical analysis of call patterns etc sufficiently easily that use of digital phone technology should be considered "suicidal" at almost any level no matter what precautions you take (a point that was not lost on OBL, and even he paid the price of insufficient communications security in the end)."

You seem right on, there. There’s not a single “mobile security” solution the two of us haven’t mostly shot down. It’s a ground up thing & ARM/SOC-style COTS just can’t be trusted. I’ve designed and posted “mobile” (read: you can carry it) solutions in the past. However, it might take some convincing to get journalists to do it. It also draws too much attention when it sticks out like a sore thumb.

“It’s a hard problem and is getting harder by the day as new tracking technology becomes rapidly available (at a very profitable price) to level three agencies the world over and it is not helped by the likes of Chinese and Israeli telecoms hardware provider companies building in the tracking facilities from the silicon upwards.”

They’re not trying to make it easy on us.

Nick P July 15, 2012 12:57 AM

@ Clive Robinson on CvP

"However, when you get inside a Castle it is a lot different from its exterior. Castles tend to be built on the notion that those "invited" in are trustworthy, so the internal defensive measures are still directed at the "uninvited", not the "invited"."

"A prison uses strong segregation / separation and minimal functional environments as its basic mechanism to ensure the security requirements."

The first quote is the problem I have with the analogy. There's certainly "trusted" code in a system designed on my model. However, most parts of the OS are subject to security enforcement & POLA. The trusted components are developed according to rigorous processes that are unlikely to result in critical bugs. So your second quote applies to my model quite well, especially as those goals were part of the security kernel approach that inspired my preventative view.

I think the prison analogy is fine for your system, but the castle one stretches too thin trying to cover the alternative. Also, to be clear to readers, I'm not expecting the average programmer to do this: the base platform will be done this way & the other developers leverage it. How? Well, there are many options and I leave it open. Only the TCB needs to use the rigorous development methods, & past projects show that part can be pretty small.

Wael July 15, 2012 3:49 AM

@ Clive Robinson, Nick P

I’ll stop at this point with “any questions?”

Nope! Just proposals:
Moving from an analogy to a usable model... The model should be simple and ideal, or perfect; the simplest construct possible. Prisons and Castles share some attributes: both present an interface or boundary between two separate domains or worlds, and both have zero or more inhabitants. From the outside, as you have said, they look alike. But they serve different purposes.

C: Keep uninvited people out
P: Keep invited people in

C: Protect insiders from outsiders
P: Protect outsiders from insiders

C: People enter by invitation
P: People enter by force when they break a protocol (law)

C: People leave at will
P: People must meet certain conditions to leave

C: Has a king
P: Has a Warden, maybe with fewer powers than a king in a castle

C: Invited people are equal citizens
P: Inhabitants are not equal; different sentences

C: People are equal when they leave
P: People may leave on parole -- probed periodically to monitor behavior, with occasional visits from a parole officer (a hypervisor again?)

C: People inside are good
P: People inside are bad (but maybe not in the model, if applied to data, as opposed to code)

Would it make sense to propose the following mapping?

1- Hardware in a castle and a prison is the building material: bricks, doors, windows...
2- Software is the people in the castle or prison
3- Firmware is the helpers of the Warden or King?
4- The Warden is the hypervisor, and so is the King. Who runs a Castle anyway?

Does that make sense, or am I having a pipe-dream at almost 2:00 AM?

Wael July 15, 2012 3:58 AM

@ Nick P, Clive Robinson on journalists & opsec

“I’m skeptical about stego, though. ”
Steganography: the science/art of hiding the existence of information.
Cryptography: the science of hiding the meaning of information.

If hiding the meaning of the message causes problems by attracting unwelcome attention, then the rational course is to hide the existence of the message, in my view.
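
A toy sketch of that distinction: hide a message's existence in the least significant bits of a cover buffer (a bytearray standing in for image pixel data). Illustrative only; real stego must also survive the statistical analysis discussed elsewhere in this thread.

```python
# Hide each bit of the secret in the LSB of one cover byte, then recover.
def hide(cover: bytearray, secret: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the LSB only
    return out

def reveal(stego: bytearray, n_bytes: int) -> bytes:
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytearray(range(256))
stego = hide(cover, b"meet at dawn")
assert reveal(stego, 12) == b"meet at dawn"
```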

David July 15, 2012 5:07 AM

@Nick P, Clive & Wael

I'm not sure who raised it first... but the mention of stego, along with the idea of hiding extra resources in a chip that was supposed to be there (I think someone suggested the GPU), led me down a mental path to hardware stego: hiding the computing hardware in plain sight.

Yes, the GPU is a pretty good place to put it, but that's only useful for code and storage. We need to develop better methods of hiding the actual compute infrastructure.

I seem to recall a discussion around wafer-thin and flexible circuitry (I know the digital SLR manufacturers have been keen on this for a while), but perhaps this might be an opportunity to pull the ICs out of their black packaging and find more subtle places to put them.

As an off-the-wall suggestion, how about embedding it in the journalist's elastic knee bandage (please don't hold me to that one... it was just intended as a thought-provoker).

Clive Robinson July 15, 2012 9:13 AM

@ Wael,

Crikey, there's a lot to reply to... So,

Interesting, seems like steganography is going to gain more importance than cryptography in some situations.

Yes and no. Steganography is almost the perfect description of "security by obscurity" in the information world, and by and large it does not work, due to signal-to-noise issues. If you think of it as just information, we have ways of showing up stego at around 1 bit in 2^10, and in only mildly different cases at around 1 bit in 2^20. The reason is that there is "noise" and there is "random", and they are very far from being the same, as analysis with FFTs and FWTs shows quite easily, as do other statistical tests.

Most "noise" is actually due to cyclical interference of one form or another, be it regular waveforms (think mains hum and clicks from motor noise) or irregular waveforms with predictable shape. Most "random" signals are neither cyclical nor of predictable shape, so they can be spotted as different and therefore suspicious. Thus getting good information stego is as hard as getting completely unbiased randomness from physical sources (they are, if you think about it, flip sides of the same problem). If you hunt back about twenty or thirty years you will see that the likes of the NSA, GCHQ, et al were advertising for applied mathematicians in the "signals analysis" area, and this is just one reason for it (another being digging out "odd traffic" / "odd behaviour" from large data sets as the basis of more refined traffic analysis).
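
The noise-versus-random point is easy to demonstrate numerically; a small sketch with illustrative parameters (numpy assumed):

```python
# Cyclical "noise" (e.g. mains hum) concentrates its energy in a few FFT
# bins, while random data spreads it flat, so the two are easy to tell apart.
import numpy as np

n = 4096
t = np.arange(n)
hum = np.sin(2 * np.pi * 50 * t / n)               # periodic interference
rnd = np.random.default_rng(0).standard_normal(n)  # "random" stand-in

for name, sig in [("hum", hum), ("random", rnd)]:
    spec = np.abs(np.fft.rfft(sig)) ** 2
    peak_fraction = spec.max() / spec.sum()
    print(f"{name}: fraction of energy in strongest bin = {peak_fraction:.3f}")
# hum puts nearly all its energy in one bin; random spreads it thinly,
# which is exactly what makes naive stego stand out under analysis.
```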

To try to make stego work you need to do four basic things,

1, Find a carrier channel with sufficient bandwidth to hide the final stego covert channel.

2, Apply strong encryption to the covert channel data to make it appear totally random.

3, Apply reversible mapping techniques to turn the encrypted data from "random" into "noise".

4, Add the covert "noise" to the carrier data in a way that appears natural on analysis, but still allows the recipient to strip it out reliably.

Whilst stage 2 is fairly easy, and stage 1 is a numbers game and thus similarly easy theoretically (though not practically), stages 3 and 4 are not. For a start, both are going to involve significant "data expansion", which reflects back onto stage 1. Further, stage 4 is multidimensional: real noise both adds to and multiplies with the carrier signal, and due to channel limitations it affects not just the amplitude and time domains but also the frequency/phase/sequency domains. This requires a complex convolving process, which in turn requires a complex deconvolving process just to recover the expanded covert encrypted data and get back to stage 3.

Using hidden hardware as a stego system is likewise a difficult task, especially with COTS systems where the level three adversary has access to the system for as little as a few moments (think about the cat and mouse games between customs and drug runners, and what has evolved there).

First, simply weighing the device might give notice that additional hardware has been added. Then a simple visual analysis, after using a screwdriver to open the case, will show up most changes such as the addition of extra hardware. That is, simply tipping the PCBs through the light will show the frozen flow marks in the solder etc. from the manufacturing process, which soldering in additional or replacement items disrupts very visibly (as any end-of-line PCB quality control inspector will tell you). In some respects this is like the raised printing used as a security feature on bank notes and other security documents.

Further, low-level radiation sources and photomultiplier systems can produce an image like an X-ray that can be "overlaid in a blinker box" against a known standard for the COTS make and model, which will show up any differences as a flashing or differently coloured signal.

A similar "blink box" system can be used for system software such as the OS and apps (think forensic hashes etc.). Especially if the manufacturer has kindly provided a test port, or worse, a DMA-based port of some kind such as FireWire.

Stego is a hard problem. One solution is to write the "next must-have game" (such as Angry Birds or Hungry Shark) and include the stego function within it. Say an "etch-a-sketch charades" game for networked mobiles etc.

Hence my comments about having an "in" with the manufacturers.

Life would become much easier for journalists, dissidents, spies etc. if FDE with remote key management became standard, because then they would be no different from the norm. However I can not see most governments allowing this to happen; they would probably roll out "The Four Horsemen Of The Internet" (terrorism, kiddy porn, drugs etc.) as an excuse to legislate against its use.

Wael July 15, 2012 1:08 PM

@ Clive Robinson

Yes and no, steganography is almost the perfect description of "security by obscurity"
And spread spectrum is almost the perfect implementation of steganography. It will look like noise to Mr. Fourier if he is not aware of the channels and the randomness, a la CDMA-type spreading.

Nick P July 15, 2012 3:02 PM

@ Wael on stego

I don't know what the Webster's definition of stego is. Perhaps it's close to the definition you gave. I'm using the common-usage version. When people say "stego", they usually mean tools that hide secret information inside a publicly visible type of information. I've seen this done in pics, videos, audio, fake spam emails, network packets, etc. I've done it in other media, too.

So, hiding things is certainly a solution or preferred approach. I corroborated that in my post on the matter. However, when most people talk stego, the above is what they mean. And detection methods for that are always getting better. Its usefulness will depend on the group targeting you. Physical concealment of memory or computing hardware might have fewer issues.

Nick P July 15, 2012 3:06 PM

@ Clive Robinson

"One solution is to write the "next must-have game" (such as Angry Birds or Hungry Shark) and include the stego function within it. Say an "etch-a-sketch charades" game for networked mobiles etc.

Hence my comments about having an "in" with the manufacturers."

Might be a game or popular app. You can do a binary or bytecode modification, too. The main risk is that the modified app will be on the filesystem. A very easy countermeasure is comparing its hash to a known legit copy.
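
The countermeasure is as simple as it sounds; a sketch with illustrative file names:

```python
# Hash the on-device app and compare against a known-good reference copy.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("installed_app.bin") != sha256_of("vendor_reference.bin"):
    print("app differs from the legitimate copy: possible tampering")
```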

We might need to further define enemy capabilities. Many of these groups are just vacuuming up information, intercepting packets, etc. They aren’t doing the above check and might not be doing subversion of the asset (again, depending on the group). So, perhaps defender-supporting organizations should try to map out their capabilities & change the advice to fit the likely risks.

Clive Robinson July 15, 2012 4:22 PM

@ Nick P,

Sorry for the time delay; one downside of lodging with the medical profession is the use of "sleep inducing" chemicals. Whilst they might solve one problem, they do have side effects, one of which is the "hangdown effect", where the stuff ends up destroying your sleep rhythm and you end up napping all the time. You end up (in the words of the song) looking "hungdown and brungdown".

So back to the chase,

Also, if I used encryption, I'd try to make it require a remote, extra key wherever possible. This is to resist rubber-hose cryptanalysis. (Maybe we should call it fingernail cryptanalysis, to be more accurate, eh?)

The remote extra key(s) are essential as "proof" of "zero knowledge" on the part of the person/employee carrying the hardware. However, as we have previously discussed, it needs some spatial as well as time-based elements, plus a duress key, to be effective, and the key needs to be "shared" as several partial keys across many jurisdictions.

Preferably there also needs to be a series of unknown steps, the equivalent of the "cut outs" and "dead letter boxes" of old-style field craft, such as having to use the likes of Google to search through a series of chained sites completely outside the control of either the carrier or his employing organisation (i.e. openish blog sites or social networking sites). Such that watching the actions of the carrier this time will provide "zero future knowledge" that can be used as a watch point or predictor for this employee or other employees. All of which requires one heck of a lot of discipline on the part of the carrying employee and their employer (knowing how to do something and actually doing it properly in practice are poles apart in abilities and discipline...).
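
The "several partial keys across many jurisdictions" idea is essentially threshold secret sharing. A compact sketch of Shamir's scheme over a prime field, with illustrative parameters (any 3 of 5 shares reconstruct the key; fewer reveal nothing):

```python
# Shamir secret sharing sketch: the key is the constant term of a random
# degree-(k-1) polynomial; shares are points on it; Lagrange interpolation
# at x=0 recovers the key from any k shares. Parameters are illustrative.
import secrets

P = 2**127 - 1   # a Mersenne prime large enough for a 16-byte key

def make_shares(secret, k, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = secrets.randbelow(P)
shares = make_shares(key, k=3, n=5)      # 5 jurisdictions, any 3 suffice
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key
```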

Oh, as for "fingernails", did you ever see "Running Man", where they torture him by drilling holes in his teeth with a dental drill?

Apparently (according to a book written by a defector during the cold war) a form of it was used as a field interrogation technique. You would grab some hapless conscript soldier, tie him to a tree etc. and start filing down his teeth with a large "railway bastard file" (so called because it's 18 inches long and has 13 teeth per inch). Only after he started screaming almost incoherently did you start asking questions like "what is your name?" and "what is your serial number?"... Just plain nasty, and very probably effective for local tactical info against conscripts etc. So how about "dental cryptanalysis"? It definitely puts a new meaning into "getting your teeth into the problem" 😉

With regards,

The main point of using black boxes & main CPU onboard memory is that it should be harder to extract without uncommon expertise. What do you think?

From discussions with RobertT, and info discernible from recent US DoD project requests, we know that detecting "Fake Chips" or "Backdoored Chips" is verging on impossible by direct examination, and that poisoning of a chip design done at the "macro" level via "test hookups" put in at the foundry is more than easily possible. However, as I said to Wael above, replacing a chip or adding a chip on an existing PCB leaves fairly obvious re-work tell-tales for a trained goods-inwards quality control inspector. This is especially true of the new lead-free solders, which actually leach copper off the PCB and thus make re-work fraught if not impossible in many cases. Worse, the likes of Apple now no longer provide sockets by which you can upgrade your hardware, and this is a trend I expect to see continue as more and more systems become "commoditized" in the likes of set top boxes, games consoles, thin clients, smart phones etc.

So to do it effectively you are going to need either an "in" with the manufacturer or to know how to make a very convincing fake iPhone etc. (not impossible; apparently the Chinese are currently doing this, which is just one reason Apple is building a production plant in the US in Texas).

Which brings us onto custom gadgets such as your sugestion,

… also make purpose-built voice recorders, Raspberry Pi-like devices, whatever. I'm sure this isn't the only example where it would cost virtually nothing & take an acceptable amount of time/energy to maintain OPSEC. So, I think they could do way better if taught how & provided reasonable methods.

The technological idea is sound; unfortunately there are some very human issues to contend with.

First and foremost, whilst some individual journalists might be concerned about what happens to those they interview, the "hard news" management attitude is that interviewees are a fully expendable, infinite resource to be exploited at minimum cost and fastest time to air. And, as the journos themselves are only too aware these days, they are likewise fully replaceable by some "blond anchor girl" and a bunch of low-paid (or unpaid intern) Internet trawlers, all sitting safe behind their desks in corporate America, juicing out on their "skinny latte with quad shot of espresso" and energy drink meals.

The mantra of such management is "efficient production", which in reality means the lie of "shareholder value", or the reality of maximum profit in minimum time to make this quarter's figures, the bonus attached and thus the possibility of moving up. One aspect of this is "fast technology churn", where the latest gizmos are purchased and the journos are just given them and told, without training, that they have to be 10% (or whatever) more efficient. In this sort of environment security is effectively a non-starter, because it does not have a "bottom line book value" that can be massaged to improve the end-of-quarter figures. So rather than being seen as an "enabler" of trust and future stories, it's actually seen as "lock in" to a journalist and thus a dead end. Worse, it's not in the advertisers' interests for journos to have protected sources; for the advertisers' corporate lawyers and image makers it's all about transparent control of the product, so as not to negatively impact the projection of their "one true image". This extends down to the likes of playlists on radio stations, where advertisers dictate not just what gets blacklisted but what gets whitelisted as well.

It sounds almost unbelievable, I know, but when you've worked in the broadcast industry and realise that the "F**k Button" that bleeps out or silently drops spoken content is used not just for "naughty words" but for advertiser protection as well, you start to understand the ethos behind it.

The dirty little secret of this came out big time in the UK with a well-known manufacturer of chocolate bars, breakfast cereals and baby milk formula. They had been censured by, amongst others, the World Health Organisation for pushing baby milk formula into various third world markets where its use had the measurable net effect (through the use of dirty drinking water to make up the formula) of increasing the risk of infant mortality. Well, the company concerned sponsored a well-known "Reality TV" show and were horrified to hear what they considered the contestants bad-mouthing them (i.e. in "reality", discussing the company's dirty little practices). As you can imagine, the company had a major "hissy fit" and "the toys got thrown out of the pram". Unfortunately the production company went a little overboard to protect the revenue stream; it became so obvious that it became public knowledge and was reported in major newspapers etc. (which had the perverse effect of getting the papers new revenue from the company, in effect to buy them off and shut them up).

But it gets worse. Many of the agencies responsible for advertising and placing stories etc. have clients who feel the need to be "loved". One such person (amongst many) was the now deceased President / Dictator of Libya (Gaddafi), who for many years was very free with money to "build his humanitarian image", and many organisations (universities, influential NGOs, news organisations), along with various elected officials (including, if published reports are to be believed, ex UK PM Tony Blair), have been quite visible recipients of the largesse of Libyan influence / money. Obviously such back channels, involving both monetary / positional gratification and humiliation potential, are very powerful motivators, and it would not be unknown for people to protect their own personal position by acquiescing to "information requests" from the sponsoring organisations... Thus the journalists get it not just from the level three adversaries of the countries concerned, but potentially from certain face-saving individuals in their own employing organisations...

Then there are the other issues, such as cost / usability / convenience / replaceability / maintenance / etc., that you will get told about when you try to encroach on somebody else's "paid for" patch with the big broadcast organisations (can you afford to take "purchasing managers" out to dinner at trade shows, or invite them to hospitality events at major sporting events, or to product promotions in exotic locations, with free transport, accommodation and endless supplies of "goodies", the cheapest of which would be vintage champagne?).

I've also seen first hand, when working in the telecommunications industry, those working for "government appointed" regulators, responsible for verifying a manufacturer's facilities for compliance with required national and international standards, be not just "wined and dined" but given "special rates" to buy goods for their homes, and cars to drive around in, and also given "company" in case they feel lonely during extended visits.

Clive Robinson July 15, 2012 7:22 PM

@ Nick P,

The main risk is that the modified app will be on the filesystem. A very easy countermeasure is comparing its hash to a known legit copy

Hence the reason you have to write the original version, so that the hash is valid.

It is, however, based on the assumption that even level three adversaries don't have the capability to check each and every app/game in existence. And even if they do check the most popular 0.1%, having it on your phone then does not make you a spy, just a slightly suspect "game player".

Thus it's the same as having an "in" with the hardware manufacturer, only cheaper and possibly much easier to hide (after all, how do you background check what appears to be a paranoid games writer secretly working on the next "big thing" in their back bedroom?).

Mind you, if I were to try such a thing, I would try to place myself or my code writer in with the app writers at the OS organisation and get it into the standard release as nicely "signed code" [1], and thus above "gold standard"...

With respect to,

So, perhaps defender-supporting organizations should try to map out their capabilities & change the advice to fit the likely risks

This is a catch 22 problem 😉

To get the money to do the enumeration you need paying customers, and they are most likely to be the very people you are enumerating...

It's similar to the "security workaround / patch problem": as an attacker, being the first to get your hands on a patch allows you to work it backwards to find the vulnerability, so you can exploit it; a short window against those who patch often, much longer against those who don't patch for a long time or at all (the majority).

@ Wael,

And spread spectrum is almost the perfect implementation of steganography. It will look like noise to Mr. Fourier if he is not aware of the channels and the randomness, a la CDMA-type spreading

Yes and no. Spread Spectrum for "Low Probability of Detection" (LPD-SS), be it Direct Sequence (DS) or Frequency Hopping (FH), works on "code gain": the spread bandwidth is X times the data bandwidth, and thus you have a coding gain of ~X. If the SS transmitter is sufficiently far from the intercept ECM / surveillance receiver, then it will appear to be below the thermal / atmospheric noise floor. However this is rarely the case, even with the likes of the US GPS military signal or other satellite signals. If your intercept station has sufficient gain in its antenna, and knows where to point it, then the SS signal is no longer below the thermal / atmospheric noise floor. On a spectrum analyser its basic characteristics (type, chip rate, coded bandwidth) are usually fairly obvious to the practiced eye. Worse, for DS-SS the spreading code usually has to be linear, and it has been shown that you only need to know 2M bits of an M-stage linear generator's output to work out the entire sequence. As such, DS-SS is going out of fashion for LPD these days; its more frequent use is CDMA and shared bandwidth/services, especially in the likes of the ISM bands and analog TV bands.
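
The "2M bits" remark is the classic Berlekamp-Massey result: 2L consecutive output bits pin down an L-stage LFSR. A sketch, using a textbook 16-bit maximal-length LFSR (taps 16, 14, 13, 11; a standard primitive polynomial):

```python
# Berlekamp-Massey over GF(2): returns the length of the shortest LFSR
# that generates the given bit list. 2L bits suffice for an L-stage LFSR.
from itertools import islice

def berlekamp_massey(s):
    n = len(s)
    C = [0] * n; B = [0] * n
    C[0] = B[0] = 1
    L, m = 0, 1
    for N in range(n):
        # discrepancy between the next bit and the current LFSR's prediction
        d = s[N]
        for i in range(1, L + 1):
            d ^= C[i] & s[N - i]
        if d == 0:
            m += 1
        elif 2 * L <= N:
            T = C[:]
            for i in range(m, n):
                C[i] ^= B[i - m]
            L = N + 1 - L
            B = T
            m = 1
        else:
            for i in range(m, n):
                C[i] ^= B[i - m]
            m += 1
    return L

def lfsr16(seed=0xACE1):
    # 16-bit maximal-length Fibonacci LFSR, taps 16,14,13,11
    state = seed
    while True:
        yield state & 1
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)

stream = list(islice(lfsr16(), 32))   # 2M = 32 observed bits
print(berlekamp_massey(stream))       # -> 16: the whole sequence is now predictable
```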

Even when the spread signal is below the noise floor there is another issue with LPD-SS systems, which is "receiver sync / lock up". That is, the receiver has to have some way of syncing its deconvolution code with the convolution code at the transmitter, or it will not receive anything. For various reasons (drift, Doppler) accurate clocks don't work very well, nor do third-party transmitted references.

There are a couple of practical methods used to obtain initial lock. One is to use a higher-energy beacon signal with a short M-sequence code [2] and/or a low chip rate; another is to use a burst signal that acts as a lead-in pilot. The choice generally depends on whether transmission is continuous or short duration, such as in two-way PTT systems used for voice comms. Either way the beacon acts like a "flashlight in the dark" and can be fairly easily found with modern high-bandwidth IQ receivers and backend DSP, as used in high-end military ECM / surveillance receivers. Further, it's not just FFTs or FWTs that are used in such receivers these days; there are more complex mathematical techniques used to find the characteristics of the signals and dig them out from the noise or other interference. The point being that once a part of the transmitted signal has been recognised it can be used as a sync signal, and even the use of simple averaging on the IQ memory will bring up more characteristics, so bit by bit the original signal is reconstructed and analysed. For instance, take FH-SS: the envelope characteristics of each transmitted frequency hop are almost identical, and this can be used to tell them apart from other signals in the area. This can be improved by using two or more directional antennas, and with modern electronically steered / synthesized antennas this can be done almost in real time.

Early PTT SS systems actually used SAW matched filters to provide a sync pulse that had an ERP of many times the coding gain, and in some cases 10 dB up on the equivalent non-spread ERP channel. The more modern version is to use high-speed A-D converters and DSP matched filters at the receiver, as this can be programmed like a SelCall system.

The overall point being that although the chip sequence may not be deducible, and thus the data not recoverable by the surveillance receiver, the beacon signal indicates both that a transmission is happening and from what direction, and in some cases even the range.

In the intel game you don't need to know what has been said to hang a person, just that they have talked to another suspect. Which is why anti-traffic-analysis technology is perhaps more important than data security technology such as encryption these days (TOR developers please take note).

[1] and people wonder why I have no faith in signed code or code reviews etc etc 😉

[2] There are a large number of M-sequence codes that can be used. The earliest were "Gold codes" and "JPL ranging codes", even simple PN sequences, with the more esoteric Walsh-Hadamard, Kasami and even Barker codes being used for other functions within SS systems. To be a good contender for SS use a sequence does not actually have to be an M-sequence; it needs few characteristics other than strong autocorrelation, and for DS-SS it generally has to be linear and balanced to avoid offsets in the receiver. Thus you could use the output of a modern crypto function, such as a stream generator or a block cipher in CTR mode, for FH-SS, as long as you have some method to sync the receiver with the transmitter ( http://www.netlab.tkk.fi/opetus/s38220/reports_97/kettunen.pdf ).
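
To illustrate the autocorrelation property the footnote leans on: in +/-1 form, an m-sequence of period N correlates to N at zero shift and to -1 at every other shift. A sketch with a small 5-stage maximal LFSR (taps 5 and 3, period 31), numpy assumed:

```python
import numpy as np

def lfsr5_sequence():
    # 5-bit maximal-length Fibonacci LFSR (taps 5,3), mapped to +/-1
    state, out = 0b11111, []
    for _ in range(31):
        out.append(1.0 if state & 1 else -1.0)
        bit = (state ^ (state >> 2)) & 1
        state = (state >> 1) | (bit << 4)
    return np.array(out)

seq = lfsr5_sequence()
for shift in (0, 1, 7):
    corr = np.dot(seq, np.roll(seq, shift))
    print(f"shift {shift}: autocorrelation = {corr:+.0f}")   # 31, -1, -1
```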

Wael July 15, 2012 7:24 PM

@ Nick P on Stego

“stego”, they usually mean tools…
We are basically saying the same thing. The subtle difference is that you are emphasizing "tools" as opposed to "information". That difference does not affect the discussion. I also meant by stego exactly what you mentioned... Distribute the information across a bunch of movies, JPEGs, etc. (for data at rest) and use spread spectrum / frequency hopping or other techniques for transmissions (data in transit). It is still STO (Security Through Obscurity). We have stigmatized STO a lot, and it seems we both agree that it has a place under some circumstances. I remember reading a couple of good books about these subjects by Simon Singh: "The Science of Secrecy" and "The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography". They were very lightweight, but contained some interesting history and information.

Wael July 15, 2012 7:33 PM

@ Clive Robinson

Please define "level three adversaries". Is that a funded organization / major government; the IBM classification?
1- Script kiddies
2- Knowledgeable insider
3- Funded organization (which could use a team of class 2 as well)?

Clive Robinson July 15, 2012 10:33 PM

@ Wael,

With regard to the levels of adversaries, things have changed a little with time since the IBM / Ross J Anderson definition, as evidenced by the likes of Stuxnet, Duqu and Flame. But loosely,

Level 1 is an individual or small team, constrained by very limited resources, with no knowledge of the systems other than that which is available to any other external entity.

Level 2 is an individual, team or organisation that is limited in resources but has access to an increased level of knowledge over that which is available to any other external entity.

Level 3 is an organisation or agency that is not limited by resources and has access to any information that is available to the system developers, plus related information that is effectively the result of classified research.

It is important to note that the above definitions are (unlike the IBM / RJA definitions) based on a three-dimensional graph.

The dimensions being,

A, The number of skilled human resources.
B, The availability of resources that are neither human nor target-system-related information.
C, The level of target-system-related information available.

In reality the three levels should be further sub-classified. That is, I don't expect the government intel agencies of some countries to be on par with those of other countries; in fact I actually expect some private research organisations to be better than many government intel agencies, and likewise some criminal enterprises.

What has become clear in this modern world is that effectively unlimited bureaucratic budgets don't buy the "smarts" capable of doing cutting-edge research. Likewise they don't buy the sorts of information criminals generally have little trouble obtaining via simple theft or breaking, entering and taking away etc.

That is, "free enterprise" gets results that reliable incomes and pensions don't. Which is why we hear about "exploit brokers" and the like getting government work / contracts / money.

However, until some industry luminary comes up with a more up-to-date set of generally accepted categories, I'm stuck with other people's approximations, hence I still try to squeeze the multidimensional model into three levels 🙁

Wael July 16, 2012 12:24 AM

@ Clive Robinson

On "level three adversaries":
Thank you, sir. Now we can talk on the same wavelength.

I'm stuck with other people's approximations, hence I still try to squeeze the multidimensional model into three levels 🙁 AND (this is the polite version; the more common usage uses another, lower part of the anatomy than the neck, with the further unsaid idiom of things being a considerable pain there 😉

Yup! That’s gotta be a pain in the neck, and some have a lower opinion than that 😉
Another polite version…

RobertT July 16, 2012 2:49 AM

@CliveR
"From discussions with RobertT, and info discernible from recent US DoD project requests, we know that detecting "Fake Chips" or "Backdoored Chips" is verging on impossible by direct examination, and that poisoning of a chip design done at the "macro" level via "test hookups" put in at the foundry is more than easily possible"

Having a third party modify a chip at the foundry level, without the original design company knowing the modification occurred, is possible BUT it is definitely not easy.

The problem is that the fab does not have the original database that created the chip, so it is difficult to verify that modifications do not affect the overall chip performance. Any significant changes in performance, especially bugs, will attract a lot of attention and reveal the changes / unexpected cell hook-ups.

Most chip designers would be VERY confused if the on-chip metal hookups in a certain area were different from those expected; however, they would never suspect that a TLA was involved.

It is worth mentioning that the final chip design company does not necessarily have access to the whole chip database. This is the case when the final chip uses IP blocks that are licensed as so-called "hard macros". Microprocessor cores such as ARM9 or MIPS are very often provided as black-box modules. So it is possible for someone at ARM (for example) to add spying structures that Qualcomm (the cell phone chip designer) includes in their database, and the foundry then selectively hooks up.

The real trick is to distance oneself from the database modification, so that even if it is discovered, someone else gets to spend their golden years at the big house.

BTW: from my experience, very few security engineers understand that on-chip LFSRs and related pseudo-random sequence generators can actually enhance data extraction, because they act in exactly the same way as a DSSS radio, so you get the same processing gain. I recently saw an anti-DPA structure that used a simple LFSR to randomize the clock. They couldn't believe it when I externally locked to the LFSR sequence using a correlator and then extracted the DPA signature.
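
A sketch of why a known (or recovered) spreading sequence hands the attacker processing gain: a random +/-1 PN sequence stands in for the LFSR chips, and the "leak" is a single bit buried well below the per-chip noise. Parameters are illustrative; numpy assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chips = 4096
pn = rng.choice([-1.0, 1.0], size=n_chips)        # stand-in for the LFSR chips

secret_bit = -1.0                                  # the "DPA signature" leak
signal = 0.05 * secret_bit * pn                    # tiny leak, spread by the PN
observed = signal + rng.standard_normal(n_chips)   # swamped by noise per chip

despread = np.dot(observed, pn) / n_chips          # correlate with the known PN
print(f"despread value: {despread:+.3f}")          # ~ -0.05: well above the
print("recovered bit:", -1 if despread < 0 else +1)  # residual noise of ~1/sqrt(N)
```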

Clive Robinson July 16, 2012 8:13 AM

@ Robert T,

LFSRs and related pseudo-random sequence generators can actually enhance data extraction, because they act in exactly the same way as a DSSS radio, so you get the same processing gain

I'm not quite sure where the idea originated, but certainly CCITT used "whitening" in their V-series modem specs so that the "energy would stay in the mask". It was certainly subsequent to that that Far Eastern PC mainboard manufacturers added an LFSR to the main CPU clock to "spread the energy" across the frequency band, so it would meet the EMI / EMC masks with fewer (quite expensive) decoupling components.

From a series of experiments I found that, although the autocorrelation function was not optimal (unless all boards were from the same manufacturer), it allowed something close to a ten-times range increase when picking up a PC's signals to reconstitute things like keyboard data. Even better, it worked against the "sea of noise" theory that a PC for "secret use" could be hidden, without TEMPEST shielding, amongst the "sea of noise" of a whole load of other PCs...

And of course the increased distance further allowed several correlated antennas to be used, to give even better rejection of the other PCs...

Some of them "young guns" have not yet learnt not to rush in where us olduns know not to tread 😉

Nick P July 16, 2012 3:22 PM

@ Wael and Clive

I still don't like the classifications. I say we take a page from Common Criteria's book. The Orange Book used to talk about how secure a system was by putting it in certain categories. The categories mixed both assurance-increasing techniques & security features. This led to tons of problems I won't go into. (A helpful, occasionally humorous read is Schaefer's "If A1 is the answer, what was the question?")

Common Criteria changed the situation. I'm going to simplify it here. The big change was the separation of a security classification into the security target and the evaluation assurance level. The security target talked about capabilities, features, etc. The evaluation assurance level was concerned with the required steps to establish confidence in those claimed capabilities. There were also standardized terms that could be used by third-party evaluators & that could "augment" a given rating.

Alright, now our current scenario. We need to create a new way of classifying general and specific capabilities. Clive had some good metrics. How much inside knowledge do they have of the target organization? Of its countermeasures? How much funding? How much time to RE security measures, map out the organization, plant moles, etc.?

That's just the basics. It doesn't really say as much about actual, long-term capabilities, though. That's counterintuitive for most classifications, but accurate, for this reason: outsourcing. Organizations with the cash can outsource many levels of attack, & there are happy mercenaries at most levels of traditional classifications. Additionally, the black hat markets have matured to the point that there are many groups with different specialties able to work together to accomplish goals easily.

So a company might be hit in many counter-intuitive ways. An intelligent, but non-technical, insider can circumvent DLP using step-by-step instructions on the web written by a cutting-edge expert. Type 1-2 attackers can target high-value assets by leasing or contracting the capabilities of organizations with Type 3 technical capability. Additionally, it's hard to distinguish between TLAs, commercial & criminal markets in the technical capabilities of both surveillance and offensive tools. As far as a defender is concerned, any of these groups could cause grave damage if they chose.

Any ideas on a better way to do the classification? And especially one that takes into account the decentralized nature of the modern battleground?

Wael July 16, 2012 4:53 PM

@ Nick P, Clive Robinson

I am not too strongly opinionated about definitions, so long as we agree on the meaning. That way we won't talk past each other. I thought about this as well, a few years back. I will try to recollect my thoughts and get back to you later on...

Wael July 17, 2012 5:10 AM

@ Nick P, Clive Robinson on C-v-P

“Moving from an analogy to a usable model…”
Had an epiphany…
Castle represents: Complete awareness !!!
Prison represents: Total assured control !!!

Each has an owner inside, protecting the confidentiality, integrity, availability, and accountability of assets.

Assets are people.

@ Nick P, will give you my thoughts on categorization later. My skull is getting heavy...

Nick P July 17, 2012 3:00 PM

@ wael

My stuff derives from the old security kernel models. In those, there is no monitor really: data is forced (controlled) to do certain things and the assurance is put into that. Unusual stuff, if monitored, is logged for admins.

So, nice try again on making that analogy work. 😉

Wael July 17, 2012 3:29 PM

@ Nick P

Dude! The Analogy graduated and became a Model. You are such a tough customer 😉
Seriously though, this is a different discussion than the classifications thread. That… I still owe you.

Wael July 17, 2012 9:23 PM

@ Nick P

Clarification request:

In those, there is no monitor really: data is forced (controlled) to do certain things and the assurance is put into that. Unusual stuff, if monitored, is logged for admins.

You see, data doesn't act of its own free will! It's acted upon by something that's passive-voiced in your statement. Please activate the passive voice...

Nick P July 18, 2012 1:38 AM

” The Analogy graduated and became a Model. You are such a tough customer ;)”

But of course! What other type could muster the will to try to parry Clive all this time? 😉

“Castle represents: Complete awareness !!!
Prison represents: Total assured control !!!”

Well, see, that's the thing. The point of the B3/A1-class security kernels, later capability systems & recent SKPP-based platforms is [basically] "total assured control" of resources & information flow (where it really matters most). The real backbone of Clive's plan combines extreme POLA & signature-based monitoring (awareness?) that works down to the function level. So the representation seems a bit backwards on the surface. It's one reason why that castle bothers me.

If it helps, I have some attributes of what he calls castles that might lead to a better term. Its focus is mitigating by design. It also tries to contain the resultant damage. Logging of critical events of some sort was in past designs, although what to log isn't agreed on. The TCB is minimal (as possible), non-bypassable, and always enforces the security policy. Correct by construction in concept, development, deployment & production. (There's more, but this is more than 99+% of current systems.) I've extended it conceptually to allow for monitoring (behavioral, signature, hash change) & recovery-based architectures. The "system" may be on a chip, local, a mix of trusted/untrusted, or totally decentralized. The important thing is that the aforementioned traits are present. (Note: web 2.0-style stuff is excluded, mostly because we're still trying to wrap our minds around getting it done right, much less securely.)

(Note: enforcement, configuration, design, etc. was where the real security value was in my model. Monitoring was a just-in-case-that-didn't-work thing. It might have greater value these days. I talked as if my model's security was passive because, the way kernel/user mode works, it ends up that way: user stuff runs, tries something privileged, gets an automatic check by the kernel, passes/fails. How active does that sound? Other measures might be aggressive, but the main technique was reactive. They're gradually getting more proactive, hence me adding monitoring and recovery-style stuff over the past year or so.)

Personally, I think the castle model fails to capture this stuff because it talks about physical things & we're talking about information. This is similar, perhaps, to the DRM debate, where the RIAA tries to use many physical "theft" analogies to describe illegal downloading of songs. It is left to the rational person to figure out how to equate stealing 30,000 CDs from a store with "copying" 30,000 songs. Managing data is quite different from managing people in a building. (Who copies people, or stashes them steganographically in business files? 😉)

Note: I noticed that Clive compared the two, saying the castle trusts insiders whereas the prison is supposed to contain insiders (distrust). I think that's inaccurate: the "trusted" subjects/portions of stuff in my model are supposed to be incredibly vetted, kept to a minimum, and made to communicate in restricted ways with other things. Translated: they're assured through a rigorous process, denied any extra resources, and use limited communication/sharing capabilities. To me, it just seems like two different ways (or degrees) of distrusting insiders. The castle analogy takes another hit.

So, you can keep saying it that way if you want. One advantage is that people googling Castle vs Prison on this blog will find our old discussions, as it seems you were digging through some. We actually intended that to happen early on. I'm just concerned that the model might limit the way people think about the actual thing, which is quite a bit different. The things it does hit on correctly are that these designs take quite a bit of resources up front & they're fortified (hence me saying fortress once).

Wael July 18, 2012 4:24 AM

@ Nick P

I hope I don’t irritate you with this. I will try to make it a little amusing, just in case…

Models serve to simplify real-world objects, phenomena, situations, etc. In this case I am talking about constructing an abstract model of security. I don't understand what B3/A1, Orange Book, WEB5.0, kernels, hypervisors, programming languages, CPUs (CISC or RISC), Common Criteria, quantum mechanics, clocks, encryption, protocols... I don't even know what POLA (or the other security principles you did not talk about) is.

So why do we need a model? Quite frankly, I don't see one. I see security failures and breaches that keep happening over and over. And you have some security people saying "buffer overflow", "race condition", "side channel", DPA, or "implementation problems", and my favorite catch-all phrase, "security hole / attack vector"; or let's run Coverity or Klocwork for static analysis. Oh, let's sign this code; maybe if we use a "stronger hash", or have an immutable boot block. Oh! TCB will do it! How about a small verifiable microkernel, that's got to be it! But by far the toughest one is what Clive Robinson threw at me, saying axioms can't be relied on; that will take me some time to recover from. What I see is "best practices", B3/A1, the Orange Book and the other links you and Clive Robinson mentioned, which are all good. But the big picture is missing. I will give you an example.

If I gave you two devices and asked you which one is more secure, how would you go about answering? Threat modeling and attack trees? (Bruce talked about that, I think, and I admit I do not fully grasp it yet)... We can talk about that in the "classifications" thread, but it is related to this discussion as well.

Once we construct that model, we can talk about security principles, implementations, B3/A1, etc., and the other lower-level details.

Maybe we should take a different approach: If I were to ask you to build a hypothetical general purpose computer system, but it needs to be absolutely secure, guaranteed that no one can break it, where would you start? Remember this is only on paper. Start from scratch [1].

Do you “feel me”, Nick P? ☺
I stole that from Samuel L. Jackson in one of his movies, when someone asked him that. But I remember that his answer was: yes! "I feel you" 😉

[1] I once took a class in the math department with a world-renowned professor of Austrian origin. I ended up dropping it, but I learned more there than in other classes that I did OK in. I learned three things:

1- During a proof of a proposition that I had to do on the whiteboard, I started by saying, "From Gauss's formula, we can..." He stopped me there. He said, "Who is Gauss?" I told him: the German guy! Carl Friedrich Gauss? He said, "I don't know him". I stopped there, could not finish the proof, and went back to my desk thinking: this guy is strange! How could he not have heard of Gauss, especially as he (the prof) is Austrian? Of course I realized shortly after that he did not want me to take other people's word for granted without understanding how they arrived at their formula, even if it was Gauss. He wanted me to derive that formula before I used it to prove the proposition.
2- I also learnt that it is okay to go backwards when you are given something to prove. I always did that before this class, but always felt guilty that I had "cheated". It's like going from the end of the maze to the beginning. But it's OK. He said that most mathematicians who came up with elegant proofs did that. They went backwards, or did a lot of messy, unorthodox things. Once they reached what they wanted, they cleaned everything up and presented it as if they had gone in the sequence they made public.
3- I remember something funny he said. He was giving an overview of a problem, and said: I will give you a heuristic argument (before the proof), something that an engineer or a physicist would call a proof ☺

Wael July 18, 2012 11:50 AM

@ Nick P (C-v-P long discussion)

Perhaps we should take this discussion off-line. It seems no one is interested apart from you, Clive Robinson, O W, and RobertT.
Post your white-listed general-purpose email address and I will reply to you. I don't have a white-listed email address.

Nick P July 19, 2012 10:29 PM

@ Wael

Now I'm finally seeing where you're coming from. You said you want an abstract model of security? And you aren't familiar with kernels, POLA, or the TCB concept? These aren't buzzwords a la antivirus or "AES-256." These are among the tidbits of knowledge in the security domain that professionals use to improve the security of their systems. We have tools, principles, some abstract models, best practices, etc. They each do their part in the field(s). They are much like the parts of the body, in that it's better to have every part of the whole than a single part. Quite frankly, it isn't a basic engineering discipline or a typical math class: the truth of the models, the ability to connect them to reality, etc. is all very different and more difficult, especially as malicious intelligence exists in the system, i.e. "programming Satan's computer".

But abstract models for general-purpose security... We can start with Bell-LaPadula for confidentiality and Biba for integrity. Military use-cases somewhat comply. Early Orange Book systems had to use that as the security model. It failed due to necessary components (e.g. certain drivers or regraders) having to work at multiple levels, difficulty mapping to reality, etc. Then you have Clark-Wilson for integrity, Chinese Wall for distrusting users on the same PC, etc. I didn't see any of these classroom models pan out into real systems. Most recently, we have the separation kernel model, which looks to be the isolation version of security kernels. Its profile, the Separation Kernel Protection Profile, has led to the production of several complying products: INTEGRITY-178B, VxWorks MILS, and (supposedly) LynxSecure.

Yet MILS/SKPP has its critics. Academics like Bell, co-inventor of Bell-LaPadula, said it's essentially equivalent to the MLS problem. They also point out it was rushed by government & unproven by researchers first. So we have very few useful models to work with for security in general. (Starting to see why I wasn't using them? 😉) The longest-lasting real-world model is the ring model for access & integrity. Today, systems use just two of those states (kernel & user), but the division helps reduce critical bugs. Another model is the capability model, where access is granted based on ownership of a token/capability. Other models can technically be built on a capability system. A number of real-world systems use capabilities for access enforcement to minimize their security-critical parts. The hypervisor/VMM models have become popular, yet the concrete implementations often break their properties.
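
For readers who haven't met the two classroom models named above, a minimal sketch (levels and encodings illustrative): Bell-LaPadula is "no read up, no write down" for confidentiality, and Biba is its dual for integrity.

```python
# Toy mandatory access control checks for the two models.
LEVELS = {"unclassified": 0, "secret": 1, "topsecret": 2}

def blp_allows(subject_lvl, object_lvl, op):
    # Bell-LaPadula (confidentiality): read at-or-below, write at-or-above.
    if op == "read":
        return LEVELS[subject_lvl] >= LEVELS[object_lvl]
    if op == "write":
        return LEVELS[subject_lvl] <= LEVELS[object_lvl]
    return False

def biba_allows(subject_lvl, object_lvl, op):
    # Biba (integrity): the dual -- read at-or-above, write at-or-below.
    if op == "read":
        return LEVELS[subject_lvl] <= LEVELS[object_lvl]
    if op == "write":
        return LEVELS[subject_lvl] >= LEVELS[object_lvl]
    return False

assert blp_allows("secret", "unclassified", "read")       # read down: ok
assert not blp_allows("secret", "unclassified", "write")  # write down: no
assert not biba_allows("secret", "unclassified", "read")  # read down: no
```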

“If I were to ask you to build a hypothetical general purpose computer system, but it needs to be absolutely secure, guaranteed that no one can break it, where would you start?”

I’d tell you that it might be impossible. Here’s the security problem as NSA’s Brian Snow brilliantly summed it up:

“The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff!”

“So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels. We really need to focus on making a secure computer, not on making a computer secure – the point of view changes your beginning assumptions and requirements!”

I’ve posted projects that are trying to do the ground-up redesign he speaks of. DARPA’s Clean Slate program, with projects like Tiara and the SAFE architecture, are examples. However, that’s got unknowns written all over it. Additionally, they are combining some of the best principles, approaches & specific defense techniques from the past to do the heavy lifting. The only certainty is that the ground-up portion will reduce covert channels & the effect of legacy. However, even if you eliminate most legacy architecture & covert channels, you still have to be able to run many legacy apps on the thing. Not happening: such a redesign requires them to be totally redone. Hence, my middle path: security engineering that decomposes systems into components & allows incremental assurance work. You can have isolated apps running directly on the secure whatever, you can use a VM/wrapper for legacy apps, you can… pick & choose your level of security & compatibility.

There’s nothing magic here. There’s no model that you will look at and think, “why, that’s exactly what I’ve been wanting all along!” (one that maps to an equivalent concrete model…). If you post it, people will shoot holes in it from design, implementation, or legacy standpoints. A model is only as good as the implementations it allows. Good security must be done bottom-up from available hardware & top-down from good designs, models, whatever. It’s a complex orchestra of stuff.

So, I’ll have to think a bit on what to say next. I’m tempted to send you one of the classroom presentations I found & sent a newcomer recently. It nicely summed up the INFOSEC problem, security models, exemplar systems of the past, some recent issues, etc. Honestly, if you’re talking security ground-up, it takes a broad array of knowledge. Security kernels were designed against formal models & in conjunction with many other things. In contrast, just switching from C/C++ to a managed language can immunize an individual app against many security breaches with no knowledge of security. It’s a strange field. You must pick what you want to accomplish in it & then acquire the necessary knowledge/experience. That varies.

Nick P July 19, 2012 10:38 PM

@ Wael & others on email

Well, we have tried this before. Two things come to mind. The first is that Clive’s approach is half the discussion & he’s always preferred to talk here, never share an email, etc.

The other thing is that the conversation taking place on relevant and general Schneier threads is the only reason you two are in it. Much of our previous discussion on many topics was here presumably so that others would read it & be inspired to solve real world problems. This is, admittedly, a space hog of a topic that doesn’t seem to go away. A private forum or discussion group would be better for most of the conversations to save space for others on this blog. (See point 1, though.)

Wael July 19, 2012 10:46 PM

@ Nick P

Good to hear from you again bud. You are tougher than I thought 🙂 I was worried about flooding this forum with a subject that seems unending…

Maybe I will talk about the other stuff when I land…

Wael July 20, 2012 2:45 AM

@ Moderator
Please be kind enough to delete my post with the email.

@ Nick P
The other thing is that the conversation taking place on relevant and general Schneier threads is the only reason you two are in it. Much of our previous discussion on many topics was here presumably so that others would read it & be inspired to solve real world problems.

Now you made me feel bad. You led me to believe it’s ok to exchange an email between you and Clive Robinson. You are right though. But flip flop on me again, and you will find in me a most merciless “poet wannabe”. I will post a limerick (payback is a bitch) which will start like this:

Nick P was a Security Schizophrenic from Nantucket.
😉

Nick P July 20, 2012 12:25 PM

@ Wael

“Good to hear from you again bud. You are tougher than I thought 🙂 I was worried about flooding this forum with a subject that seems unending…

Maybe I will talk about the other stuff when I land…”

“Now you made me feel bad. You led me to believe it’s ok to exchange an email between you and Clive Robinson. You are right though. But flip flop on me again, and you will find in me a most merciless “poet wannabe”. I will post a limerick (payback is a bitch) which will start like this:

Nick P was a Security Schizophrenic from Nantucket.
;)”

haha. Unexpected reaction. On certain topics the title seems accurate. On email, I agreed it would be nice. So would a forum or dedicated web site. (I’m feeling an idea brewing there.) However, as Clive is half the discussion, it is useless without his email or participation. The other statement isn’t necessarily contradictory: summaries, conclusions or later models could be posted in public forums like this during relevant discussions. Options are entirely on this forum, partly on this forum, or entirely private. Again, I’d be fine dropping publicity if we got Clive in on it. That’s the real limiting factor.

Wael July 21, 2012 12:40 AM

@ Nick P

Again, I’d be fine dropping publicity if we got Clive in on it. That’s the real limiting factor.

Let’s keep it public until someone complains. I doubt we’ll go anywhere if we went private.

(I’m feeling an idea brewing there.)

I was hoping you “feel me” first 😉
I also can think of some ideas, but they will add to the work load of the Moderator.

Wael July 21, 2012 1:31 AM

@ David

Somehow I missed this:
As an off-the-wall suggestion, how about embedded in the journalist’s elastic knee bandage
Can you elaborate a bit? Sounds interesting…

Clive Robinson July 22, 2012 4:16 AM

@ Wael,

Please don’t post large, rude or otherwise inappropriate poetry, limericks, songs, etc. As Nick P will confirm, our host has told one or two of us off before for such behaviour (myself included). And as it’s his blog, I’m minded to follow the rules.

Anyway back to the security question of C-v-P or whatever those discussing wish to call it.

Firstly, please do not make the mistake of thinking very few people read the comments… Many “non-commenters” read this blog and, as we know, one or two people’s past comments have been cited in published papers etc. Further, if you read academic papers you will find that many opinions expressed in them have come into line over the years with the thinking of people first expressed on this blog.

One of the reasons I post here is that not only can and do many others read here but, if people don’t agree, they can say so without fear of the consequences that other more academic processes would stifle. Likewise they can also ask questions and give other viewpoints. In return for this freedom I say feel free to use the ideas etc. I post, and if you do, cite me as a politeness. Oh, and if you ever meet me or Bruce, buy us a drink (even if it’s only a cup of tea) to say thanks.

Oh, and with regards to “blocking the blog”, I generally wait for the “on topic” thread traffic to die down or stop if the comment is effectively becoming “off topic”, and that way cause minimal disruption.

As for email, one reason I don’t use it in a more general way is that I have in the past (the old hotmail account amongst others), and I tend to end up having revolving email addresses due not to spam but to people who for various reasons wish to actively hunt me out (I like my peace and quiet to think, oh and to have family time etc…).

But now back to the subject proper. There is a reason why I treat C-v-P more as an ideas/talking point than a hard model: it’s simply that it may well never make it as a long term model due to changes in the underlying technologies. To see why this might be, ask yourself:

What will happen when we hit the buffers on the effective “doubling up” of system power every software revision cycle [1]?

Obviously this will, as a minimum, act as a bit of a “hard limit” on the attitude of software developers with fixed revision delivery cycles to the more outlandish feature requests from Marketing. That is, in the “single” CPU market they will have to start looking at writing code that is considerably more efficient, and also lacking those outlandish bells and whistles of marketing wish lists, which is where a lot of hidden vulnerabilities hang out [2].

In essence some of these insecurity issues which gave rise to the ideas of C-v-P might with a small probability (OK, vanishing to nothing) be consigned to the dustbin of history (that is, “if all” software developers actually start taking a proper engineering viewpoint). However, as we can see with “multiple core” hardware development, the current favoured solution of the hardware manufacturers is to find ways at almost all costs to maintain Gordon Moore’s observation (no, it’s not a law and never was, except in the minds of journalists and their readers).

So my view is that the hardware manufacturers will continue down the “multiple core” route either for some time or as an end objective to the problem. If the latter, then they will have to start giving up less well used CPU features at some point to make space for more cores “on the chip” [3]. This will in turn lead to a simplification of the CPU cores, which we have already seen Intel do by going from the pure CISC of the early x86 line to a RISC core with a complex instruction interpreter wrapped around it. Likewise, historically, with many CPU families the architecture of CPUs was switched to Harvard from von Neumann to allow for instruction pipelining and later for instruction and data caches, the von Neumann “joining of the data and instruction busses” happening on the periphery of the design almost as an afterthought [4].

This gives rise to a difference between Nick P and myself, simply because his viewpoint is the pragmatic “work with what we’ve got”, whereas I’m trying to “crystal ball gaze” by trying to guess which way the industry will move and point out what can be done to improve security as part of the process.

However, a fundamental difference of opinion lies between myself and Nick P over the future of software development. My viewpoint is that “code cutting” will always be more prevalent than security engineering and won’t get resolved except by making the secure route not just cheaper but also more productive. Nick goes for the viewpoint that tool chains will evolve to make more secure coding easier (which they will) but thinks that will be sufficient. Sadly the current market says otherwise, and I cannot see that changing in my lifetime without external drivers such as “lemon law” type legislation, which is very undesirable as it puts limits on everyone and turns development into a “closed shop” that inevitably becomes an “auditors race to the bottom” and thus fail (See PCI specs and rules and how the payment card industry applies the rules on what

Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure.

[1] : The doubling in system power with every software release is actually a bit odd, as there is no real verifiable reason we can see for it. Gordon Moore made an observation about transistor count doubling in a given time period, and for some reason the ability of the industry to maintain that rate of progress continued for a couple of decades. However, that is no longer the case, but other factors such as HD performance etc. have allowed this effective “doubling up” of system power to continue.

[2] : The problem with marketing wish lists is that very few people ever use the functionality provided, so there is a lot of very redundant code in many applications (such as flight simulators in spreadsheets ;-). Now generally, if the code is written in a cooperative way with the OS, the code gets swapped/paged out of memory never to return, thus conserving main memory. However, if somebody uses the feature then it either stays put or gets paged back in. The problem is that although it might not normally be in main memory, the bugs etc. that cause its insecurity are still very much part of the “attack surface”. Also, the less used a feature is, the less noise the users make about its bugs etc., therefore the longer they persist in the code base the software house pushes out to users….

[3] : There are very real limits to what is currently possible on the “chip” due to the “bottle necks” of “off chip communications” and packaging limited by some of the laws of physics. And if you look at some modern “chip designs” you will find all sorts of techniques to mitigate the comms issues, and some in the packaging designs. Some of them are fairly obvious, such as on-chip caches with techniques such as “write through, read from cache”; some however won’t work in multiple CPU systems if data has to be shared effectively amongst the CPUs.

[4] : The need for this aspect of the von Neumann architecture comes down to a basic couple of reasons: firstly, so that code can be loaded into memory by the CPU itself; and secondly, to allow for code to evolve in memory one way or another. Both of these are requirements for non-embedded OSs and quite a few more modern embedded systems. However, both are security risks and can be mitigated in multi-CPU systems simply by making one CPU do the memory management as a well shielded process via a hypervisor etc.


Clive Robinson July 22, 2012 4:44 AM

@ wael,

I occasionally say rude things about my mobile phone because it sometimes locks up or thinks the submit button has been pressed (it appears to be a bug in the keyboard driver…).

Anyway, it’s just done it again…

I was in the middle of editing my above (mega) comment when it just decided to post…

So the bit between “(See PCI specs…” and the first note below it will not make any sense.

I will get back to it a little later; in the meantime I’m going to make a relaxing cup of tea lest I decide to give this phone flying lessons 😉

Wael July 22, 2012 6:06 AM

@ Clive Robinson

Thank you for the clarification, no more “poetry”… I saw Bruce in 2003 at an RSA conference (I think), from far away; maybe one day we’ll meet. I understand your email stance as well, and I agree with it.

It’s too early in the morning for me to talk about the subject matter proper, so I’ll try to get to it later on. I am new to commenting on blogs, even though I have been following this one for a few years, so I imagine many more are just reading…

One question: I often see others say “@ soandso” said rather than “soandso” said. I understand the “@” at the beginning of a line, but not when referring to or citing a person. Does it have to do with filtering an RSS feed?

David July 22, 2012 6:24 AM

@ wael

the <@ name> format has been around for as long as I can remember (at least 18 months!) in a number of places, not just Bruce’s blog. It’s probably a simple variation of email addressing, but beyond that, I have no idea.

David July 22, 2012 6:26 AM

grrr… I used diamond brackets… which were promptly stripped…

it should have said, “the {@ name} format…”

Wael July 22, 2012 6:48 AM

@ David

I understand now. You are talking about wearable electronics and hiding them from adversaries. Not crazy; it has been used before, and it has a counterpart from the distant past. I think the Romans used to shave a messenger’s head, write a message on it, wait until the hair grew back, then send the messenger to the desired recipient. Not an effective method for “emergency” news or instructions.

Thanks for the clarifications about the formatting.

Clive Robinson July 22, 2012 6:56 AM

@ Wael,

So to continue…

… turns development into a “closed shop” that inevitably becomes an “auditors’ race to the bottom” and thus fails (See PCI specs and rules and, importantly, how the payment card industry applies the rules in what looks very like a “the more money they make us, the more lenient we will be” manner, which is just another version of “Too Big to Fail” thinking).

Another area of difference is that Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure or have security wrappers placed around it. I don’t think it will unless people are forced to do so.

Currently you can put a wrapper around insecure code by putting it in a Virtual Machine (VM), but that is fraught with problems. Most code attacks these days are based on using failings in the code’s input routines to get the code to behave differently, either by injecting malware or by causing the code to jump to some other point. Putting a Vanilla VM (VVM) wrapper around the code won’t stop this; all it will do is maybe limit the damage to the VVM (doubtful if you look at the history of “sandboxes”), which is of no use if the attacker is after the application data, as the VVM will allow this to happen since it’s seen as “normal operation”. Worse, some people will just say “we can’t re-write it because we don’t have XXX…”, where XXX is any excuse that can be thought of [5].

I’m of the opinion (as I’ve often said on this blog) that supporting any old legacy code is very, very bad (there are exceptions) for a whole host of reasons, including the various races for the bottom, and worse, the way long-life embedded systems become very vulnerable to protocol fall-back attacks initiated from a Man In The Middle attack on the communications paths [6]. Surprisingly, if you check online, various secure OS designs from back in the 1960’s and 70’s actually recognised this problem, and this gave rise to the notion of “A Trusted Path” back to the kernel etc.

Thus I’m of the opinion that you should in effect force the software to be either re-written, or written in such a way that it can be properly upgraded and supported. The easiest way to force a re-write is to no longer support the old system platform. The easiest way to write code that can be properly upgraded is with an appropriately high-level “scripting language”. This has a secondary advantage: the number of bugs in code appears to be language independent and based more on the actual number of lines of code in any particular function, so the number of bugs for any given function should drop with the higher level code. Further, higher level code has other advantages in that it allows much faster code development, and it is easier to security check both manually and automatically.

There is another problem with the “use what we’ve got” philosophy: it encourages an entrenched position to form that can easily turn into a monoculture. You’ve probably heard it before: “Monocultures lack hybrid vigor and make attacking much simpler”. This has been seen in the past where “standards” were formulaic within an environment (healthcare); whilst the general level of security was raised, it became brittle, and when an attack vector was found the house of cards came tumbling down.

Another problem with “use what we’ve got” is that it almost invariably produces a “top down” approach, that is, from source code down the tool chain to executable code. Whilst this has many advantages, it suffers from “bubbling up” attacks. That is, if an attacker gets ownership of the system below the level of the tool chain, and even the OS, then you may well discover your boat won’t float.

There are several facets to this issue, but the two main ones to consider are that, in the current single large memory model, the attacker has the ability to see and modify the memory, and the OS and higher layers can easily be blinded to this. Think, if you will, what fun you can have with an insecure interface that allows Direct Memory Access (DMA) (of which there are several), or worse, one that is assumed to be secure and that allows code to execute at the highest privilege levels [7].

All of this becomes even more fun when you realise that, due to “preferred supplier” arrangements for major contracts, there might be only a choice of one or two hardware suppliers (say Dell being one), and that to optimise their inventory the motherboards are almost identical across the range, and even the optional extras are from a very limited source of supply. This is verging on a monoculture in large information-rich organisations.

There are ways to deal with “bubbling up”, but it requires a significant change in the hardware design.

One such change needed is to properly enforce hardware based segregation of processes and have the communications mediated in such a way that it can be controlled for side channels of various forms. One such trick is to encrypt all user derived data before it goes into an untrusted device; this almost entirely removes the idea of a “trigger word” based attack.
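
As a minimal sketch of that trick (assuming libsodium is to hand; the function name and buffer handling are illustrative only), the untrusted device only ever sees ciphertext, so a “matched filter” in its firmware has nothing recognisable to match on:

    #include <sodium.h>

    /* Encrypt a user buffer in place before it is handed to an
       untrusted device such as a disk controller. Note a stream
       cipher alone gives no integrity; real use would want
       authenticated encryption (e.g. crypto_secretbox).          */
    int seal_for_device(unsigned char *buf, unsigned long long len,
                        const unsigned char key[crypto_stream_xchacha20_KEYBYTES],
                        unsigned char nonce_out[crypto_stream_xchacha20_NONCEBYTES])
    {
        if (sodium_init() < 0)
            return -1;
        randombytes_buf(nonce_out, crypto_stream_xchacha20_NONCEBYTES);
        /* XChaCha20 allows the output to alias the input, so
           encrypting in place is fine.                           */
        return crypto_stream_xchacha20_xor(buf, buf, len, nonce_out, key);
    }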

I could go on at length, but one significant issue is that of the shared memory space (which is in effect the interior of the Castle), where at least one vulnerable process knows the mapping to other processes, and where other privileged code can find out simply by examining the mapping tables in that process. The best the current single CPU architecture can manage is obfuscation, which whilst currently effective will probably cease to be in a short period of time (such is the nature of software based attacks).

Thus you are effectively forced into the multi-CPU or core configuration with the ability to have hard memory and CPU segregation built in from scratch (via an independently controlled MMU). You also remove the need for the von Neumann architecture kludge, and all of a sudden things get a whole lot simpler.

Another thing that would be desirable to get rid of is preemptive task switching in the way it is currently done, as this can leave data hidden in CPU registers etc. if not properly implemented [8].

Any way enough for now I’ve just discovered I have an urgent issue to deal with that requires me to pack a few things 🙁

[5] : As an example of this, I still have a code base going back to the late 1980’s/early 90’s that I still support. Basically, the hardware it runs on has not yet been deemed “at end of life” even though it’s not repairable now, so the Availability figures are looking very bad due to the MTTR being effectively infinite (this “you can’t get the parts” is one of the problems NASA had with the space shuttle systems). Despite gentle prodding, the system owner will not make the required investment to have the hardware replaced and the code ported to a new platform and, importantly, re-written from the very early version of C (a modified Small C Compiler from Dr Dobbs) that implemented a custom scripting version of BASIC (don’t ask, it’s too long ago).

[6] : There are ways legacy code, especially in embedded systems, can be properly supported. However, it’s an entire subject in its own right; to see why, Google for my comments about framework standards and embedded systems such as “smart meters” and “medical implants” that are expected to have 20-50 year life spans.

[7] : As an example, the HD interface is almost universally trusted and in many cases allowed to do DMA, as it significantly improves performance. Both ends of the link have CPUs, and almost invariably there is no mechanism to check the state of the CPU on the HD. Worse, data gets written to the HD almost raw, thus user input can be seen by the HD CPU. If that CPU has bugs then it may well be possible to exercise them to advantage. But there is also the issue of the supply chain: the HD may already have malware on it which has a “matched filter” and is simply waiting for a trigger of a couple of thousand bits in length to get through the filter and start the fun. This fun could simply be to generate a random key and start encrypting the hard drive transparently, then on a second trigger simply forget the random key and goodbye data. Obviously there should be backups, right? But how current? And how do you stop the attack happening again? Or how about it recognises certain (effectively) hard coded places where the kernel code or other important code is stored and inline-injects a nice bit of malware, or more usefully changes some of the system configuration information…

[8] : The paper on encrypting RAM I pointed you to the other day clearly shows this issue and as noted not all the registers are privileged code ring protected…

Clive Robinson July 22, 2012 7:09 AM

@ Wael,

As far as I’m aware, the at symbol has no real significance other than that it makes a reasonable visual signpost if you are skimming through a large post etc., and I think it originated before text searching (Ctrl-F etc.) became commonly used in browsers.

It also helps where people don’t “cut-n-paste” a person’s name, or it is done imperfectly by the browser for some reason (accented characters and non ISO Latin characters can cause this, but shouldn’t these days; the browser I use has the annoying habit of translating some characters to their percent-encoded equivalents when cutting-n-pasting partial URLs).

Nick P July 22, 2012 7:20 PM

@ Clive Robinson

“Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure.”

“Another area of difference is that Nick P appears to assume that all code can be, and importantly will be, re-written to make it more secure or have security wrappers placed around it. I don’t think it will unless people are forced to do so.”

I appreciate your revised statement. I scoffed at the first. 😉

The revised statement and the VMM stuff is semi-accurate. First, the VAX Security Kernel showed the VMM approach can be highly secure if designed that way from the ground up. Second, there are other wrapper techniques that have solved different problems in the past. I’m flexible about how I wrap. Lastly, there’s the possibility of extracting security-critical functionality into robust components, leaving legacy on its own system or in deprivileged VMMs. The last is what some companies and academics have been doing recently.

The rest of your post seems accurate enough.

Nick P July 22, 2012 7:23 PM

@ Wael on @

It’s a contemporary usage of @, inspired partly by email no doubt. We’ve been doing it here for years. Like Clive says, it’s a visual cue. One recent use I have for it is to scan the “last 100 comments” looking for @ (our names). As I’m using multiple machines right now, it helps my lack of live bookmarks.

Clive’s changed his a bit, though. I used to pick at him b/c he would say @NickP instead of Nick P when referring to me in messages. It made more sense as an attention grabber. 😉

Wael July 22, 2012 10:58 PM

@ Clive Robinson, @ Nick P

I believe, and have previously stated, that both of your approaches are needed and are complementary.

We need a TCB oriented system that addresses most of the known weaknesses, and we need the “monitor”, however it is implemented, to cover the unknowns. This somewhat covers our lack of awareness of all attack vectors. Nick P is emphasizing control, and Clive Robinson is emphasizing covering the lack of awareness (unknowns).

Regarding the short term and the long term: Nick P’s approach, more or less, addresses the short to medium term, and Clive Robinson’s, somewhat, addresses the medium to long term path. It’s hard to bypass evolution or “short cut” it.

My proposal in a nutshell was: What elements do we need to achieve “Security”? I listed some: Awareness, Control, Ease of use, the concept of the Owner, etc… Once we know these elements, and “security” has been agreed on and defined, then we could look at the dynamics between those elements. How does lack of awareness affect security? Can we compensate for that with more control? If we add more resources towards control, what effects would that have on the rest of the parameters? Resources (implementation-wise now) are limited and shared among these “elements of security”, and may vary from one implementation and use case to another. Once those dynamics were understood (the high level model), then we could go a level down and understand the limitations of HW and SW implementations. It is at this stage we will encounter what Nick P and Clive Robinson are talking about. This was my approach.

Clive Robinson June 21, 2014 10:40 PM

@ Wael, Nick P, and others,

Bearing in mind this thread started a year before Ed Snowden’s revelations, anyone want to comment on how the above comments have held up, been proved, etc.?

Nick P June 21, 2014 11:01 PM

@ Clive

In short, my methods would’ve stopped or limited many software attacks. The NSA, having co-developed those, knew this and focused plenty of effort underneath the software. The physical separation, non-DMA, etc techniques I promoted in other threads would’ve beaten many of their other attacks. Yet, they were quite cumbersome and expensive to use in practice. Clive and I also acknowledged EMSEC threat. The TAO issues my methods didn’t address adequately are attacks on chip and firmware, along with highly sophisticated peripheral attacks.

My new methods are able to address most of those issues along with reducing the labor on secure software development by putting heavy lifting on hardware. Multi-year, $25+ mil projects become months to a year at hundreds of thousands to $2 mil initial development. That ignores the cost of developing the supporting SOC’s, of course. I estimate that at several million to several tens of millions. However, the same SOC (esp FPGA prototypes) could be reused in many use cases where insecure chips would’ve been used. Whether economical or not, it should have plenty longevity potential.

I’m also promoting a fall-back to old school methods of doing things such as client-server, timesharing, and console apps wherever possible. The combination of lower complexity and decades of experience dealing with their issues give greater potential to secure these. It’s consistent with my old rule that “tried and true is better than novel and new.”

Wael June 21, 2014 11:47 PM

@Clive Robinson,

Bearing in mind this thread started a year before Ed Snowden’s revelations…

The comments have proven to be true. It does not surprise me a bit. It’s expected from the NSA and their counterparts in every country — there is a reason they’re called spooks…
What surprised me is the level of cooperation between these “counterparts”…
I’ll watch a couple of “Twilight Zone” episodes and elaborate later… Need a break 🙂

@Moderator: A constant string for the question adds only a marginal barrier to spam, but it is a step in the right direction.

Wael June 22, 2014 1:33 AM

@Clive Robinson, @Benni (see footnote [2])

Funny! I chose a relevant episode by chance! (Is anything random?)
In the words of Rod Serling…[1]

You walk into this room at your own risk, because it leads to the future; not a future that will be, but one that might be. This is not a new world: It is simply an extension of what began in the old one. It has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advancements, and a more sophisticated approach to the destruction of human freedom. But like every one of the super states that preceded it, it has one iron rule: Logic is an enemy, and truth is a menace.

The chancellor, the late chancellor, was only partly correct. He was obsolete, but so is the State, the entity he worshiped. Any state, any entity, any ideology that fails to recognize the worth, the dignity, the rights of man, that state is obsolete. A case to be filed under “M” for mankind—in the Twilight Zone

So no, it surprises me not. I am thinking the next step is control — if we’re not already there…
Maybe I had better stick to areas that I “think” I understand, like security and our fun discussions on “securing” systems and communications… (and limericks, of course [2])

[1] After watching the episode on TV, I decided to transcribe some of the text. Then I decided to see if someone had already done the work. Everything is on the internet — amazing! Hard work pays off tomorrow; laziness pays off today 🙂 It’s on YouTube too, but I won’t give the link. If you care to watch it, you’ll find it.
[2] @Benni
You are German; what do you think of this masterpiece? — Pass it on to your buddies 🙂

Wael June 22, 2014 2:37 AM

@Clive Robinson, @Nick P
So, to continue on the technical stuff…
Once those dynamics were understood (the high level model), then we can go a level down and understand the limitations of HW and SW implementations.

The physical separation, non-DMA,

The dynamics, Nick P, are not well understood. How do these two mechanisms fit in the picture? What principles do they map to, and what final goals do they achieve? Keep in mind that I am not debating their viability.

Wael June 22, 2014 1:42 PM

@Clive Robinson, @Nick P,
What elements do we need to achieve “Security”? I listed some: Awareness, Control, Ease of use, the concept of the Owner

Clearly our systems are insecure due to lack of the stressed “elements”.

Clive Robinson June 22, 2014 3:08 PM

@ Wael, Nick P, and others,

Security is a human concept, which differs depending on viewpoint and even language spoken [1]. It is often like safety: a thing that is difficult to describe without waving your arms and mumbling “you know…”. It is thus not particularly amenable to characterisation by traits, let alone algorithmic enumeration.

Thus in selecting Elements we should start with human traits, such as humility, knowledge, honesty etc. That is first investigate human limitations and advance from there.

Many years ago I had the job of explaining the limitations of computing to those of a quasi-bureaucratic mind set, and the first hurdle was getting them to understand that “electronic brains” were not comparable to “human brains”, despite the expectations of AI believers and the misconception of the Turing Test.

I used a simple acronym of “RIP-ICE” to act as a starting point.

Where,

Computers worked with,
R = rules
I = information
P = processes

Humans understood,
I = integrity
C = communications
E = entities

That is, computers used a series of processes to apply rules to information that was presented at the process input from a previous process (source), and the resulting processed information was delivered at the output to the next process (sink) in the chain.

Humans, however, understood entities that were “persons legal or natural” who might or might not follow morals, be they codified or accepted in a functioning society; if they did, they were considered to have integrity and thus could be communicated with as “Trusted” individuals.

The discussion would then move forward to what “Trusted” meant, and why in what we now call ICTsec it usually has what is in fact the opposite meaning to what humans normally assumed it to have. This would then lead through the CIA triad, to try to get them to realise what was involved as a minimum for Security.

Even at the best of times I felt like Sisyphus, knowing that no matter how well I’d pushed the understanding to a higher plane, I would find in short order it had rapidly resumed its starting state and the effort would begin again 🙁

It’s why I now liken the idea of a “Security” process to that of a “Quality” process, and endeavour to show why.

[1] Some languages lack separate words for “security” and “safety” and have just one word (see French, for instance). Obviously this has an effect on the way people think about both safety and security.

Wael June 22, 2014 4:54 PM

@Clive Robinson,

Security is a human concept,…

Agreed! We need to start with the “higher plane” rather than the other way round. Your OT regarding “Rational Thinking” comes across as a prelude to this post — I tend to agree with the assessment as well. Very few people are “Rational thinkers”, and I don’t imply that I am one. The collective thought process of us discussing subjects on a blog hopefully should converge to a “Rational Thinking” end goal.

Nick P June 22, 2014 7:14 PM

@ Wael

“The dynamics, Nick P, are not well understood. How do these two mechanisms fit in the picture? What principles do they map to, and what final goals do they achieve? ”

I’m glad you picked the easiest ones. 😉 The paradigm that confidentiality and integrity are based on is separation, per NSA’s Brian Snow. The paradigm of most computers is sharing of resources at many levels for efficiency. So, each part of the system that such sharing puts at risk might need an isolation technique.

First, the DMA. The problem here is that devices with DMA have full access to memory. The various secure kernels implement isolation via mechanisms like segmentation and address spaces. The existence of DMA breaks the model, as the device, although its driver is isolated, can read any confidential data and violate the integrity of critical code. Eliminating DMA devices in favor of something such as programmed I/O, or DMA with an IOMMU, knocks this risk out by applying isolation to devices.
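
To make the IOMMU option concrete, here is a toy sketch of the check an IOMMU performs on every device access (purely illustrative C; real IOMMUs do this in hardware with per-device page tables, and the names here are invented):

    #include <stdbool.h>
    #include <stdint.h>

    #define IOMMU_READ  1u
    #define IOMMU_WRITE 2u

    /* One mapping entry: the only window of physical memory this
       device may touch, and with which rights.                    */
    typedef struct {
        uint64_t dev_base, len;  /* device-visible (I/O virtual) range */
        uint64_t phys_base;      /* where it really lands in RAM       */
        unsigned perms;          /* IOMMU_READ | IOMMU_WRITE           */
    } iommu_map_t;

    /* Translate a device DMA request; deny anything outside the map.
       Without an IOMMU this check does not exist and the device can
       reach all of physical memory.                                 */
    bool iommu_translate(const iommu_map_t *m, uint64_t addr,
                         uint64_t size, unsigned want, uint64_t *phys)
    {
        uint64_t off;
        if ((want & ~m->perms) != 0)     return false;  /* rights check */
        if (size == 0 || size > m->len)  return false;  /* size sanity  */
        if (addr < m->dev_base)          return false;  /* below window */
        off = addr - m->dev_base;
        if (off > m->len - size)         return false;  /* past window  */
        *phys = m->phys_base + off;
        return true;
    }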

Second, the physical separation. The main goal of A1-class efforts was enforcing strict isolation and control over the machine. Each resource in the hardware and software had to be analyzed to prevent unauthorized sharing or modification. Risk areas even included processor caches, onboard timers, and peripheral firmware. The simplest way to eliminate much of this attack surface is to put each logical partition on a separate device connected by simple, non-DMA links as described above. Each also validates the input (and possibly its authenticity) it receives from the others. Now, you just have to hunt for injections and covert channels at the interface point. You might also use a simple guard to handle the interface. Such approaches are so much easier, as you can ignore more aspects of the system.
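
A toy sketch of such a guard at the interface point (illustrative C; the frame layout and allowed types are invented for the example), enforcing fixed-size frames and a default-deny whitelist of message types:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define FRAME_LEN 64   /* fixed-size frames: no length field to abuse */

    typedef struct {
        uint8_t type;                    /* message type  */
        uint8_t payload[FRAME_LEN - 1];  /* opaque bytes  */
    } frame_t;

    enum { MSG_STATUS = 1, MSG_DATA = 2 };  /* the only types we forward */

    /* Default deny: forward a frame across the link only if its type is
       explicitly whitelisted; everything else is dropped (fail safe).   */
    bool guard_forward(const uint8_t in[FRAME_LEN], frame_t *out)
    {
        frame_t f;
        memcpy(&f, in, FRAME_LEN);
        switch (f.type) {
        case MSG_STATUS:
        case MSG_DATA:
            *out = f;
            return true;
        default:
            return false;   /* unknown type: drop, optionally alarm */
        }
    }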

Now, I’ve moved the analysis further down. We’ve had many strong approaches to secure design of operating systems, protocols, application units and so on. We’ve also created type systems to create robust code. The problem is that existing computer architectures don’t allow typing, compartmentalization, etc by design or do it slowly. Supporting basic, flexible primitives at the hardware level gives the secure system designs something to leverage without loss of productivity or performance. Then, the software layer can do most of the rest thanks to its newly improved integrity. I also look into firmware protection, I/O coprocessors, and so on to eliminate the other risk areas entirely or partly.

All in all, I’m still applying my same framework of looking at things layer by layer, component by component, and system by system. I’m just focusing intensely on the hardware as years of studying decades worth of work into secure systems told me the hardware is the main problem.

Wael June 22, 2014 10:18 PM

@ Clive Robinson,

which differs depending on viewpoint and even language spoken

There is more to come. I failed to locate a speech by Feynman that supports this statement and adds another interesting and educational dimension to it.

@Nick P

I’m glad you picked the easiest ones. 😉 The paradigm that confidentiality and integrity are based on is separation, per NSA’s Brian Snow.

I like your reply. It’ll take me some time (déjà vu) to formulate a response in the context of this old (and almost forgotten [1]) thread, so I don’t repeat what has already been discussed…

@ Mike the goat, @ Figureitout
Your MIPS & ARM discussions fit well in this thread. Just lose the horn before you come; you’ll need to type a lot 🙂

[1] Right now I’m having amnesia and déjà vu at the same time. I think I’ve forgotten this before. — Steven Wright

Clive Robinson June 22, 2014 10:34 PM

@ Wael, Nick P, and others reading along,

Let’s look at Nick P’s statement:

I’m just focusing intensely on the hardware as years of studying decades worth of work into secure systems told me the hardware is the main problem.

Which is something I agree with and have done for just about as long as I can remember, which professionally is now over a third of a century.

The problem is that “hardware” is used by most these days as an abstraction, to tidy away something they don’t wish to think about objectively. And for various reasons marketing departments and design engineers have done their best to encourage this, presenting the likes of MIPS etc. as some figure of merit, much like 1950’s advertising used to imply that the figure of merit for womanhood was the ability to cook the perfect baked beans on toast for little Jonnie’s tea, along with ensuring the pipe and slippers were ready for Jonnie senior when he drove home in his Ford Edsel…

The truth is, hardware like software has its own “stack”, and the relations between each layer are again hidden behind abstraction and usually cover up glaring issues.

Take Nick P’s mention of DMA: it works below the CPU layer on which the software designer of the OS has built the OS. The problem is that, like I/O, it’s one or two abstractions down, and thus is almost but not quite out of sight… Which makes controlling it hard.

There is nothing inherently wrong with the ideas behind DMA, in the same way there is nothing inherently wrong with malloc and friends. Used correctly, with the right form of control, it does a very efficient job out of sight of the programmer’s stream of conscious effort.

But that’s the rub “Whilst the cat’s away the mice will play” and mice leave droppings all over the place…

The issue is “control” or the lack of it, and further what sort of control you are given.

If you look at DMA, like malloc it is dependent on lower layers, and in the process of “specmanship” things have been left out. And it is in these missing items of control where problems arise.

In both DMA and malloc, one glaring omission from the security aspect is a working “clear” before memory is released into the general pool for reuse. Thus both do end runs around security unless the application level programmer, four or five layers of abstraction above, makes the conscious effort to clear the memory at that level…

As you are probably aware, the penalty in time is eye watering, and thus negates the point of using DMA in the first place…
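
To show what that application-level effort looks like, here is a minimal sketch of a clear-on-free wrapper (assuming a platform with explicit_bzero, e.g. glibc 2.25+ or the BSDs; the wrapper names are illustrative). The tracked size is needed precisely because malloc gives the programmer no portable way to ask how big a block is:

    #include <stdlib.h>
    #include <string.h>   /* explicit_bzero: glibc >= 2.25 and the BSDs */

    /* Allocate with a hidden size header so we know how much to clear
       later. NB: the header trick is shown for brevity; production
       code must also preserve max_align_t alignment.                  */
    void *secure_malloc(size_t n)
    {
        size_t *p = malloc(sizeof(size_t) + n);
        if (!p) return NULL;
        *p = n;
        return p + 1;
    }

    /* Clear before release: explicit_bzero is guaranteed not to be
       optimised away, unlike a plain memset just before free().       */
    void secure_free(void *ptr)
    {
        if (!ptr) return;
        size_t *p = (size_t *)ptr - 1;
        explicit_bzero(p + 1, *p);
        free(p);
    }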

If you look far enough back in computing history you will find that very early forms of memory did have a clear function, but it quickly got the specmanship treatment and was, in the name of efficiency, “optimised out” in all but the CPU registers, and in some designs even there, which is why you see the likes of “XOR acc,acc” in assembler code to do what should have been “CLR acc”. In an almost Kafkaesque way, some CISC systems brought clearing ranges of memory back into CPU instruction sets, but it could only be done in a grossly inefficient way compared with doing it down in the lower hardware layers where it should have been.

Thus specmanship has removed control for what are essential security tasks, not just at higher layers but at all layers, and the cost of putting the required functionality back many layers up –not just the hardware stack but the software stack that sits atop it– is immense. And as a result it causes conflict at the human level, where marketing push management to push the software coders to get fast times in functionality so that “usability” at the HCI level is unaffected. The casualty of such conflict is almost always that which the end user does not see until it is way, way too late and appears on the TV news as “XXX has lost ten million credit card records” or similar, or worse, does not see at all as the company folds because it has been undercut by foreign competitors.

Like Defence, you never know when you are spending enough on security; you usually only find out, too late, when you have not spent enough and your country is invaded or your company lost.

Unfortunately, in both cases there are many charlatans who will take money with promises of defence but at best only deliver illusions.

The real solutions are to go back and correct mistakes of half a century ago, but I don’t think you can sell the idea to those who hold the purse strings…

Thus you have to “throw the baby out with the bath water” down in the hardware layers and get rid of very useful functionality such as DMA, to remove the security weaknesses it has as a side effect of years of specmanship.

At the higher software layers of malloc, at least it is possible, at some considerable cost, to add back in the missing functionality required for security. But you can bet it won’t happen for all but one or two applications, thus users will use insecure apps to do security related activities and “the ship will be lost for the sake of a ha’penny of tar”.

To see why, think of editors/word processors with auto-save etc. They will make heavy use of malloc or equivalent; when auto-saving they will hide the likes of “undo history” in files or fragments of deleted information at the end of buffers etc. Users will use what they are familiar with for “productivity”, thus they will use the same app to write a stationery request as they do for writing a highly confidential memo relating to IP or other security related matters. And the editor/WP will happily, during auto-save, leave fragments in plain text all over memory, which will end up on the hard drive, wherever it might be. And trying to fix the problem with FDE only works when the user is not using the system, which means malware gets to see plaintext just as much, if not a lot more, than the user does.

Thus trying to correct previous security faults with a top down approach is almost certainly going to fail at some point; then it’s just a numbers game as to if and how badly you get hurt…

Thus we are in a world of hurt where the required bottom up fixes just won’t happen due to “market forces”, and top down fixes won’t work due to “market forces”. Thus the only solution is to start again, if and only if “market forces” allow you to…

Figureitout June 23, 2014 1:00 AM

Wael RE: MIPS && ARM
–You silly, that wasn’t me; but I am working on an ARM-based chip at the moment… Beyond the technical aspects of a secure chip (watchdog timers, how many clocks, peripheral support, interrupts or not, something completely freaky and different..), and what features should or shouldn’t be included (for testing purposes too: what kind of safety, beyond fuses and ferrite beads, should be included to protect against stupidity leading to piles of blown and bricked chips..), I’m just at a loss thinking operationally how this can realistically come together… Too many holes, it seems. Not that I’m giving up, never; it’s just that the more I think on it, it seems impossible… and something stupid will break the security.

Clive Robinson
–What about studying how calculators delete memory every time you “clear” the screen, and how in a graphing calc you can factory wipe the memory too? Also, I’ve heard Microchip PICs offer better protection (for some embedded code) than what I’m seeing on Atmel AVRs (the security bit can be flipped in Atmel Studio and protection is off). Going to look into what PICs do in that respect. Of course, if someone malicious has your chip hooked up to a programmer they have probably already put a gun to your temple…

Gerard van Vooren June 23, 2014 1:31 AM

The response of Theo de Raadt about BoringSSL is positive [1]. He also thinks the two projects are not direct competitors, because Google’s project is more focused on breaking with backwards compatibility.

@Benni. Interesting to see that the ROP feature is still in the code. Maybe you could create a bug report and send it to the Chromium team?

[1] http://marc.info/?l=openbsd-tech&m=140332790726752&w=2

Clive Robinson June 23, 2014 7:32 AM

@ Figureitout,

First off, you are not going to produce a secure computer first time out of the gate. That’s not to say it won’t be a lot more secure than other things around, it probably will, but even the best of engineers usually have a make, measure and refine cycle that goes around a few times before certifying engineers don’t find problems that are of concern.

As an example, you have an output line; you need to de-couple and bandwidth limit it, and then reconstitute it as a digital signal. There are various ways to do this; the simplest is an opto coupler followed by a lowpass or bandpass filter, followed by a circuit to sharpen up the edges and adjust the levels again. So far so good, only the filter gives rise to time delays which, with analog components, change with all sorts of circuit conditions including the power supply state, earth currents and a few others as well, and that can cause information to leak through the digital signal restoration circuit. So you not only need to change the circuit from a simple level detector with hysteresis to a window comparator with a more complex than usual state change, you also need to reclock with a biphase signal and shift register to remove information from the edge of the transitioning signal…

In amongst this you will need independent power supplies with their own filters, and electrical shielding to prevent cross coupling by capacitive or inductive means, as well as ensuring signals don’t get leaked by ground current faults etc. etc…

It’s a fun game with a learning curve steeper than a fast logic rise time; however, people do achieve the objective given a few turns of the cycle.

Figureitout June 24, 2014 12:11 AM

Clive Robinson
–Appreciate the advice as always; sounds slightly above what I’m capable of yet, but I can nearly see it; probably will try to add it on after a simple build.

I know I won’t, I expect failure, things to go wrong, always, something always to improve.

Also, speaking of strongly filtered/shielded power supplies, I just witnessed the other day the sheer power of mother nature and how she can make you feel like an ant…Lightning bolt blew power supply boards and 1 computer and some other equipment; and one pair of pants (kidding). I’ve never seen nor had that happen to me before…Now I’m assuming no one has that kind of firepower at their disposal (lol…right…?), but it would be interesting to see what could withstand it head on. And I’m pretty sure the noise I hear on a board is an inductor so I’ve got an ear out for that too.

Wael June 24, 2014 1:07 AM

@Clive Robinson, Also starring @Nick P
Continuing on your:

which differs depending on viewpoint and even language spoken

And my reply:

There is more to come. I failed to locate a speech by Feynman that supports this statement and adds another interesting and educational dimension to it.

Took me a surprisingly long time to find it! I believe this is the reason we sometimes have a hard time understanding each other. Our communication is also affected by how each of us represents information internally (how our neurons get wired during the learning process.) Little wonder security means different things to different people, and as you would normally add, at different times!

You see, I am thinking of a construct that is represented as a blue box in my head. I tell @Nick P “models” to represent the hypervisor “thing” in my head. He translates it to something that looks orange inside his skull, and then tells me Orange book, A1 — then we spend two years talking about the same thing from a different perspective. I’ll never convince him that blue is orange, and the reverse is true.

Last (but not least [1]), we’re even! I found a genius who supports your idea, just like you found that bloke who agrees with me on “randomness” 🙂

[1] Once upon a time, someone was presenting some material in a meeting. He listed a few items in descending order. When he came to the last one, he said: “last, but not least” blah blah blah. I thought, wait a second! They are in descending order of significance! The last item must be least! But I didn’t want to open a can of worms…

Clive Robinson June 24, 2014 2:24 AM

@ Wael,

First though I will have to convince everybody else that my shade of puce is the same as his shade of deep and meaningful rose red 😉

It shouldn’t be that hard it just takes a little time….

As I’ve said here once –or twice– before, if N people view an event there will be N+1 realities, where the +1 is what really happened but nobody could see due to their limited viewpoint. Which is why you can slowly talk the N people around to a new reality that did not actually happen but in their minds matches their memories of the event, which you have slowly changed 😉

Wael November 29, 2015 1:33 PM

Summary: Castles vs Prisons Aka C-v-P

Contention points: @Nick P rightly claimed that the comparison between his recommendations and @Clive Robinson’s idea of a Prison architecture isn’t valid and is misleading, because neither architecture is pure: both Nick P’s and @Clive Robinson’s ideas and recommendations contain both Castle and Prison constructs.

My position (just keep in mind that this is an evolving “thing”):

  1. Define what you mean when you say you want to develop a “Secure” system
  2. Identify where security weaknesses arise (concept, architecture, implementation, etc.): Womb-to-Tomb security, Soup-to-Nuts security, or Inception-to-Decommission security
  3. Know your weaknesses
  4. Know your weapons (the ongoing joke was the ever expanding Arsenal)
  5. Know your enemy and the tools they use, but when designing your system, don’t wear an attacker’s hat. Generally speaking (meaning there are exceptions, and using exceptions as pillars isn’t the best design philosophy), you could be a very accomplished black hat but a lousy security designer. The reasons were given earlier; the idea was to defend against classes of attacks, rather than instances of attacks.
  6. Know what principles of security need to be adopted for a system to achieve the defined security characteristics (confidentiality, integrity, availability — at the highest level of thinking). But also target other “Security” issues, for example adware, etc…

So rather than saying the following:

  1. Let’s increase the key size
  2. Drop passwords and use “Biometrics”
  3. Use 2fa or mfa (Two factor authentication and Multi factor authentication)

I am proposing we come up with a model that implements the security principles we know. For the purpose of enumerating them (not necessarily exhaustively), a list follows further below:

Using models isn’t a new thing. We use models and approximations all the time. Analogies are also meant to clarify an unknown concept to some audience via a known concept. For example, water flowing through pipes is often used to clarify electricity flowing in wires. Transformation from one domain to another is another tool that sometimes simplifies calculations or aids in visualizing some aspect of a system under consideration. An example is the transformation from the time domain to the frequency domain or vice-versa. These concepts are used in various fields.

The models that were discussed previously were the Castle and the Prison. These are their initial characteristics:

Castle: Serves to protect objects on the inside from events on the outside
Prison: Serves to keep objects inside the prison from leaving without due process

The objects above operate on data assets or “information”. This information can be data or keys. At the highest level, a Castle is a model of intrusion prevention and detection; a Prison is a model of data leakage prevention and detection. These two models can be used as building blocks to achieve certain system security characteristics — but they aren’t sufficient to represent the concepts, principles and branches of security (authentication, authorization, accountability, etc…) Other constructs are needed.

In addition to the rudimentary “acting on data objects”, the Prison can actually act on the components rather than the simple data objects. A CPU, a memory range, a controller, … can also be “imprisoned”, and that’s what I find novel from a conscious design perspective, even if such implementations were used or proposed in the past. How to imprison components, and why, is a discussion that took place in the past without reaching a satisfactory conclusion. There are a few concepts that were also proposed: the warden, probabilistic security, voting mechanisms, among a few other related rules of thumb, factual axioms, and “security principles” that weren’t clearly defined or exhaustively listed.

Here are some security principles / rules of thumb / axiomatic truths (a short sketch of two of these in C follows the list):

  1. Least Privilege
  2. Least Authority
  3. Check at the gate
  4. Default deny
  5. Trust no one
  6. Fail hard
  7. Fail fast
  8. Fail safe
  9. Segregation of roles
  10. Separation of duties
  11. Even if you have to trust, do verify
  12. Reduction of the surface of attack
  13. Expansion of search space
  14. Defense in depth (an old one and not sufficient! Nowadays attacks are mounted in depth, width, and height)
  15. Avoid being a target
  16. Keep tight lips, eyes and ears wide open (comes from the security definition)
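
To make two of these concrete, here is a minimal sketch in C of “default deny” combined with “check at the gate”; the subjects, objects and function names are all invented for illustration:

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical access check: default deny, enforced once at the
       "gate" (the single entry point) rather than scattered around. */
    typedef struct {
        const char *subject;
        const char *object;
    } request_t;

    static const char *allowed[][2] = {   /* explicit allow-list */
        { "logger",  "audit.log" },
        { "updater", "firmware"  },
    };

    bool gate_check(const request_t *r)
    {
        for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
            if (!strcmp(r->subject, allowed[i][0]) &&
                !strcmp(r->object,  allowed[i][1]))
                return true;
        return false;    /* anything not explicitly listed is denied */
    }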

What I was proposing is a methodology of design, not a comparison between two architectures, because both architectures are needed, as I mentioned on more than one occasion.

The general idea is:
0- Define the ideal Models, then construct a security ecosystem
1- Model -> Pattern -> Principle -> desired security level achieved
2- Verify through pen testing, data flow diagrams and threat modeling
3- Complement with OPSEC user / design manuals

@Nick P, @Clive Robinson,

Does this make sense, or am I still having a pipe-dream? If it’s the latter, then I’ll drop this subject altogether and revert to my obnoxious jokes 🙂 Then again, maybe this topic has been discussed enough and reached its end of life, and we can move on to other topics. But I still have a feeling it will pop up once in a while. Your call, I’m OK either way!

I’m not correcting any spelling or grammar mistakes…

Clive Robinson November 29, 2015 10:38 PM

@ Wael,

Some things you left off the list, that are easy to add:

2+ Least Resources.
8+ Fail Long.
12+ Least Complexity for security.

However there are also more general things to consider, lots of them…

For starters, the fundamental viewpoint on which the models are built… From some of your unrelated comments I’m guessing you have done some electronic system design, and thus modeling of unknown “black box” and “two port” systems.

The important idea is that you find all the system ports, be they nominally input or output, but IMPORTANTLY consider them not just bi-directional but ALSO assume all ports are “coupled” in some way, and thus having an effect on all other ports unless specifically isolated in some way (such as a circulator and dump load etc. in RF design). Further, all parts of the model are also “generators” / “radiators” of information normally treated as “noise”, “loss” etc., but which have security implications.

All of these have their analogs in digital systems hardware and software, something that rarely if ever gets taught to “CompSci” students, or even mentioned in passing in “testing” or “security” course modules.

It might be that the realisation that most basic computer instructions are at least “five port”, and each port also two Shannon channels, is thought to be either “beyond the average student” or “too advanced / specialized for the course” (which is odd, considering most physics and electronics graduates understand it implicitly or with minor prodding).

But we also need to consider time in a broader sense: not just “What is deemed secure today is not tomorrow” but also why. In part it’s because “Technology Advances” as well as “Knowledge Advances”. Whilst “knowledge” advances in unpredictably odd ways, “technology” tends to be more predictable, for a number of reasons, not least being “realisation time” and the fundamental laws of physics etc.

Thus do we set our models and reasoning in what is concrete today, or in what is foreseeable? It’s an aspect of CvP that gets overlooked. Right from day one I stated the opinion that the future was parallel, and that was implicit in my reasoning. Current systems are slowly becoming multi-core, but the designers try their best to make the system “look and feel” single core for programmers who have trouble thinking in terms of more than one thread of execution at a time. Thus Castles are “current” to the limited-future single-core context-switching paradigm, whilst Prison systems started as “future” massively-multiple-core non-context-switching designs, fully scalable in a parallel architecture.

Whilst it is possible to emulate a prison system in a castle system you don’t get “sweet spot” advantages.

For example, somebody mentioned a potential “Russian OS” to replace MS OSs the other day. One of that OS’s stated functions is in effect to implement a version of the prison idea using what the designer called “objects” –and I call tasklets– where the sole security idea mentioned was to not use pointers outside of the objects –I use MMUs and “letter box” buffers to stop memory access issues–.

An interim technology advance @Nick P mentions is “tagging”. It’s actually been around for longer than the IAx86 architecture, and it’s a technology that almost always gets sidelined. The reason is not hard to find, and it’s in effect political, not technical. Put simply, it offers no “normal” performance advantage for an increased cost in technology that few will ever use. Thus it never gets past the marketing department, who figure it has no buyer premium, thus is a bottom-line cost without benefit.

But marketing and management also know it’s going to be a “productivity nightmare” in the application development industry. Thus it will be “turned off” by default, and if the OS and applications build in the ability to enable it, they will “see a performance hit”… So a “tough sell” at best, even in these more “security conscious” times.

In general I’ve nothing against tagging; it can be built into the “cell” on-chip local memory without issue, and the MMU can easily be augmented to deal with core system memory. Thus on chip it’s a “real-estate issue”, and due to other considerations such as “heat-death” the extra space it takes up would probably not be used anyway (it’s why we are seeing on-chip memory growing very quickly).

The problems with tagging start when you “go off chip”, with extra pins and memory columns or segregation to support it. The problem with tagging is threefold: granularity, basic container types, and the actual implementation choice with its side effects.

Currently we have tagging information in “page tables” for read, write and execute function limitation, and the real system core memory remains untagged. That leaves open a security hole, in that it’s possible to change page table contents due to over-complexity in the IAx86 design, giving unexpected “Turing completeness” in the interaction between state machines. The first logical step to close this sort of hole is tagging the actual memory itself, on either a region or word basis. This means you need to add additional tagging bits to the memory, either in the address decode logic or attached to each memory word by adding tag columns, thus widening the memory. Region tagging, however, lacks the fine granularity needed for complex data types.

Both methods might be fine for a very limited range of data containers –types with access metadata– but can rapidly get out of hand and cause problems. Worse, they both have another potential security hole, based on tag interpretation. Whilst access metadata –read, write, execute etc.– is generally a below-CPU-level issue, data types are a CPU and above-CPU issue, and impinge well into the software layers. Thus in effect you have to have “data types” “baked in”, or subject to software interpretation on a case by case basis. That is, when not baked in, tag 0xB9 might mean one data type to one application and another data type to a different application. This effectively opens another security hole via page table tricks.
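
As a toy illustration in C of word-level tagging with access metadata (the tag values and names are invented; real tagging would live in the hardware load/store path, not in software):

    #include <stdint.h>

    /* Toy model: each memory word carries a tag of access metadata.
       Tag bits here are invented for illustration only. */
    enum { TAG_R = 1, TAG_W = 2, TAG_X = 4 };

    typedef struct {
        uint8_t  tag;    /* access metadata "baked in" to the word */
        uint32_t value;
    } tagged_word_t;

    int checked_load(const tagged_word_t *w, uint32_t *out)
    {
        if (!(w->tag & TAG_R))
            return -1;          /* tag violation: fail hard */
        *out = w->value;
        return 0;
    }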

But on the assumption you do “bake in”, you then get into “complex data type” issues that reach up through the tool chain, right up to the medium/high level language level. That is, if you have a complex data type, how do you tag the constituent parts and check their overall consistency?

A simple example is a very long integer: if it’s unsigned, all the component word types can be unsigned int, no matter how long the compound type is in words. The same is not true for signed ints, where you need to differentiate the Most Significant Word from the other words. This has a knock-on effect in how the likes of shifting takes place etc… Similar issues apply to floats and complex numbers.

Basically, any compound word type beyond an unsigned int has side effects, and due to the likes of endianness, word size and boundary alignment, reaches up the tool chain, often violating the “Principle of Least Surprise” for programmers, and also has all sorts of side channel issues.
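
For instance, an arithmetic right shift by one over a multi-word signed integer, stored most significant word first; only word zero carries the sign, which is exactly the consistency a word-level tag scheme has to capture (a sketch in C):

    #include <stdint.h>
    #include <stddef.h>

    /* Arithmetic right shift by one of an n-word signed integer,
       most significant word first. Only word 0 carries the sign;
       the rest behave as plain unsigned words. */
    void asr1(uint32_t *w, size_t n)
    {
        uint32_t carry = (w[0] & 0x80000000u) ? 1 : 0; /* replicate sign */
        for (size_t i = 0; i < n; i++) {
            uint32_t next = w[i] & 1u;                 /* bit shifted out */
            w[i] = (w[i] >> 1) | (carry << 31);
            carry = next;
        }
    }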

The only way around this is to split your programmers into two types: those who work at a sufficiently high level that they are above the issue, and those who are sufficiently knowledgeable to work in the issue-area layers. It’s just one of the reasons the Prison model advocates “scripting” applications from pre-written tasklets -v- “secure programming” of the tasklets and below in high-level or lower languages.

The advantage of scripting tasklets together is that it is very much faster to produce an application –it’s a well understood RAD process– but the application is not in general efficient –which from a security perspective is good– which overall means “faster to market” and better “code reuse”, which management want to hear, but not good on “specmanship” scales, which marketing might not want to hear. However, “specmanship” is becoming less and less relevant these days, where users find response times too slow only under certain circumstances and more than adequate under most others.

There is quite a bit more to add, especially with the idea of tasklets being designed to work under the security hypervisor that arbitrates what resources the “cell” has, and what the tasklet can see or do in the way of communicating with other tasklets, including the OS.

As you note it’s easily enough for a book.

Least Resources
The prison is designed to give any task/object the most minimal environment to do its job in. This means that malware does not have “room to hide”.

Fail Long
The thing about “side channels” is you cannot prevent them; they exist whenever you communicate information in any form in any way. The way to deal with them is to stop down their bandwidth, such that their ability to transmit data is as small as you can make it. The reciprocal of bandwidth is time, thus on detecting an error, “fail long”: this makes the bandwidth drop to as close to zero as you can get at the time. In addition you want this long time to be unpredictable, so it needs to actually be “fail long + random time”, where the random component is, say, 1-2 times the base delay.
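
A minimal sketch in C, assuming POSIX nanosleep, with rand() standing in as a placeholder for a proper entropy source:

    #include <stdlib.h>
    #include <time.h>

    #define FAIL_BASE_SECS 5

    /* On error, "fail long": base penalty plus a random 1-2x extension,
       collapsing the side channel's bandwidth and making the delay
       unpredictable. rand() is a placeholder only. */
    void fail_long(void)
    {
        unsigned extra = FAIL_BASE_SECS + (unsigned)rand() % FAIL_BASE_SECS;
        struct timespec t = { .tv_sec = FAIL_BASE_SECS + extra, .tv_nsec = 0 };
        nanosleep(&t, NULL);
    }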

Least Complexity for Security
This is a consequence of the number of relations between parts of a system, and it sounds simple but is actually quite complex. Beyond trivial operations (base arithmetic and logic) you need a minimal number of combined operations to accomplish any given task. The question is how many, and how you arrange them to give minimal complexity for security. Operations tend to cluster in function or communication. Thus you can see minimal tasks that communicate, and what you want to stop down is what goes on in the way of communication. Not just in “wanted connections” and “unwanted connections” but also in “both directions”. That is, you are looking for segregated/separated functional units with minimal communication that is monitored IN BOTH DIRECTIONS, that is, not just wanted information flow but errors and exceptions as well.

Error/Exception channels
One thing most programmers do is think about how things go right, not how they go wrong. That is, they do minimal error and exception handling at best. The result is easy side channels for attackers to “reach back into the system” due to “transparency”.

Efficiency-v-Security
This is something many people either “do not get” or “ignore” for a variety of reasons. As a rule of thumb, the more efficient a system is, the more useful the side channels are to an attacker. That is, the system becomes “transparent” in both directions. Often in secure systems “transparency” is only considered in the “left to right” or forwards direction, thus “reachback” or “reverse transparency” attacks happen as a subclass of “injection” or “susceptibility” attacks.

Wael November 30, 2015 1:27 AM

@Clive Robinson,

Some things you left off the list,

It’ll take some time to integrate all of this in a coherent manner. I’ll comment on the easy ones now.

you have done some electronic system design

Yes, a while back, and I’ve forgotten most of it, in addition to not keeping up to date on new advancements in the field.

Further, all parts of the model are also “generators” / “radiators” of information normally treated as “noise”, “loss” etc., but which have security implications.

And there is the “shield” component or model.

It might be that the realisation that most basic computer instructions are at least “five port”

I’m not sure I understand!

The only way around this is to split your programmers into two types: those who work at a sufficiently high level that they are above the issue, and those who are sufficiently knowledgeable to work in the issue-area layers

This is already done today. There are application programmers, systems programmers and embedded programmers …

thus on detecting an error

Need another model for the “error detection control”, an accompanying model for the “policy engine”, and an additional model (think diagramming) for cutting off communication (similar to a fuse), as well as a quarantining “control”.

Figureitout December 1, 2015 12:20 AM

Wael
–Why put this on a 2.5-year-old thread… Anyways, I suppose make it interesting by suggesting any chips you can use to implement your ideas, and technically lay out some groundwork for an implementation… We can talk and talk and talk, but you have to walk eventually. If they don’t exist, then what is needed? The security principles I think can be tucked under “OPSEC” after years of study of “full spectrum digital security”; even digital security needs physical security, so it’s full operations…

Somewhere along the way the cost of this “mindset” or “lifestyle” needs to be mentioned b/c it is huge; and if being owned, and potentially even having your work corrupted (on an owned PC), doesn’t irk you much, then it may not be worth it at all.

Clive Robinson
but the designers try their best to make the system “look and feel” single core for programmers who have trouble thinking in terms of more than one thread of execution at a time.
–You say “parallel is the future”, but do you think that it may introduce way WAY more bugs and security vulnerabilities w/ more activity taking place at the same time? You can’t delay/isolate it out as well then. Especially when our hardware is becoming more and more “uninspectable”, and it is of course much more work to think in terms of multiple things happening at once (how do you definitively prevent coupling across those threads or processes?). And debugging capabilities are likely not present, or will be so buggy as to be dismissed immediately, so it’d be like stripping a HW designer of any PSPICE or MATLAB program and telling them to deliver something quickly; and I think SW/firmware people are forced to cover up some of the inadequacies of HW designs…but no, the HW designs are nearly “perfect” in your mind apparently.

Clive Robinson December 1, 2015 2:27 AM

@ Figureitout,

… and I think SW/firmware people are forced to cover up some of the inadequacies of HW designs…but no, the HW designs are nearly “perfect” in your mind apparently.

Err no, I know the exact opposite: ALL, and I do mean all, hardware designs are imperfect. Further, in many cases a large part of the hardware design is “shaking the bugs out”.

If you look back you will see I’ve mentioned “meta-stability” and how it cannot be “designed out”; you have to mitigate it instead.

Part of C-v-P not mentioned here is the mitigation techniques. In the past I mentioned that you could not build castles in the traditional way on shifting sand, but you could use other techniques to build a raft onto which you could build fortifications, which is how you end up with “Man-o-War” sailing vessels and modern battleships.

I also mentioned the ancient Greek story of how to deal with two guarded doors, where one guard always lies and one always tells the truth, and how it gave rise to “voting protocols” and how they could be used to mitigate failing hardware.

So say you are a SoC designer and you use third party macro libraries, and you suspect one may be deliberately back doored. You use several CPU core macros from independent libraries, say MIPS, ARM, SPARC etc., and put them on the same chip. You add voting logic and simultaneously present the same question to three different architecture cores. If one is lying, you not only know when it starts lying but which one it is.
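
The voting logic itself is tiny. A sketch in C of a bitwise 2-of-3 majority vote over the three cores’ outputs, with disagreement detection (a real design would do this in hardware):

    #include <stdint.h>

    /* 2-of-3 bitwise majority vote across three independent cores'
       results; *liar is set to the index of a disagreeing core,
       -1 if all agree, or 3 if no two cores agree at word level. */
    uint32_t vote3(uint32_t a, uint32_t b, uint32_t c, int *liar)
    {
        uint32_t result = (a & b) | (a & c) | (b & c); /* per-bit majority */
        if (a == b && b == c) *liar = -1;
        else if (a == b)      *liar = 2;
        else if (a == c)      *liar = 1;
        else if (b == c)      *liar = 0;
        else                  *liar = 3;
        return result;
    }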

The only problem is if the attacker has got to the three independent libraries, or the chip foundry somehow replaces the macros you use with backdoored macros, or modifies the voting logic. There are however ways to deal with this, based on the “Trusting Trust” issue with compilers.

Figureitout December 1, 2015 3:03 AM

Clive Robinson
Err no I know the exact opposite, ALL and I do mean all hardware designs are imperfect
–Wanted to hear you say it. And I didn’t believe the metastability bugs couldn’t be shaken out, if we’re to believe the theory we’re taught (the analysis methods I still think are insufficient, but they’re still pretty good…). It has to be possible eventually…

I generally assume the worst when it comes to attackers corrupting electronics by attacking where our chips and components are made (and compilers and any other programs compiled w/ those compilers). So I generally assume it’s all corrupted, I just don’t believe in widespread remote comms just yet. It’s getting close, but not yet.

There are however ways to deal with this based on the “Trusting Trust” issue with compilers.
–And I don’t trust this method, I don’t think it actually defeats the attack; where did the clean compiler come from? The attack is just too destructive, carrying it out truly risks destroying electronics…only psychopaths want that on their resume…it’s the f*cking end then…god…

Clive Robinson December 1, 2015 5:30 AM

@ Figureitout,

And I don’t trust this method, I don’t think it actually defeats the attack; where did the clean compiler come from?

Well in my case I have my own tool chains I’ve developed in the past.

You could have a read of appendix A.6 of “Compilers: Principles, Techniques and Tools” by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman; the copy I have handy has ISBN 0-201-10088-6, which is out of print and around 100 USD second hand, however there are newer paperback versions for students.

Basically the advice is: write your own stack-based calculator, add a “context free grammar” (in BNF) to it, and build the rules up bit by bit, adding the semantic, syntax and lexical layers in turn.

From experience I know that you want to make your own low-level abstract machine interpreter to start with, using the bare minimum of common maths and logic functions, branching and jumping (test and skip-over-jump is logically simpler than branches, but like BNF it can be hard on the mind). As you are only using it as a first step to porting and improving, the simpler it is the better. Thus this interpreter can fairly easily be written in most assemblers you are ever going to come across, so it acts like a stripped-down P-code or J-code machine.
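
A stripped-down sketch in C of such an abstract machine (the opcodes are invented for illustration; the point is that the whole dispatch loop fits on a page and hand-checks easily):

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal stack-machine interpreter: push-literal, add, print, halt. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    void run(const int32_t *code)
    {
        int32_t stack[64];
        int sp = 0;                           /* next free slot */
        for (size_t pc = 0;; ) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];        break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[--sp]);     break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const int32_t prog[] = { OP_PUSH, 2, OP_PUSH, 3,
                                 OP_ADD, OP_PRINT, OP_HALT };
        run(prog);                            /* prints 5 */
        return 0;
    }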

Of all the interpreters to port, a basic Forth is light on the computer, which is one of the reasons I suggested it to you in the past.

You can write a very simple tool chain with it, with the advantage that you can “walk the assembler code”; it also gets around a whole bunch of relocation and linker-loader issues as well.

Thus you have a “toe hold” to build up “good code”, including Pascal or C compilers. OK, they will be “Small C” vintage, but you can build better with them. If you can bear digging through the Byte site you will find the articles they did on what became Small C, and if you are lucky you might get a second hand copy of the Small C book. There are also various C compilers for 8-bit micros for which you can get the source and development documents.

There is even a home brew CPU where the hardware developer has done all of this including porting an OS.

Once you get the hang of doing this sort of port-and-build, you can get a new CPU up and running in about a week, plus however long it takes you to learn enough of the base assembler to be able to hand check the ROM code etc. for the first few steps of getting the interpreter up and running using cross platform tools.

The last time I did this was for a MicroChip PIC chip. The reason: although the chip was solid, the then-suggested tools were both expensive and buggy, and the third party tool developer was, I thought, “tardy in their responses” and “not very reliable in fixing problems”. Something I think you have experienced in the past with development kits… (Oh, and MicroChip for subsequent chips did their own C compiler by porting GCC, which makes me feel they felt their customers’ pain with the third party developer).

Now I know the “optimisation / resource efficiency” question is going to be in the back of some people’s minds. Personally my response is: forget about it, get the code working; if you find you are running short of memory or it’s too sluggish, then go buy a bigger/faster chip, unless you are going to go into FMCE-scale production.

Figureitout December 2, 2015 10:49 PM

Clive Robinson
Well in my case I have my own tool chains I’ve developed in the past.
–Yeah but it wasn’t entirely “from scratch” b/c it’s way too much work and essentially impossible for an individual. I told you I’ll try when I feel ready and semi-competent; now…no. Just copy. I can be way more effective w/ GCC and IAR doing MCU projects to build some of my temporary roots of trust. Right now I’m essentially “up in the cloud” as far as I’m concerned; and I’m going to have to do a major purge eventually to shake off malware I’m not sure about, which I’m not looking forward to.

And I don’t like Forth, not fun for me whatsoever to program in. I just don’t like it. And I prefer C over Asm of course. Hope to be done finally w/ my projects at work and make the company way more money than they pay me, looking forward to next ones, looks like another RF project w/ chip I’m pretty familiar w/. Another potential one was a Microchip one, definitely doing Asm too, importantly when I’m fresh and can come in the morning ready to go, not dead tired mentally and physically at 10-11pm. So if I get that one eventually, work w/ their toolchain more, and I’m saving my microchip OTP chips for some pretty strong security I have in mind, but the code needs to be ready to go and flash needs to be perfect. My mostly AVR MCU’s will be my next lines of defense though for now.

That book is online by the way…1000 pages…right now…no thanks I got other things to do. I checked out appendix A.6 and I’m not sure how to do that exactly besides dumb code (“if this, then that” type stuff). But I am worried about the fuse burning under my a$$ as computers keep getting worse from marketing-led features and essentially corrupted CPU’s, I can buy older PC’s on eBay but that’s really a losing strategy. Who knows what’s on that too…

And I say to hell w/ “optimization”, I mean I want the code I write, no matter how bad, not some “too smart for your own good” compiler optimizations. I could be debugging different binaries potentially, that’s just…a horrifying waste of time. What a perfect spot to backdoor the compiler too eh? No, that’s shady to me; don’t like it at all.

Clive Robinson December 3, 2015 8:14 AM

@ Figureitout,

Yeah but it wasn’t entirely “from scratch” b/c it’s way too much work and essentially impossible for an individual.

I’m not quite sure what you mean by “from scratch”. The first homebrew 6502 system I built I had to program with switches and buttons, having first written out the code longhand, then converted to hex, then binary. I would have used a Z80, but the early ones had a minimum clock speed so you could not switch-n-button program them.

A little later I used a Z80 system to develop code using a subset of its instructions that allowed easy conversion to compatible 6502 instructions, as it was faster, but a big chunk was still done by hand.

When I got my hands on the Apple ][ I used their assembler to develop code, but I still had to hand cut it at hex level, including calculating checksums, to make ROM images to run on other 6502 systems, which meant every byte went under my eyeball…

That is how things happened in the 70s/80s, so I was not some kind of abnormal nut or superhuman; it’s just the way it was, due to the likes of “professional kit” easily costing three times an engineer’s annual take-home. Oddly for these days, you can still find books from the likes of Newnes that have “PIC PCBs” in the back along with a MicroChip CDROM so you can build your own programmer, so there are still some very poor enthusiasts out there.

Thus I had to build my own tool chains, and yes they were very rough to start out with (but when every problem was a nail…). But the tools got smoother with use and time. Importantly though, I realised it was important to always use a limited “common core” subset of all assembler instructions in the tools, to make cross-CPU porting much easier.

And I don’t like Forth, not fun for me whatsoever to program in.

No it’s not, but you have to get from human norms like ‘infix’, with all its problems, to something easy for the CPU, which is context-free BNF postfix on stacks, if you want flexibility. Humans are lazy and ambiguous, with all sorts of hidden assumptions, and infix fits in well with that. It’s why the first compilers spent between 15-25 man years on the lexical analysis, before people started to wake up to the fact that it is easier for humans to think like computers than it is to make computers approximate how humans think (a task that arguably, after tens of thousands of man years’ work, still has not happened).
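
A minimal sketch in C of that infix-to-postfix step, limited to single digits, ‘+’ and ‘*’, and no parentheses (the function name is invented):

    #include <stdio.h>

    /* Minimal shunting-yard sketch: converts single-digit infix with
       + and * (no parentheses) to postfix, e.g. "1+2*3" -> "123*+".
       Illustrates why postfix is so much easier for the machine. */
    void to_postfix(const char *in, char *out)
    {
        char ops[32];
        int top = 0, o = 0;
        for (; *in; in++) {
            if (*in >= '0' && *in <= '9') {
                out[o++] = *in;
            } else {                       /* operator: '+' or '*' */
                int prec = (*in == '*') ? 2 : 1;
                while (top > 0 && ((ops[top-1] == '*') ? 2 : 1) >= prec)
                    out[o++] = ops[--top]; /* pop >= precedence ops */
                ops[top++] = *in;
            }
        }
        while (top > 0)                    /* flush remaining operators */
            out[o++] = ops[--top];
        out[o] = '\0';
    }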

The thing with C is that although it’s arguably a high level platform-independent assembler, it’s really nothing more than a quite poor code converter (try writing a large-number maths library in it to see why). Further, aside from recompiling the compiler, changing the way it produces low level (asm) output is not easily possible. An interpreter that can self-interpret, like Forth, is easier to tweak or make changes to on the fly. It’s also fairly easy to embed in other programs, efficiently in both space and code execution. So there is quite a lot going for it as a core tool, rather more so than the likes of tcl, Perl, Python etc.

As you note, C is not what C once was; it’s turned into an unwieldy monster that bites the unwary handler savagely. It’s also very inefficient for low level work, and getting more so, with people trying to make it “easy to use” for people who really should use other, much higher level languages that are type safe, lack pointers and have worthwhile automatic memory allocation and garbage collection.

The other issue is the standard libraries, which “are all things to all people” and don’t really work well. The input/output stdio is stuck in a past that was a known bad way to go before the idea of the library was even thought of.

Personally I prefer to give C a miss, working either “at the metal” in a good macro assembler, or in a more task-appropriate higher level language where code cutters don’t have to “work above their pay grade” with pointers, hand built data types and memory management, should they be called on to upgrade/maintain the code. Oh and turn the damn optimization off if you are working on the lower end MCUs; with a little practice you will be able to write more efficient asm. Before that, though, when debugging, optimized C can leave you with a mental state close to that which torturers want to induce before asking you questions. As for the likes of high end CPUs, spend the money and get the chip manufacturer’s own compiler; whilst GCC has its advantages, it cannot be all things to all CPUs, especially when the chip manufacturer wants to keep some things undisclosed.

Which brings us to,

But I am worried about the fuse burning under my a$$ as computers keep getting worse from marketing-led features and essentially corrupted CPU’s

Yup, Intel are one of the worst for CPU corruption; some cannot start up correctly unless you load a bunch of microcode corrections at boot time, not, as they say, “confidence building”. Much of the problems are down to trying to squeeze the code to get around the on-chip / off-chip memory bottleneck. It’s one of the reasons I say the old CISC large-core-memory model has hit a brick wall; there’s only so far you can go with compressing instructions by making them not just more complex but vastly more numerous. The human mind cannot keep up, thus you have to use the chip manufacturer’s compiler to stand a chance.

And if you are going to do that, go for the highest level language you can, and give C/C++ et al the old heave-ho over the side to oblivion.

Which is another aspect of “the future is parallel” reasoning. Humans are “serial thinkers” when it comes to task decomposition. The problem is that they are also badly scope-limited as well. It’s why you see in-house programming guides that say “program blocks no bigger than 100 lines” and similar, such as “do not nest more than four levels”.

What humans need to do is “sweat the small stuff” and the “overall function”, and let the tools sweat the big stuff in between; that way the human can think big within the limitations of their scope, think serial within manageable small tasks (threads/objects), and let the tools take care of optimising for memory and parallel CPU issues and all the messy signals etc. Yes, there are people who can not just think that way but also do so securely, but they are rarer than “hens’ teeth” and, how do I put it tactfully, kind of unaware of most things others consider important (like ironing clothes, colour coordinating clothes, selecting the right clothes for gatherings, remembering to eat from one day to the next, oh, and the one you mention, getting sleep overnight). I’ve been known to suffer from one or two of these issues as well (if you believe “she who must be obeyed” 😉 But I have not gone down the Einstein route of seven identical sets of clothes, and I have stopped eating pot noodles out of my tea mug and forgetting I have before making a cuppa in it… giving some interesting flavour combinations, which can take a while to notice when you are busy thinking 😉

Figureitout December 5, 2015 10:06 PM

Clive Robinson
–By scratch I mean discrete components at least, completely. Not programming a chip (though that’s what I’m going to be doing for a while, b/c it’s fast, flexible and most fun…). Still, what you describe I want to do someday (and have enough foresight to see bugs before I make/enable them at those levels). Importantly, not just my own toolchain, but my own computer, as much as I can build myself, to run that toolchain on (getting completely off Wintel x86 as dev PC) so I can disregard whatever terrible trend the PC market is taking…I’m going to need a bunch of other parts ready to go (but still have that nasty taste/smell of backdoored PCs..).

I’m just getting more and more tastes of the bugs that can happen, and why everyone building their own isn’t more common. For instance, this is general enough I think I can tell. Thought we had a bricked board; some weird bug going from “programming” to “debug” mode triggered it (for the record, “it wasn’t me” :p). I generally don’t have the time, it’s “crunch time” now (doesn’t help that this was the only board we had left for the time being…fckin’ of course…), to go off on little debugging crusades when a bug is not my fault and the tools are failing me. Turns out, some noise generated by the speed of the comms somehow flipped some important bit that shut down comms. That’s…fck that sh*t. The solution was just to slow down the comms, which must’ve created noise in a different band that didn’t matter as much. I’ve learned real quick compared to couple years ago, don’t try to hit it out the park from the get-go; slow it down, basic functionality, then slowly add. Sounds like common sense until you’re thrown in the lion’s cage or just confronted w/ some massive “thing” and don’t know where to start tearing it down.

RE: Forth
–I wish I liked it, just don’t. I don’t see myself doing much w/ it, nothing really draws me to it. pForth maybe someday. I’d just make a bunch of C functions and convert as much as possible.

Oh and turn the damn optimization off if you are working on the lower end MCUs
–It’s highly recommended not to for ATtiny, or even AVR’s, and I don’t know why. I had some nasty backwards-compatible code I didn’t know how to recreate (very risky) that would fail when I turned it off. Resetting system clock speed seemed a bit too risky to me, didn’t want to risk bug crusades w/ the house of cards code. Don’t like it.

some cannot start up correctly unless you load a bunch of microcode corrections at boot time
–Similar thing compiling for newer chips, I don’t know what changes they make to compilers to “port”. Each chip will have a tweaked linker file of course, and custom memory maps. Electronics is fragile as hell, always will be.

OT question/idea: Have you ever taken say a SoC or larger MCU and blown out a bunch of pins you don’t need? I would place this in the “mitigation” category, if our chips are going to continue to have way too much sh*t I don’t want or need. If it doesn’t destroy some needed functionality, would this at least reduce noise issues? Then you could just check for open circuits to have some good assurance nothing’s passing?

Clive Robinson December 6, 2015 7:34 AM

@ Figureitout,

    By scratch I mean discrete components at least, completely. Not programming a chip (though that’s what I’m going to be doing for a while, b/c it’s fast, flexible and most fun…).

If you mean at the ‘transistor’ level, then you will have to make it a ‘Serial CPU’ and the clock will be at best in the low MHz range. Thus you would be looking at best at a four to twelve bit equivalent RISC design, which you would then have to augment with a lot of memory to get a useful instruction set decoder.

Using TTL chips is a nice idea, but you just cannot get the ALU and Register File chips these days. Whilst you can fake a four bit “bit slice” using a fast 8-bit ROM chip, for much greater speed and a lot less money, the hidden problem is the Register File. I don’t know what you know about “Dual Port RAM”, but most memory chips you can get hold of are single port, and that means extra steps in the Register Transfer Language, which can mean extra clock cycles for every register transfer. And as is usually the case on simple micros, the program counter gets updated by the ALU as a register, so every instruction is going to be slowed down by as much as 50%. The solution: most CPUs these days have an independent “adder” for the PC, as this adds a whole load of tricks, including a “poor man’s MMU” or “segmented” memory access model, and fits in with “pipelining” the CPU design.

It’s why I started looking at “voting protocols” and still-available single chip CPUs. But you will find that even these old chips are starting to become scarce, as they are dropping lower than ten for the dollar at auction, and the gold content of these old devices pays a lot better than that if you crush-furnace-refine. So you end up back at single CPU microcontrollers that are around a dollar each new, and with careful shopping, PALs like the 22v10 and some other “registered” PALs for the voting “glue logic”. You can of course use a very fast (100MHz clock) microcontroller to do the voting protocol, and use the “on chip” memory of the other microcontrollers as “local” memory to implement your “interpreter”, making it as “high level” as you can, which brings you around to the Prison concept…

The simple fact of life these days is “use what’s mainstream in the DigiKey Catalog”, even if you buy it elsewhere. Because things go wrong, not just during development but in the future, you will want your own collection of “spare parts” “On Your Shelf”.

So you also need to think about “moving up the stack” software-wise. That is, don’t think of writing “application code” in low level languages like C or many others, but use the “Rapid Development shell scripting” idea. That is, instead of individual ASM instructions or C lines of code, think in Subroutines, Threads, Objects or Tasks, and how you would make them more general purpose “MiniApps” running on the voting CPUs, such that you only need to vote on results, not intermediate steps. Likewise, consider your System Core Memory as “IPC” only, for passing results in the “shell script” pipeline.

Obviously you will have to write at the ASM or C level to come up with the general purpose MiniApps, but you have an advantage here in that you can leverage most of the *nix apps and libraries.

To get a good idea of what MiniApps would be most useful, get into basic “parallel computing” books to see what experience has taught others.

Oh, one thing: pass all IPC data between MiniApps / objects / tasks as human readable “Formatted 7-bit ASCII Strings”. Although slower for many things, it will make the development model easier, the voting easier, and any other additional “choke point” security easier, therefore faster to develop and more secure in implementation.
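
A sketch in C of such a choke point, assuming an invented “verb value” line format (the names here are mine, not any standard):

    #include <stdio.h>
    #include <string.h>

    /* All IPC between MiniApps is formatted 7-bit ASCII lines, so the
       choke point can parse, check, and log every message in one place.
       Malformed or non-ASCII input is rejected: default deny. */
    int parse_msg(const char *line, char *verb, size_t vlen, long *value)
    {
        char buf[32];
        for (const char *p = line; *p; p++)
            if ((unsigned char)*p > 0x7F)
                return -1;                /* not 7-bit ASCII */
        if (sscanf(line, "%31s %ld", buf, value) != 2)
            return -1;                    /* malformed message */
        snprintf(verb, vlen, "%s", buf);
        return 0;
    }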

If you have a think about it, it’s a bit further down the line from your,

    I’ve learned real quick compared to couple years ago, don’t try to hit it out the park from the get-go; slow it down, basic functionality, then slowly add.

Provided everybody “follows the rules” then when it comes to,

    … just confronted w/ some massive “thing” and don’t know where to start tearing it down.

You can “check the choke points” and use simple “divide and conquer” debugging to isolate the fault. It took me a number of years to understand the unsaid things behind “The fault’s leaving here OK”. There are too many people who know enough to be truly dangerous, due to their Dunning-Kruger limitations, and the fact that management[1] “only see lines of code written” or “speed to sign off”, not the hours spent maintaining and correcting the crap produced[2].

If you want one of the secrets to getting towards the “robust code” and reuse and all the other “good stuff” management really want but don’t know how to ask for, it’s actually not that difficult to find (as @Nick P has shown, the information is out there and findable). Like many things, the secret is ruthless simplicity and common sense rules that have been time tested.

One way is based on the following, which is almost self evident in the first steps:

Firstly, well defined “fool proof human readable message passing” APIs acting as the “choke point” connections into a well found framework. The individual frameworks should have no more than “a depth of three” to keep things within “normal human abilities”. You should never have more than two people working on any given “plug-in” in a framework, preferably in a “Captain-Crew” role, where the captain deals with the external interfacing and the crew does the internal plug-in work to meet the interfacing (think of the plug-in as a tasklet or object). Obviously a captain can have more than one crew member working under him, but there should only be one person in charge of the individual framework. Importantly, the captain and above need to be absolutely ruthless about the specification for the APIs, to keep them well defined, robust, and as simple as effective use allows, ideally using existing standards or a clearly defined subset of them.

Secondly, you decompose problems into a nested series of frameworks, and you ruthlessly stick to “chains and trees” and avoid both feedback and feedforward wherever possible. However, you likewise make sure that errors and exceptions can be passed right back to the beginning, no matter what happens downstream, as it’s the only way to ensure data does not get lost or corrupt the system so it crashes and burns.

Many application programmers think they don’t need to follow the rules because they are somehow better in some way, or the application is a toy or just for personal fun. It’s a trait you tend to see a lot less of in the generally better qualified engineers. The reason, apart from better training where it counts, is the “dangerous machines” reasoning. Most people know you don’t walk up to a machine you have no idea how it works and either “play with it” or “hope for the best”; it’s why we say “You’re playing with fire” when people do. Further, engineers know that once something is made, in general somebody with Dunning-Kruger will, despite all the warnings, come along and play with it or repurpose it. Thus they tend to be taught, and follow, certain rules about the likes of “least surprise” and following “Established Design Rules” and “Standards”, not least because they are taught the consequences of “Consequential Product Liability” on their personal finances and futures. Application programmers, however, have for years lived in a rarefied atmosphere where “Liability” of all forms can be “waved away with a licence”. Too late, some are waking up to the fact that legislators and judges don’t see that a piece of paper –if it even exists– can wave away basic torts and criminal activity, or for that matter basic contractual rights with regards to copyright.

Moving on 😉 I can appreciate that you don’t like Forth, but the fact remains it has many advantages when writing the interpreters that ultimately form how CPUs work (look up RTL and microcode). Getting to grips with BNF and stacks is something you will have to do if you want to design your own CPU, and Forth is one way into that thinking. But Forth has other advantages; for instance, you don’t have the issues of code relocation that OSs try to hide behind MMU physical-to-logical remapping. Something you will have to get to grips with if you want your system to be more than single tasking without overly complicated linkers and loaders in your tool chain. Further, Forth has an advantage over most other languages: its threaded implementation makes it not just memory efficient but minimises information transfer, and often improves execution speed, which gives you one heck of a lot more bang for your buck. You don’t need to go Forth to get these advantages, but you would be hard pushed to find them all together elsewhere, and developing your own solution with them all is quite an uphill effort. You could for instance have a look at some P-Code or J-Code interpreters, but you probably won’t find MMU-less operation in a multitasking environment supported.
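
To show what “threaded implementation” means in practice, here is a toy direct-threaded inner interpreter in C (function pointers stand in for the machine-address lists a real Forth uses):

    #include <stdio.h>

    /* A "word" is just a list of function pointers executed in
       sequence: the heart of a Forth-style inner interpreter, and
       why it is so compact and portable. */
    typedef void (*word_t)(void);

    static int stack[16], sp;

    static void push2(void) { stack[sp++] = 2; }
    static void push3(void) { stack[sp++] = 3; }
    static void add(void)   { sp--; stack[sp-1] += stack[sp]; }
    static void print(void) { printf("%d\n", stack[--sp]); }

    int main(void)
    {
        /* the "compiled" program: just a list of addresses to call */
        word_t prog[] = { push2, push3, add, print };
        for (size_t i = 0; i < sizeof prog / sizeof prog[0]; i++)
            prog[i]();              /* the whole inner interpreter */
        return 0;
    }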

Whatever you do, the one thing you will find most useful is that an interpreter, for all its supposed failings, gives you independence from the underlying instruction set of the CPU, which makes a whole load of problems disappear just like magic. In the past I’ve written a simplified interpreter in the BIOS, giving me a uniform interface onto which I added a micro-OS, which meant I could port embedded systems with little effort when profit or client dictated a move from one microcontroller family to another.

Speaking of porting on microcontrollers, it usually requires turning off optimisations so you can actually see what is going on at the CPU level. Which is just one reason why I mentioned it; thus reading,

    It’s highly recommended not to for ATtiny, or even AVR’s, and I don’t know why.

makes me think there is something wrong at the CPU level that the manufacturer is covering up for some reason. It might be perfectly valid, but to know that, you have to get to the bottom of it, and that could be very wasteful of your time. Thus consideration should be given to finding another platform.

Finally,

    OT question/idea: Have you ever taken say a SoC or larger MCU and blown out a bunch of pins you don’t need?

I did experiment, but it’s got many disadvantages for the few advantages you get. A better path is wire cutters / Dremel: cut the unwanted pin right back so it’s flush with the packaging, then use “quartz / ceramic loaded” epoxy and encapsulate it.

The problem with blowing out the pins is that you don’t know where it’s going to “fuse open circuit”, or what it’s going to do to the lifetime of the chip. After all, at the very least, when a fuse melts O/C the metal has to go somewhere… There is also the consideration that the manufacturer may be relying on the pin being there for some reason, such as electrical stability or heatsinking, both of which have been true in the past for “RF parts”.

But you further need to consider: an attacker who can get through drill-resistant loaded epoxy can just as easily “de-cap” the chip and probe out the connection on the other side of the open circuit.

Tamper resistance is a major subject in its own right, and usually the best way to go is using a certified Hardware Security Module.

I know @Thoth has just posted on this and would probably give you some “current product” advice and other thoughts and reasoning he has on the subject.

[1]http://ralphbarbagallo.com/2013/05/01/how-to-know-if-you-are-suffering-from-dunning-kruger/

[2] However, if you see it in either programmers or engineers without good reason, then defenestration is possibly the best way to go with them. The more socially acceptable way is “correct training”, but you need to consider a couple of things. Firstly, the person’s ego/self-belief can be insurmountable; secondly, the four levels of the “Hierarchy of Competence”: climbing that mountain requires skill, time and dedication from them, otherwise they will fail to summit and the cost of their journey will have been entirely wasted. I’ve seen this happen in universities, where self-funding students are given advice based more on keeping them paying the faculty than on any hope they will get good grades etc. Look at it this way: five years to get a 2:2, which is at best the equivalent of a qualification that can be done in a year and costs 2/3rds or less of one year of the degree course, is quite a financial difference…

Nick P December 6, 2015 8:57 AM

@ Clive Robinson

re compilers

I think the “Dragon Book” is a bit much for a beginner. I know because I started with it, understood compilers a little better, and then used other books to build mine. 😉 It’s a good advanced work for people who already have pieces of the topic under their belt.

Not sure what’s ideal for beginners, but Wirth’s Compiler Construction is a nice start. Plus, it sets one up to understand the compilers used in the Oberon System. That sets you up to understand actual, implemented systems and their tool chains. So, the person wanting ground-up knowledge with an ALGOL-like language should probably start with Wirth and his collaborators’ work.

re building blocks

Yeah, the hardware is getting hard to find. It’s why I’m recommending open-source FPGAs as a solution. However, for those liking visual inspection, one could do an open-source FPGA on 350-500nm with hard blocks for some common I/O, and likewise TTL, PAL, whatever chips. The multi-project wafer costs for 350nm are so cheap now that even a low-end IT worker can afford them. The kind of hardware professional who can build the LUTs and so on will be making enough to do half a dozen runs. 😉

They might also consider doing it asynchronously. I know you don’t like asynchronous logic that much but all real-world uses of it by academia worked out great. Reason I bring it up is that it makes first-pass silicon & integration of components very easy. A budget operation would probably appreciate shaving off several iterations worth of mask & fab costs. The power advantages are becoming more important, too.

Figureitout December 7, 2015 2:01 AM

Clive Robinson
make it a ‘Serial CPU’ and the clock will be at best in the low MHz range
–That’s completely fine for me. I get unbelievable functionality w/ 16MHz chips (atmega).

don’t know what you know about “Dual Port RAM”
–Not much, but I know I wouldn’t want it in my homebrew. Speed’s on the lower end of the totem pole, and I would prefer avoiding “risky tricks” when possible.

I’ll take all your suggestions into consideration when the time comes, thanks as always (goes w/o saying).

And on the pin blowing: just a silly idea, if a market that values fashion over function becomes such sh*t I can’t get the solid/reliable scaled-down chips I want. Know it’d be risky, as there’s generally a SPI bus too that goes through almost the entire chip, eh? Well, slightly curious: for instance, when I blew a motherboard it’d still boot, well…turn on a fan and sound like booting, so I wonder what protected that part (I’m not up for that challenge though, and definitely don’t have the tools even if I knew what I’m looking for). Plus a backdoor would likely electrically protect itself, or be such a leech it’d take down its host as well if you kill it…Tamper resistance and complicating EMSEC attacks was just a potential side benefit; the main thing was destroying sections or features of a chip such that they won’t work w/o further physical tampering.

Guess you can say I feel the same way about unnecessary backdoor crap in chips as you do about application programmers. :p

Figureitout December 8, 2015 12:41 AM

thevoid
–The first few pages were great of that (by page 30 something f*cky happens then gets normal again after a few pages…can’t believe people digitize all these books lol). Many compiler texts I’ve glanced at were all theory. Gets most interesting around 408 and then appendix lol. But you know…making a compiler using C seems like cheating lol.

thevoid December 8, 2015 3:07 AM

@Figureitout

–The first few pages were great of that (by page 30 something f*cky happens then gets normal again after a few pages…

yeah, it seems to be enlarged, just reduce the size, and all the info seems to be there. that one’s not a big deal.

the only way to verify the integrity of some books is to go thru the whole thing. in one book i have, page 373 was replaced by a duplicate 273…

i always do a check to make sure the book is all in general order though. the table of contents gives you an idea of what should be there, and you can use the numbered pages ie goto page 1, jump 99 pages to where page 100 should be, then another 100, etc, and some random page checking.

it’s a problem with some of the scanned books by archive.org & google. they’re not always the best quality. some pages the scan got distorted or are missing completely. thankfully for some of those books there are scans from different libraries, which are unlikely to have the same errors. one book i could read only by going back and forth between two versions, both were pretty screwed up, but between them…

unfortunately you don’t know which books are screwed up, and how, until you actually read them.

can’t believe people digitize all these books lol).

i have been quite surprised myself by what people have digitized. in the end, i’m just glad they do, errors and all. it’s better to have most of a book with a few pages missing than no book. though missing pages do bug the hell out of me with what i may be missing. because of that missing page 373 i mentioned, i never did get to know what Ben Franklin’s thoughts/observations on currency were.

Many compiler texts I’ve glanced at were all theory.

i find that to be a problem with most science books. too many books have theories, without the evidence/experiments. i want to know how those theories came about, not merely blindly accept authority.

i was always told science was supposed to prove things…

“Chemistry of the Elements” (Greenwood, Earnshaw) is one of the few books i’ve found that explicitly followed a fact-based approach. one of their stated justifications was that “facts change less often than theories, with the caveat that some facts are theory-laden.” if only more scientists were scientific.

Gets most interesting around 408 and then appendix lol. But you know…making a compiler using C seems like cheating lol.

or circularity, compiling a compiler with a compiler…

Clive’s already detailed the non-cheating method, but sometimes you gotta take shortcuts in some things– if you ever intend to get anything done. (i know you know that already though.)

Clive Robinson December 8, 2015 8:09 AM

@ Figureitout,

making a compiler using C seems like cheating lol.

Do you mean “using C” as “using C’s compiler” or “using the C source code”?

There is a lot of wrigle room on the line between the two, and that is quite important.

Just using the compiler without any other steps is the least secure end of that line and what the thoughts on trusting trust was all about.

At the other end of the line is using the C source code: simplifying it to one expression per line, then, using that as the comments, hand writing the ASM hex codes alongside. Then, using cut and paste etc., break out the hex code on its own, hand move it into 16-byte lines, and add the addresses and checksums by hand to make a ROM in Motorola format or a COM image. If all is well, you then have a quite basic but trustworthy C compiler you can use to recompile itself with the next level up. Rinse and repeat a few times, and lo and behold, you have a fully standards compliant C compiler.
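
As an illustration of that “hex alongside” step, here is what “c = a + b;” might look like hand-assembled for a 6502, with the variables at invented zero-page addresses:

    #include <stdint.h>

    /* Hand-assembled ROM fragment for "c = a + b;" on a 6502,
       with a at $10, b at $11, c at $12 (addresses invented):
       CLC; LDA $10; ADC $11; STA $12; RTS */
    static const uint8_t rom_add[] = {
        0x18,             /* CLC        clear carry before the add */
        0xA5, 0x10,       /* LDA $10    load a                     */
        0x65, 0x11,       /* ADC $11    add b (carry clear)        */
        0x85, 0x12,       /* STA $12    store result into c        */
        0x60,             /* RTS        return                     */
    };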

You can pick various points along the tool chain for your start point. For instance, you could use an existing compiler pre-processor and lexical analyser to pop out an ASM listing with the original C source code alongside. You could then do your hand coding from that point.

But the point about the reflections on trusting trust was that it was a high-level, in-the-tool-chain attack that relied on the compiler recognising certain function calls etc. So you could write your own assembler in the C compiler tool chain, and a limited loader (no linking to dodgy code 😉.

So I would argue that using the C source code as an example to build your own secure compiler, sufficiently low down in the tool chain, is quite an acceptable way to go.

As the old saying goes, “Rome was not built in a day”, with the rider that somebody had to hand make the first bricks to build the first brick works, that made the better bricks to build a better brick factory… This “pull yourself up by your bootstraps” method was what gave us Science, Engineering and the Industrial Revolution, long before Babbage and Ada Lovelace had their intellectual meeting of minds that –supposedly– laid out the plans for the computer revolution of the latter half of the twentieth century.

As Newton remarked “He stood on the shoulders of giants” [1] which many others have done both before and afterwards and you can join them.

[1] Supposedly this comment was an acknowledgment of those that had gone before, not a sarcastic comment at those who he felt had held him back by not making their experimental results available to him for years (which alas appears more likely).

Nick P December 8, 2015 8:38 AM

@ Clive Robinson

You beat me to it. One can use untrusted tools as a basis for further understanding of trusted specs or code. The check against them is to do some of the analysis or execution traces by hand on paper. There’s also the diversity approach of using many different device & OS combinations with the same interpreter or compiler onboard.

Security of optimizations aside, I think compilers are one of the easier things to DIY for security, given it’s a batch process, uses untrusted input just once, requires few language-level primitives, and allows equivalence checking as a verification method.

A TLS library or something would be a tad more complicated. 😉

Figureitout December 8, 2015 9:31 PM

thevoid
–Think I’m running low on RAM lol and will have to reboot soon. Some .pdf’s don’t play nice in older iceweasel, definitely got my CDROM spinning though and browser frozen on that page. :/

Yikes on the 273/373 thing…kinda weird. Agree on the theory not being backed up w/ evidence, that irks me to no end. I’m pretty extreme to the other end such that I want to try something first, and if I like it, research it more.

Yeah he “detailed” (still a lot missing, knowing him he probably wants me to get stuck on some of the hard spots), and it wasn’t truly “from scratch”. It really gets me stuck, making a toolchain truly “from scratch”, you use other computers and code.

But oh yeah do I know that and I’m 99% sure I’m not good enough to not “cheat” and there’ll still be that little ever-evasive “dark spot” trying to fully realize my homebrew. Doesn’t help that I don’t find compiler work much fun, just want it done and to use w/ no errors.

Clive Robinson
–How about all of the above? I cut off the “from scratch” line at discrete components b/c debugging issues due to crap manufacturing doesn’t make me feel “like a man”, more “like an idiot” lol. And banging in binary, which still needs circuits to execute and safely handle the logic, so you need to do that too…not using a pre-made chip.

I already know I’m not doing that; doing some asm, and using a ’scope to verify (I haven’t heard/seen enough failing scopes to be overly concerned about that, but yeah, still a risk of it not catching backdoor behavior). So yeah, I’ll be cheating too. Probably using Lex and Yacc, or some other nice starting point that isn’t “from scratch”; there’s always cool projects popping up on HN.

Have you ever seen the movie “Antz”? Well there’s a part where they need to get out the flooding colony (the military guy almost killed the whole colony except for a few of his crony friends, “for the good of the colony”), and everyone stacks up really high. I’d be somewhere around the top wishing I could carry more weight or be the one w/ my feet on the ground lol.

Anywho, for now I’ll be making my own little “shield” which will be my first PCB. But not before a proto I can show everyone here! Got a little more work though…you know how delays go in engineering lol…bleh
